visionOS

Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,228 Posts
Post not yet marked as solved
2 Replies
127 Views
Our app needs to scan QR codes (or use a similar mechanism) to populate it with content the user wants to see. Is there any update on the availability of QR code scanning on this platform? I asked this before but never got any feedback. I know that there is no way to access the camera (which is an issue in itself), but the system could at least provide an API to scan codes. (It would also be cool if we were able to use the same kind of codes Vision Pro uses for detecting the Zeiss glasses, as long as we could create them via server-side JavaScript code.)
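If you can obtain an image by some other route in the meantime (a user-provided photo, a frame streamed from another device), the Vision framework can already decode QR codes from still images — a minimal sketch, not a substitute for live camera scanning:

import Vision
import CoreGraphics

// Decode the payload strings of any QR codes found in a still image.
func decodeQRCodes(in image: CGImage) throws -> [String] {
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.qr]
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return (request.results ?? []).compactMap { $0.payloadStringValue }
}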
Posted by waldgeist. Last updated.
Post not yet marked as solved
1 Reply
103 Views
Dear developers, now that we have played with Vision Pro for three months, I am wondering why some features are missing on Vision Pro, especially since some of them seem very basic/fundamental. So I would like to see if you know more about the reasons, or to be corrected if I'm wrong! You are also welcome to share features that you think are fundamental but missing on Vision Pro. My list goes below:
(1) GPS/compass: cost? heat? battery?
(2) Moving image tracking: is processing the surrounding environment already too computing-intensive?
(3) 3D object tracking: looks like it is only supported on iOS and iPadOS, not visionOS.
(4) No application focus/pause callback: maybe I'm wrong? But we were not able to detect whether an app has been put in the background or brought to the foreground so we could invoke a callback (see the sketch after this list).
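On item (4): SwiftUI's scenePhase environment value should report exactly these transitions (a minimal sketch, assuming the SwiftUI lifecycle; the print statements stand in for whatever callback you need):

import SwiftUI

struct LifecycleAwareView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Hello, visionOS")
            .onChange(of: scenePhase) { _, newPhase in
                switch newPhase {
                case .active:     print("brought to foreground")
                case .inactive:   print("temporarily paused")
                case .background: print("put in background")
                default:          break
                }
            }
    }
}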
Posted by jjjjjom. Last updated.
Post not yet marked as solved
1 Reply
91 Views
I've created a fully immersive visionOS project and added a spatial video player in the ImmersiveView Swift file. I have a few buttons in a separate VideosView Swift file on a floating window, and I'd like to switch the video playing in ImmersiveView when I click a button in VideosView. The video player works great in ImmersiveView:

RealityView { content in
    if let videoEntity = try? await Entity(named: "Video", in: realityKitContentBundle) {
        guard let url = Bundle.main.url(forResource: "video1", withExtension: "mov") else {
            fatalError("Video was not found!")
        }
        let asset = AVURLAsset(url: url)
        let playerItem = AVPlayerItem(asset: asset)
        let player = AVPlayer()
        videoEntity.components[VideoPlayerComponent.self] = .init(avPlayer: player)
        content.add(videoEntity)
        player.replaceCurrentItem(with: playerItem)
        player.play()
    } else {
        print("file not found!")
    }
}

Buttons in the floating window from VideosView:

struct VideosView: View {
    var body: some View {
        VStack {
            Button(action: {}) { Text("video 1").font(.title) }
            Button(action: {}) { Text("video 2").font(.title) }
            Button(action: {}) { Text("video 3").font(.title) }
        }
    }
}

In general, how do I control the video player across views, and how do I replace the video when each button is selected? Any help/code/links would be greatly appreciated.
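One common pattern (a minimal sketch; PlayerModel and the resource names are hypothetical) is to own a single AVPlayer in a shared observable model, inject it into both views with .environment, and let the buttons swap the player's current item:

import SwiftUI
import AVFoundation
import Observation

@Observable
final class PlayerModel {
    let player = AVPlayer()

    // Replace whatever is playing with the named bundle resource.
    func play(resource: String) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: "mov") else { return }
        player.replaceCurrentItem(with: AVPlayerItem(url: url))
        player.play()
    }
}

struct VideosView: View {
    @Environment(PlayerModel.self) private var model

    var body: some View {
        VStack {
            Button("video 1") { model.play(resource: "video1") }
            Button("video 2") { model.play(resource: "video2") }
            Button("video 3") { model.play(resource: "video3") }
        }
    }
}

ImmersiveView would then build its VideoPlayerComponent with model.player instead of a locally created AVPlayer, and the App would attach .environment(model) to both the window and the immersive space so they share one instance.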
Posted. Last updated.
Post not yet marked as solved
0 Replies
65 Views
I'm trying to better understand how to 'navigate' around a large USD scene inside a RealityView in SwiftUI (itself in a volume on visionOS). With a little trial and error I have been able to understand scale and translation transforms, and I can have the USD zoom to 'presets' of different scale and translation transforms. Separately, I can also rotate an unscaled and untranslated USD, and have it rotate in place 90 degrees at a time to return to a rotation of 0 degrees. But if I try to combine the two activities, the rotation occurs around the center of the USD, not my zoomed location. Is there a session or sample code available that combines these activities? I think I would understand relatively quickly if I saw it in action. Thanks for any pointers available!
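Rotating about an arbitrary pivot is usually a matter of composing the rotation with translations around it (a minimal sketch; the function and parameter names are made up for illustration):

import RealityKit
import simd

// Rotate `entity` by `rotation` around `pivot`, a point in the entity's
// parent space: new position = pivot + R * (position - pivot).
func rotate(_ entity: Entity, by rotation: simd_quatf, around pivot: SIMD3<Float>) {
    var transform = entity.transform
    let offset = transform.translation - pivot
    transform.translation = pivot + rotation.act(offset)
    transform.rotation = rotation * transform.rotation
    entity.transform = transform
}

Passing the current zoom target as the pivot should make the 90-degree steps happen around the zoomed location rather than the USD's center.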
Posted. Last updated.
Post not yet marked as solved
0 Replies
36 Views
Hello, technologists! I want to take a screenshot in visionOS, but I don't know which API is applicable. I have tried RPScreenRecorder, but it doesn't work. And UIScreen is not supported on visionOS.
Posted by humilezr. Last updated.
Post not yet marked as solved
0 Replies
55 Views
WindowGroup {
    SolarDisplayView()
        .environment(model)
}
.windowStyle(.plain)

Why is the code above correct, while the code below reports an error? How should I modify the following code?

WindowGroup {
    SolarDisplayView()
        .environment(model)
}
.windowStyle(model.isShow ? .plain : .automatic)
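A likely explanation (inferred from the SwiftUI signatures, not from documentation of this exact case): windowStyle is generic over one concrete WindowStyle type, and .plain (PlainWindowStyle) and .automatic (DefaultWindowStyle) are different types, so the ternary cannot type-check — the style is fixed at compile time rather than chosen at runtime. One workaround sketch (the scene ids are hypothetical) is to declare two scenes with fixed styles and open whichever one you need:

import SwiftUI

var body: some Scene {
    WindowGroup(id: "plain") {
        SolarDisplayView()
            .environment(model)
    }
    .windowStyle(.plain)

    WindowGroup(id: "automatic") {
        SolarDisplayView()
            .environment(model)
    }
    .windowStyle(.automatic)
}

// Elsewhere, with @Environment(\.openWindow) private var openWindow:
// openWindow(id: model.isShow ? "plain" : "automatic")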
Posted by xuhengfei. Last updated.
Post not yet marked as solved
0 Replies
70 Views
In the past, Apple recommended restricting USDZ models to a maximum of 100,000 triangles and a texture size of 2048x2048 for Apple Quick Look (and, I think, for RealityKit on iOS in general). Does Apple have any recommended maximum polygon counts for visionOS? Are they the same for models running in a volumetric window in the Shared Space and in an ImmersiveSpace? What is the recommended texture size for visionOS? (I seem to recall 8192x8192, but I can't find it now.)
Posted by Todd2. Last updated.
Post not yet marked as solved
2 Replies
113 Views
I have the Vision Pro developer strap. Do I need to do anything to make Instruments transfer the data over it rather than Wi-Fi, or will it do that automatically? It seems incredibly slow for transferring and then analyzing data. I can see the Vision Pro recognized in Configurator, so I assume it's working. Otherwise: any tips for speeding up Instruments? Capturing 5 minutes of gameplay (high-frequency) takes 30-40+ minutes to appear in Instruments on an M2 Max with 32 GB. Thanks!
Posted by yezz. Last updated.
Post not yet marked as solved
0 Replies
82 Views
I want to transfer this video stream to another device and then view it on the other device. But I did not see any development information related to the camera when checking the visionOS documentation, so I would like to ask whether anyone knows how to do this. Thank you.
Posted by iOS-LI. Last updated.
Post not yet marked as solved
0 Replies
94 Views
I'm creating a fully immersive app with a large 3D environment, in which I need to be able to move the player with different options like hand gestures, a game controller, and teleporting. I have worked with Unreal Engine, in which moving the player is easy and well documented, but I have not been able to find any information on how I could achieve this in visionOS. Has anyone done something similar who could give me some advice or sample code? Any help appreciated. Guillermo
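visionOS has no built-in player controller for immersive spaces, so a common workaround (a sketch under that assumption; the function and entity names are made up) is to keep all world content under one root entity and move that root opposite to the desired player motion, here driven by a game controller thumbstick:

import RealityKit
import GameController

// Call once per frame (e.g. from a RealityView update loop or a timer).
func applyLocomotion(worldRoot: Entity, deltaTime: Float, speed: Float = 1.5) {
    guard let pad = GCController.controllers().first?.extendedGamepad else { return }
    let x = pad.leftThumbstick.xAxis.value
    let y = pad.leftThumbstick.yAxis.value
    // Moving the world backwards makes the player appear to move forwards.
    let motion = SIMD3<Float>(x, 0, -y) * speed * deltaTime
    worldRoot.position -= motion
}

Teleporting then becomes a single assignment to worldRoot.position, and a hand-gesture scheme could feed the same function with a direction derived from a DragGesture instead of the thumbstick.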
Posted by gl5. Last updated.
Post not yet marked as solved
5 Replies
515 Views
Hello everyone! For applications with multiple windows, I've noticed that if I close all open windows, visionOS reopens only the last-closed window when launching the app again from the menu. Is there a way I can set a main window that will always be open when the application is launched?
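If you can target a newer SDK, SwiftUI added a scene modifier for exactly this (an assumption about your deployment target — defaultLaunchBehavior arrived well after this question was posted, so verify availability in the current docs; the scene ids and views are hypothetical):

WindowGroup(id: "main") {
    MainView()
}
.defaultLaunchBehavior(.presented)   // ask the system to present this scene at launch

WindowGroup(id: "secondary") {
    DetailView()
}
.defaultLaunchBehavior(.suppressed)  // never restore this scene at launch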
Posted by kentvchr. Last updated.
Post not yet marked as solved
0 Replies
101 Views
We want to overlay a SwiftUI attachment on a RealityView, as is done in the Diorama sample. By default, attachments seem to be placed centered at their position. However, for our use case we need to set a different anchor point, so the attachment is always aligned to one of the corners of the attachment view; e.g. the lower left should be aligned with the attachment's position. Is this possible?
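I'm not aware of a built-in anchor-point API for attachments, but one sketch of a workaround is to offset the attachment entity by half its visual bounds once it's loaded, so a chosen corner lands on the target position (names are hypothetical):

import RealityKit

// Align the lower-left corner of an attachment entity with `target`,
// a point in the parent's coordinate space.
func alignLowerLeft(of attachment: Entity, to target: SIMD3<Float>) {
    let extents = attachment.visualBounds(relativeTo: attachment.parent).extents
    // Shift right and up by half the width/height so the corner,
    // rather than the center, sits at the target.
    attachment.position = target + SIMD3<Float>(extents.x / 2, extents.y / 2, 0)
}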
Posted by waldgeist. Last updated.
Post marked as solved
4 Replies
121 Views
New to Apple development. Vision Pro is the reason I got a developer license and am learning Xcode, SwiftUI, and so on. The Vision Pro tutorials seem to use Wi-Fi or the developer strap to connect the development environment to the Vision Pro. I have the developer strap, but can't use it on my company computer. I have been learning using the developer tools, but I can't test the apps on my personal Vision Pro. Is there a way to generate an app file on the MacBook that I can download to the Vision Pro? This would be a file that I could transfer to cloud storage and download using Safari on the Vision Pro. I will eventually get a Vision Pro at work, but until then I want to start developing.
Posted. Last updated.
Post not yet marked as solved
1 Reply
109 Views
As the title says: while I can find the video captures on the desktop, I cannot find where it is storing the screenshots, even when it says the screenshot succeeded. I am referencing this: https://developer.apple.com/documentation/visionos/capturing-screenshots-and-video-from-your-apple-vision-pro-for-2d-viewing
Posted. Last updated.
Post not yet marked as solved
1 Reply
121 Views
I have some USDZ files saved and I would like to make thumbnails for them, in 2D of course. I was checking "Creating Quick Look Thumbnails to Preview Files in Your App", but it says: "Augmented reality objects using the USDZ file format (iOS and iPadOS only)". I would like to have the same functionality in my visionOS app. How can I do that? I thought about using some API to convert the 3D asset into a 2D asset, but it would be better if I could do that inside the Swift environment. Basically, I want to do Image(uiImage: "my_usdz_file").
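The QuickLookThumbnailing generator API is worth trying first (a minimal sketch — note the documentation you quote flags USDZ support as iOS/iPadOS only, so this may simply fail on visionOS; treat it as an experiment):

import QuickLookThumbnailing
import UIKit

// Ask Quick Look for the best available thumbnail of a file on disk.
func thumbnail(for fileURL: URL, side: CGFloat = 256) async throws -> UIImage {
    let request = QLThumbnailGenerator.Request(
        fileAt: fileURL,
        size: CGSize(width: side, height: side),
        scale: 2.0,
        representationTypes: .thumbnail)
    let representation = try await QLThumbnailGenerator.shared.generateBestRepresentation(for: request)
    return representation.uiImage
}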
Posted by Hygison. Last updated.
Post not yet marked as solved
1 Reply
82 Views
I have a Unity scene which I created for Vision Pro, and I have also created a biometric authentication application for visionOS using Xcode and Swift. What I want to do is call the Unity scene from the Xcode side after the authentication has taken place. I have seen a Medium post, but it only shows how to do that for iOS apps; I have not been able to do it for Vision Pro. I have followed this post: https://medium.com/mop-developers/launch-a-unity-game-from-a-swiftui-ios-app-11a5652ce476 All this I am doing because, as far as I know, Apple Vision Pro does not currently support Optic ID authentication with Unity's PolySpatial plugin. Any help on this will be appreciated. Thank you in advance.
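For the Optic ID half on its own, the standard LocalAuthentication flow is the usual route (a minimal sketch; the reason string is a placeholder, and handing control to the Unity scene afterwards is the part the Medium post would need adapting for):

import LocalAuthentication

// Returns true if the user passes biometric authentication.
// On Vision Pro this prompts Optic ID; elsewhere, Face ID / Touch ID.
func authenticate() async throws -> Bool {
    let context = LAContext()
    var error: NSError?
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        throw error ?? LAError(.biometryNotAvailable)
    }
    return try await context.evaluatePolicy(
        .deviceOwnerAuthenticationWithBiometrics,
        localizedReason: "Authenticate to open the Unity scene")
}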
Posted by snusharma. Last updated.
Post not yet marked as solved
1 Reply
146 Views
Hi there. I've been trying to take a snapshot programmatically on Apple Vision Pro but haven't succeeded. This is the code I am using so far:

func takeSnapshot<Content: View>(of view: Content) -> UIImage? {
    var image: UIImage?
    uiQueue.sync {
        let controller = UIHostingController(rootView: view)
        controller.view.bounds = UIScreen.main.bounds
        let renderer = UIGraphicsImageRenderer(size: controller.view.bounds.size)
        image = renderer.image { context in
            controller.view.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
    return image
}

However, UIScreen is unavailable on visionOS. Any idea how I can achieve this? Thanks, Oscar
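A replacement sketch that avoids UIScreen entirely, using SwiftUI's ImageRenderer (available on visionOS; the fixed size is a placeholder you'd pick for your content, and this renders only your own view hierarchy, not the wearer's surroundings):

import SwiftUI

@MainActor
func takeSnapshot<Content: View>(of view: Content, size: CGSize) -> UIImage? {
    // ImageRenderer rasterizes the SwiftUI view directly; no UIScreen
    // or UIHostingController required.
    let renderer = ImageRenderer(content: view.frame(width: size.width, height: size.height))
    renderer.scale = 2.0  // render at 2x for sharper output
    return renderer.uiImage
}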
Posted by ojrlopez. Last updated.
Post marked as solved
1 Reply
101 Views
I built two parts of my app a bit disjointed:
- my physics component, which controls all SceneReconstruction, HandTracking, and WorldTracking;
- my spatial GroupActivities component, which allows you to see the personas of those who join the activity.
My problem: when trying to use any DataProvider in a spatial experience, I get the ARKit session event dataProviderStateChanged, which disables all of my providers. My question: has anyone successfully found a workaround for this? I think it would be amazing to have one user be the "host" for the activity while the scene reconstruction provider continues to run for them.
Posted. Last updated.
Post not yet marked as solved
1 Reply
140 Views
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they are great if you're standing still and want to move the window an exaggerated amount around the environment, but if you then start walking while dragging, the amplified gesture sends the entity flying off into the distance. It seems they quickly transition from one coordinate system to another depending on whether the user is physically moving. If you drag a window and start walking, the movement suddenly matches your speed; when you stop moving, you can push and pull the windows around again like a superhero. Am I missing something obvious in how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off. Are they tracking the head movement, and if it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this, before I try to hack my own solution?
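Nothing out of the box that I know of, but here is a sketch of the head-tracking hypothesis above, using ARKit's WorldTrackingProvider (the class name and the simple 1:1 blend are assumptions for illustration, not Apple's documented technique):

import ARKit
import RealityKit

// While a drag is active, add the head's displacement since the last frame
// to the entity, so walking carries the window with you while the amplified
// gesture still applies when you are standing still.
final class DragCompensator {
    let worldTracking = WorldTrackingProvider()  // run this in an ARKitSession first
    private var lastDevicePosition: SIMD3<Float>?

    func update(entity: Entity, timestamp: TimeInterval) {
        guard let anchor = worldTracking.queryDeviceAnchor(atTimestamp: timestamp) else { return }
        let column = anchor.originFromAnchorTransform.columns.3
        let device = SIMD3<Float>(column.x, column.y, column.z)
        if let last = lastDevicePosition {
            entity.position += device - last  // follow the walking user
        }
        lastDevicePosition = device
    }
}

You would call update from the drag gesture's onChanged (after applying the gesture's own translation) and reset lastDevicePosition to nil when the drag ends.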
Posted. Last updated.