Post not yet marked as solved
I need to obtain data through an MQTT subscription. Are there any ideas or frameworks for this?
Thank you
Environment
Apple Silicon M1 Pro
macOS 14.4
Xcode 15.3 (15E204a)
visionOS simulator 1.1
Steps
Create a new visionOS app project and compile it through xcodebuild:
xcodebuild -destination "generic/platform=visionOS"
It fails at RealityAssetsCompile with the log:
error: Failed to find newest available Simulator runtime
But if I open the Xcode IDE and start building, it works fine. This error only occurs in xcodebuild.
More
I noticed that in xcrun simctl list the vision pro simulator is in unavailable state:
-- visionOS 1.1 --
Apple Vision Pro (6FB1310A-393E-4E82-9F7E-7F6D0548D136) (Booted) (unavailable, device type profile not found)
And I can't find the Vision Pro device type in xcrun simctl list devicetypes. Does that matter? I have tried completely reinstalling Xcode and the simulator runtime, but I still get the same error.
Today I tried to add a second archive action for visionOS. I added a visionOS destination to my app target a while back, and I can build and archive my app for visionOS locally in Xcode 15.3, and also run it on the device.
Xcode Cloud is giving me the following errors in the Archive - visionOS action (Archive - iOS works):
Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle MyApp.app is invalid.
Invalid sdk value. The value provided for the sdk portion of LC_BUILD_VERSION in MyApp.app/MyApp is 17.4 which is greater than the maximum allowed value of 1.2.
This bundle is invalid. The value provided for the key MinimumOSVersion '17.0' is not acceptable.
Type Mismatch. The value for the Info.plist key CFBundleIcons.CFBundlePrimaryIcon is not of the required type for that key. See the Information Property List Key Reference at https://developer.apple.com/library/ios/documentation/general/Reference/InfoPlistKeyReference/Introduction/Introduction.html#//apple_ref/doc/uid/TP40009248-SW1
All 4 errors are annotated with "Prepare Build for App Store Connect" and I get them for both "TestFlight (Internal Testing Only)" and "TestFlight and App Store" deployment preparation options.
I have tried removing the visionOS destination and adding it back, but this does not change the project at all.
Any ideas what I am missing?
Hi guys, I would like to ask if anyone knows the FPS of screen recording and AirPlay on Vision Pro. AirPlay here refers to mirroring the Vision Pro view to a MacBook/iPhone/iPad. Also, is there any way to record the screen at the raw FPS of the Vision Pro (i.e., 90)?
Hi there. I've been trying to take a snapshot programmatically on Apple Vision Pro but haven't succeeded.
This is the code I am using so far:
func takeSnapshot<Content: View>(of view: Content) -> UIImage? {
    var image: UIImage?
    uiQueue.sync { // uiQueue: a serial queue used here to hop to the UI thread
        let controller = UIHostingController(rootView: view)
        controller.view.bounds = UIScreen.main.bounds // <- UIScreen is unavailable on visionOS
        let renderer = UIGraphicsImageRenderer(size: controller.view.bounds.size)
        image = renderer.image { context in
            controller.view.drawHierarchy(in: controller.view.bounds, afterScreenUpdates: true)
        }
    }
    return image
}
However, UIScreen is unavailable on visionOS.
Any idea of how I can achieve this?
Thanks
Oscar
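Since the only blocker above is the UIScreen dependency, one possible workaround is SwiftUI's ImageRenderer, which renders a view to a UIImage without consulting any screen; you supply an explicit size instead. This is a hedged sketch (the size parameter and the scale value are assumptions, and I haven't verified it on visionOS hardware):

```swift
import SwiftUI

@MainActor
func takeSnapshot<Content: View>(of view: Content, size: CGSize) -> UIImage? {
    // Constrain the view to an explicit size, since there is no UIScreen
    // on visionOS to derive bounds from.
    let renderer = ImageRenderer(
        content: view.frame(width: size.width, height: size.height)
    )
    renderer.scale = 2.0 // assumed rendering scale; tune for your use case
    return renderer.uiImage
}
```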
Hello. I have a model of a CD record and box, and I would like to change the artwork of it via an external image URL. My 3D knowledge is limited, but what I can say is that the RealityView contains the USDZ of the record, which in turn contains multiple materials: ArtBack, ArtFront, PlasticBox, CD.
How do I target an artwork material and change it to another image? Here is the code so far.
RealityView { content in
    do {
        let entity = try await Entity(named: "VinylScene", in: realityKitContentBundle)
        entity.scale = SIMD3<Float>(repeating: 0.6)
        content.add(entity)
    } catch {
        ProgressView() // note: this has no effect here; RealityView's make closure discards the value
    }
}
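A hedged sketch of one way to do this: load the image into a TextureResource and swap it into the material's base color. The entity name "ArtFront" and the material index are assumptions about how the USDZ is structured; inspect the hierarchy (e.g. print(entity)) to find the real entity and material names:

```swift
import Foundation
import RealityKit

// Hypothetical helper: replaces the base-color texture of a model's material
// with an image downloaded from a URL.
func setArtwork(on root: Entity, from imageURL: URL) async throws {
    // TextureResource loads from a file URL, so download to a temp file first.
    let (data, _) = try await URLSession.shared.data(from: imageURL)
    let fileURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("artwork.png")
    try data.write(to: fileURL)
    let texture = try TextureResource.load(contentsOf: fileURL)

    // Assumes the USDZ exposes a ModelEntity named "ArtFront" whose first
    // material is the artwork material.
    guard let model = root.findEntity(named: "ArtFront") as? ModelEntity,
          var material = model.model?.materials.first as? PhysicallyBasedMaterial
    else { return }

    material.baseColor = .init(texture: .init(texture))
    model.model?.materials[0] = material
}
```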
Hello everyone!
For applications with multiple windows, I've noticed that if I close all open windows, visionOS reopens only the last-closed window when the app is launched again from the menu.
Is there a way to set a main window that is always opened when the application is launched?
Since camera access is not allowed right now, does Apple place the same restriction on screenshots?
What I am trying to do: I would like my user to take a screenshot, and then have my app automatically detect and read that screenshot to process its information (without requiring the user to select and upload it manually).
But I did not find any Vision Pro documentation about this. Should I check the SwiftUI documentation or other developer documentation?
I am working on a team developing solutions for HMDs (Meta and others). We are exploring the feasibility of developing solutions for Apple Vision Pro from India. Could you suggest the prerequisites to begin development? Also, please confirm whether there are any regional constraints on visionOS development.
It is well known that visionOS does not provide this feature due to privacy concerns. I'm wondering whether developers could gain this ability, provided the user consents, in order to build better MR or AR apps.
Our app needs to scan QR codes (or a similar mechanism) to populate it with content the user wants to see.
Is there any update on QR code scanning availability on this platform? I asked this before, but never got any feedback.
I know that there is no way to access the camera (which is an issue in itself), but at least the system could provide an API to scan codes.
(It would also be cool if we were able to use the same codes Vision Pro uses for detecting the Zeiss glasses, as long as we could create these via server-side JavaScript code.)
I'm on the visionOS 1.2 beta, and Instruments will capture everything except RealityKit information.
The RealityKit Frames and RealityKit Metrics instruments capture no data. This used to work, though I'm not sure in which version. Unbelievably frustrating.
Hi team,
I'm running into the following issue, for which I don't seem to find a good solution.
I would like to be able to drag and drop items from a view into empty space to open a new window that displays detailed information about this item.
Now, I know something similar has been flagged already in this post (FB13545880: Support drag and drop to create a new window on visionOS)
HOWEVER, all this does is launch the app again with the SAME WindowGroup and display ContentView in a different state (showing a selected product, for example).
What I would like to do, is instead launch ONLY the new WindowGroup, without a new instance of ContentView.
This is the closest I got so far. It opens the desired window, but in addition it also displays the ContentView WindowGroup
WindowGroup {
    ContentView()
        .onContinueUserActivity(Activity.openWindow, perform: handleOpenDetail)
}

WindowGroup(id: "Detail View", for: Reminder.ID.self) { $reminderId in
    ReminderDetailView(reminderId: reminderId!)
}

.onDrag({
    let userActivity = NSUserActivity(activityType: Activity.openWindow)
    let localizedString = NSLocalizedString("DroppedReminterTitle", comment: "Activity title with reminder name")
    userActivity.title = String(format: localizedString, reminder.title)
    userActivity.targetContentIdentifier = "\(reminder.id)"
    try? userActivity.setTypedPayload(reminder.id)

    // When setting the identifier
    let encoder = JSONEncoder()
    if let jsonData = try? encoder.encode(reminder.persistentModelID),
       let jsonString = String(data: jsonData, encoding: .utf8) {
        userActivity.userInfo = ["id": jsonString]
    }
    return NSItemProvider(object: userActivity)
})
func handleOpenDetail(_ userActivity: NSUserActivity) {
    guard let idString = userActivity.userInfo?["id"] as? String else {
        print("Invalid or missing identifier in user activity")
        return
    }
    if let jsonData = idString.data(using: .utf8) {
        do {
            let decoder = JSONDecoder()
            let persistentID = try decoder.decode(PersistentIdentifier.self, from: jsonData)
            openWindow(id: "Detail View", value: persistentID)
        } catch {
            print("Failed to decode PersistentIdentifier: \(error)")
        }
    } else {
        print("Failed to convert string to data")
    }
}
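One possible direction (a hedged sketch, not verified on visionOS): the handlesExternalEvents(matching:) scene modifier controls which scene claims an incoming NSUserActivity. Giving the main WindowGroup an empty matching set and pointing the activity type at the detail group may route the drop to the detail window only. The set contents below are assumptions:

```swift
WindowGroup {
    ContentView()
        .onContinueUserActivity(Activity.openWindow, perform: handleOpenDetail)
}
// Keep the main window from claiming the drag's user activity.
.handlesExternalEvents(matching: [])

WindowGroup(id: "Detail View", for: Reminder.ID.self) { $reminderId in
    ReminderDetailView(reminderId: reminderId!)
}
// Let only the detail group respond to the activity type used in onDrag.
.handlesExternalEvents(matching: [Activity.openWindow])
```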
Is there a maximum distance at which an entity will register a TapGesture()? I'm unable to interact with entities farther than 8 or 9 meters away. The code below generates a series of entities progressively farther away. After about 8 meters, the entities no longer respond to tap gestures.
var body: some View {
    RealityView { content in
        for i in 0..<10 {
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
                immersiveContentEntity.position = SIMD3<Float>(x: Float(-i*i), y: 0.75, z: Float(-1*i) - 3)
            }
        }
    }
    .gesture(tap)
}

var tap: some Gesture {
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            AudioServicesPlaySystemSound(1057)
            print(value.entity.name)
        }
}
Hi!
I was trying to port our SDK to visionOS.
I was going through the documentation and saw this video: https://developer.apple.com/videos/play/wwdc2023/10089/
Is there any working code sample for it? The same goes for the ARKit C API.
I couldn't find any links. Thanks in advance.
Sahil
I am developing an immersive application featuring hands interacting with my virtual objects. When my hand passes through an object, the rendered hand color is blended with the object's color, and both appear semi-transparent. I wonder if it is possible to make my hand always render as opaque, that is, keep the alpha of the rendered hand (since it's video see-through) at 1, while the object's alpha could still vary depending on whether it is interacting with the hand.
(I was thinking this kind of feature might be supported by a specific component, just like HoverEffectComponent, but I didn't find one.)
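One API that may be relevant (a hedged pointer, not a confirmed solution for per-object alpha control): the upperLimbVisibility(_:) scene modifier asks the system to keep the user's passthrough hands and arms visible in front of virtual content instead of the default blending behavior. The app and view names below are hypothetical:

```swift
import SwiftUI

@main
struct MyImmersiveApp: App { // hypothetical app name
    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            ImmersiveView() // hypothetical RealityView-hosting view
        }
        // Ask the system to render the user's hands/arms on top of
        // virtual content rather than fading them behind it.
        .upperLimbVisibility(.visible)
    }
}
```

Whether this yields fully opaque hands in every overlap case is something to verify on device.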
Good day. I'm inquiring if there is a way to test functionality between Apple Pencil Pro and Apple Vision Pro? I'm trying to work on an idea that would require a tool like the Pencil as an input device. Will there be an SDK for this kind of connectivity?
How do I get a clear background with NavigationStack in a visionOS app?
I have been trying to replicate the entity transform functionality present in the magnificent app Museum That Never Was (https://apps.apple.com/us/app/the-museum-that-never-was/id6477230794) -- it allows you to simultaneously rotate, magnify and translate the entity, using gestures with both hands (as opposed to normal DragGesture() which is a one-handed gesture). I am able to rotate & magnify simultaneously but translating via drag does not activate while doing two-handed gestures. Any ideas? My setup is something like so:
Gestures:
var drag: some Gesture {
    DragGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureTranslation = value.convert(value.translation3D, from: .local, to: .scene)
        }
        .onEnded { value in
            itemTranslation += gestureTranslation
            gestureTranslation = .init()
        }
}

var rotate: some Gesture {
    RotateGesture3D()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureRotation = simd_quatf(value.rotation.quaternion).inverse
        }
        .onEnded { value in
            itemRotation = gestureRotation * itemRotation
            gestureRotation = .identity
        }
}

var magnify: some Gesture {
    MagnifyGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureScale = Float(value.magnification)
        }
        .onEnded { value in
            itemScale *= gestureScale
            gestureScale = 1.0
        }
}
RealityView modifiers:
.simultaneousGesture(drag)
.simultaneousGesture(rotate)
.simultaneousGesture(magnify)
RealityView update block:
entity.position = itemTranslation + gestureTranslation + exhibitDefaultPosition
entity.orientation = gestureRotation * itemRotation
entity.scaleAll(itemScale * gestureScale)
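A hedged idea for the two-handed translation issue: instead of attaching three separate gestures via .simultaneousGesture, the gestures can be composed into a single gesture with simultaneously(with:), which the system sometimes resolves differently. This is an untested sketch; the state variables and MyComponent follow the snippets above, and each sub-gesture's value is optional in the composed value:

```swift
var combined: some Gesture {
    DragGesture()
        .simultaneously(with: MagnifyGesture())
        .simultaneously(with: RotateGesture3D())
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            // Only the sub-gestures currently active are non-nil.
            if let translation3D = value.first?.first?.translation3D {
                gestureTranslation = value.convert(translation3D, from: .local, to: .scene)
            }
            if let magnification = value.first?.second?.magnification {
                gestureScale = Float(magnification)
            }
            if let rotation = value.second?.rotation {
                gestureRotation = simd_quatf(rotation.quaternion).inverse
            }
        }
}
```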
I've been getting random crashes in an immersive RealityView, using model entities with physics. While the crashes do randomly happen with fewer objects, they can be reproduced consistently by placing 100 objects on top of each other.
The project I used as a starter is here: https://developer.apple.com/documentation/visionos/incorporating-real-world-surroundings-in-an-immersive-experience
To reproduce the error, create 100 cubes in a loop on top of each other in addCube, instead of creating just one. This gives EXC_BAD_ACCESS as soon as the cubes are created. Tested on a Vision Pro, not the simulator.
Any advice on how to resolve this? I'm trying to have around 100 objects moving around the environment, but it eventually gives EXC_BAD_ACCESS.
I'm trying to attach the crash log, but I keep getting the error about sensitive materials. Testing to see if I can post without it.