#HoloToolkit
elbrunoc · 8 years ago
Text
#Hololens – What to do when your app is not a 3D App!
Hi!
Today is a quick post on Unity3D and HoloLens. I've been dealing with some nice and heavy C# bot code during the past days, so it's time to go back to HoloLens. A couple of days ago I found that my apps were not being deployed as Unity VR apps to the HoloLens; instead they were 2D UWP apps. I usually leave all the configuration of the project to HoloToolkit.
I've already written about this…
View On WordPress
1 note · View note
hollydownerdesign · 6 years ago
Text
06/03: Notes
Reading: Beginning Windows Mixed Reality Programming: For HoloLens and Mixed Reality Headsets (blending 3D visualisations with your physical environment). Sean Ong, 2017.
Comparing the future of mixed reality devices with smartphones and computers: "if you were told 30-40 years ago that they would not be able to fully participate in future society without owning or knowing how to use a computer, they would be hesitant about such a future." Most jobs nowadays use computers in some form, e.g. checking emails. We also now rely heavily on smartphones. What if, in the future, everyone uses some form of mixed reality headset?
This book is a useful tool for learning Unity and setting up for the Hololens. 
Tracking and spatial mapping: five cameras on the front of the HoloLens for spatial mapping, one for video and image capture. "The HoloLens is constantly tracking its environment and building a 3D model of the area it's in." This is used for: letting holograms know when "to hide from view," allowing you to pin items to your wall, and allowing characters to interact with your environment, e.g. jump over objects, sit on things, or hide. Objects can stay in place in your environment even if you move around. Spatial sound: "we rely heavily on our ears to precisely locate objects around us." "This increases the feeling of immersion." "This increases the user's perception of these objects and makes the holograms feel like they are actually in the user's area."
Tutorials:
- HoloLens set-up (pp. 33-48)
- Unity tutorial (pp. 49-72)
- Hologram set-up, making your first hologram (pp. 73-97)
- HoloToolkit: input (gestures, clickers, gaze, and voice commands), sharing, spatial mapping (allows digital objects to interact), spatial sound, utilities and build (pp. 98-110)
- Interacting with holograms: gaze works as a mouse, controlled by the movement of your head and the physical gaze of your eyes; gestures; voice; motion controllers; other hardware (Bluetooth mouse and keypad) (p. 111)
- Gaze tutorial/set-up (pp. 112-114)
- Gestures tutorial/set-up (pp. 117-123)
- Voice command tutorial/set-up (pp. 123-129)
- Other hardware input tutorial/set-up (pp. 129-130)
- Spatial mapping (pp. 131-155): spatial plane finding (p. 137), occlusion tutorial (p. 143), spatial understanding (p. 147), spatial anchors and persistence (p. 152)
- Spatial sound (pp. 156-167)
- Growing as a holographic developer (pp. 168-208)
- Turning holograms into money (pp. 209-214)
- Community resources (pp. 215-230)
Medium Resources:
- https://uxplanet.org/ux-101-for-virtual-and-mixed-reality-part-2-working-with-the-senses-c39fbd502494
(CV) computer vision. VR haptics: "when we reach out to touch a virtual object and feel nothing in our hands, the virtual remains just that, virtual".
0 notes
automaticvr · 6 years ago
Video
vimeo
HoloLens AR exhibition with tattoo art by Oskar Stjärne aka. @startwithapen (prototype). Made with Unity, C#, HoloToolKit (aka. Mixed Reality ToolKit), Microsoft Academy assets, and shaders by Roland Smeenk. Audio made with Analog Four and Ableton Live. #hologram #holographic #readyplayerone #hmi #tattoo #inked #metavision #programmer #joeldittrich #minorityreport #virtual #augmented #magicleap #unity #hololens #matrix #futurism #cyberpunk #oskarstjarne #kinect #leapmotion #startwithapen #mixedreality #spatialcomputing #mdc #vr #xr #ar #mr #xuxoe
0 notes
homedevises · 6 years ago
Text
A New Design Paradigm in Mixed Reality — Using HoloLens for Architectural Design
By Michael Shyu, Iffat Mai, and Fei Xie for Autodesk University
Mixed reality (MR) combines the virtual and the physical realities into one space and offers an exciting new design paradigm for architects. By projecting a BIM model directly over a physical site in mixed reality, architects can communicate design ideas to the team and clients in an immersive and interactive way.
This article will demonstrate case studies of mixed reality, using Microsoft HoloLens, applied to different phases of architectural projects. We will share our review of the process of aligning a BIM model with the physical project site, followed by using mixed reality for virtual sketching and review of design options. Finally, we will evaluate the potential of mixed reality tools for construction administration and preview on-site mixed reality clash detection using Microsoft HoloLens.
Design communication is advancing rapidly. The use of tools such as virtual reality (VR) has become a commonplace practice for design teams. Mixed reality offers added benefits over virtual reality by overlaying virtual objects onto a real physical environment. This could be a game-changing tool for all phases of design and communication. By integrating the Microsoft HoloLens and BIM modeling with real-world sites, mixed reality offers new possibilities for architects to communicate design ideas to the team and clients.
Our project began as an incubator proposal authored by Michael Shyu and Fei Xie. Once the proposal won as an incubator project, we formed an ad hoc team to work on the research, design, and construction of a mixed reality tool. The objective was to explore new opportunities that mixed reality could provide for the architecture and construction industry. We hope to use the mixed reality tool in all phases of design, from initial concept and site analysis to construction administration. It will provide design model options, interior designs applied to physical sites, and clash detection for construction administration.
Explore design communication using mixed reality for all phases of architectural design.
Virtual reality uses a computer to create a simulated environment that is completely isolated from the real physical environment around you. It offers you an immersive experience, but it also blocks out any connection between the virtual and physical worlds. On the other hand, augmented reality (AR), which is a technology that has been around for decades, presents virtual information on top of a real physical environment. AR gained hype with the popular Pokémon GO app, where users can see virtual Pokémon as they walk around town using their mobile phone. MR using the HoloLens falls between AR and VR, where one can experience virtual objects blended with physical objects, not by looking at a mobile phone or tablet, but through the transparent lenses of a HoloLens headset, interacting with the virtual objects using a natural interface.
What separates our concept from other MR tools is that our goal is not simply to represent the BIM model in real time, but to have machine learning algorithms align the model to the site and determine what in the project would actually call for the attention of the architect and designer via the HoloLens. Our methodology starts with alignment. Utilizing the SLAM (Simultaneous Localization and Mapping) technology embedded in the HoloLens, it is possible to recognize surfaces, and thus we are able to align reference points from the BIM model to the real space.
We created a simple user interface where a user with no prior experience with the HoloLens will be able to experience their design model both as a site model and as a full-scale interior space. The user interface of our MR tool uses the designer's gaze for cursor control, a voice command system for designers to interact with the model in a hands-free fashion, and a function control series of commands using simple air-tapping gestures.
Before starting the project, it is important to secure all the necessary hardware and software tools needed for the project. The HoloLens headset is the primary device that will be used for the project. We also needed a laptop that meets the minimum system requirements for MR development:
Note: As of October 17, 2017, Microsoft has released the Windows 10 Fall Creators Update, which works with the newer versions of Visual Studio 2017 and Unity 3D 2017. If you choose to use the newer version of the software, be sure to use all the newer version tools to maintain the compatibility of software and drivers.
Here are the workflow steps for importing 3D models from Revit to HoloLens:
Our initial proposal called for using an active project that included an interesting renovation and addition as our on-site test case. However, after gaining a better understanding of SLAM and the construction schedule of the project, we came to realize it was better for us to test out the MR tools in a more controlled environment before going to an active project site. The tools that needed to be specifically developed and programmed were alignment, layering, and annotation. We anticipate that after the steel frame is erected, the HoloLens will be a much more capable tool in that it can effectively tie itself to the construction and begin to overlay augmented virtual information on the site.
If we were to use the SLAM technology to align to an exterior foundation footing, the sensors might have a hard time locking the edges in place, and the model would experience severe drift. In addition, the visual limitation of the screen lends itself to a more interior-oriented reality experience. We decided the best testing ground for our experiments was the 12th floor of 225 Franklin Street, which happens to be the floor directly above the Perkins and Will Boston office. It also happens to be an empty office space that is easily accessible to our team.
Windows Developer Center provides many helpful tutorials and resources to guide newbie developers with sample code and best practices. Another useful tool is the HoloToolkit, which is a free download.
The model alignment between the virtual and the physical worlds is the first step needed to anchor the model onto the exact location in the real world. In order to align the two spaces, it is necessary to lock three virtual points to three physical points in order to lock the X, Y, and Z axes.
One critical point to understand about the HoloLens is the way it creates virtual holograms through an additive process, utilizing light to create the holographic projections. It essentially cannot subtract information, and the color pure black would read as transparent. Shadows can be achieved through greys and dark blues, but live shadowing is very computationally intensive for the HoloLens at this time.
Alignment is the critical first step in being able to project a believable hologram. In order to align a virtual object, the HoloLens essentially utilizes SLAM to recognize edges, which subsequently allows it to anchor the virtual object into place. When edges are not present, as in an exterior space, or there are too many shadows, the sensors cannot read the real edge for proper alignment and essentially cannot anchor the model down. This technical limitation constrained our ability to pursue the original scope of the incubator. In the future, we imagine it would be possible to tie exterior models into place with GPS in conjunction with SLAM; however, it would require hardware development and external sensor tools that communicate with the HoloLens to accomplish this result.
Through hands-on testing, we determined the SLAM boundary for the HoloLens to be an area of about 20 feet by 10 feet. This is critical to understand, because it places a limit on how far the HoloLens can keep its alignment.
With help from our team member Ryan Zhang, a researcher from the MIT Media Lab and GSI, we developed a two-point alignment system that enables the user to easily place a virtual model and align it to the physical world. The user identifies two coordination points in the real world and the virtual world. To align the spaces, one places the first anchor point (represented by a white ball) at the first coordination point, using the SLAM to snap to the exact point, then drags the second point (represented by a red ball) to the second matching point, which then defines the scale and orientation of the virtual model. Our UI designer, Chance Heath, developed a nice series of user interface menus to guide the user on how to use air-tap gestures to select and place the coordination points.
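As a rough illustration of what such a two-point alignment could look like in Unity, here is a minimal sketch. This is our reconstruction under stated assumptions, not the team's actual code: it assumes the model starts unrotated with y as the up axis, and all names are invented.

using UnityEngine;

// Hypothetical sketch: align a model so that two of its reference points
// land on two user-placed anchors (the white ball = A, the red ball = B).
public static class TwoPointAlignment
{
    // modelRefA/B are reference points in the model's local space;
    // worldA/B are the anchor positions the user placed via SLAM snapping.
    public static void Align(Transform model,
                             Vector3 modelRefA, Vector3 modelRefB,
                             Vector3 worldA, Vector3 worldB)
    {
        // Work in the horizontal plane, since the HoloLens keeps y pointing up.
        Vector3 mDir = Vector3.ProjectOnPlane(modelRefB - modelRefA, Vector3.up);
        Vector3 wDir = Vector3.ProjectOnPlane(worldB - worldA, Vector3.up);

        // Uniform scale from the ratio of the two segment lengths.
        model.localScale = Vector3.one * (wDir.magnitude / mDir.magnitude);

        // Yaw rotation that turns the model segment onto the world segment
        // (Vector3.SignedAngle requires Unity 2017.1 or newer).
        float yaw = Vector3.SignedAngle(mDir, wDir, Vector3.up);
        model.rotation = Quaternion.AngleAxis(yaw, Vector3.up);

        // Finally, translate so reference point A lands exactly on anchor A.
        model.position += worldA - model.TransformPoint(modelRefA);
    }
}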
Our final HoloLens architecture app starts with a simple visual instruction on how to use air tap to select the functions. The menu shows three distinct modes: Model Observatory for concept design, Design Options for schematic and design development, and Construction Assistant for the CA phase.
The first mode is the Model Observatory, where the designer can view a preloaded model appearing as a scaled-down desktop model. The designer can then place the model onto a surface using the HoloLens's SLAM capability; the model will snap to the selected surface. The designer can then interact with the model and click on the buttons to review different options. This mode is very useful for designers to view the entire model at a smaller scale.
The second mode is Design Options, where designers place the virtual model using the two-point alignment method so the model is at full scale, anchored accurately onto the real world. Then, using the design option buttons, the designer can review the various design models at full scale. Being at full scale, the designers can walk around and experience the space in an immersive fashion. The design options include materials such as marble, wood, and concrete.
The final mode is the Construction Assistant mode. Currently, this area is in the design phase, where we are proposing potential functions that our construction administration managers could use during that phase of the project. We interviewed our construction manager, Heather Miller, who had completed substantial construction administration work on projects. Here is some input and feedback from her:
2. Voice command and notations
3. Designer liability
4. Precision
The technology of the HoloLens is truly amazing, and there is no other MR product on the market that combines its rapid SLAM recognition, fast tracking, untethered portability, and ability to upload high-fidelity models and programs. If we combine the HoloLens with machine learning and BIM modeling, in the near future the sensors detecting the SLAM boundaries will be able to extrapolate that into real object recognition.
The ability for a computer to recognize an object from different angles and to fully process what that object means in terms of its inherent data is an immensely important step. Imagine in the near future when you go to a job site with the HoloLens: you scan a piece of ductwork, which then cross-references automatically back to the BIM model. Using machine learning, the HoloLens then recognizes that the physical object is actually a return duct, and will automatically provide you with all the information associated with it: what kind of duct it is, how much air it moves, where it runs in the overall building. Or better yet, you aren't even actively looking at the duct, but the HoloLens automatically detects that the duct is not in the right location and brings up the contract documents for you to review as a reference! After it detects the anomaly, you are able to produce a field report recording your findings for review when you return to the office.
Another hypothetical scenario: you are trying to communicate this clash to the general contractor, and you want to compare what you see to what the contract documents state. You both put on a pair of HoloLenses and are instantly able to see what the intent is on the physical site, clearing up any potential confusion or mistakes and thus saving money for the project.
The future is bright for the technology, and the shared experience is one that will dominate the market in the coming years. It takes a lot of effort to develop the tools needed. However, once you develop the tools, your initial investments will pay dividends for years to come.
Michael Shyu began his research into smartphone applications and their interface with physical architecture in 2009 with his Bachelor of Architecture thesis at Syracuse University. He then carried his research to Columbia University GSAPP, where he participated in design studios focused on augmented reality and space planning and developed application interfaces and designs for various project types.
Iffat Mai is the firm-wide design application development manager at Perkins and Will. During her more than 20 years of working in the AEC technology field, Ms. Mai has shown leadership in making strategic technology decisions, developing innovative solutions, and integrating cutting-edge technologies into the AEC design workflow. Her recent focus has been integrating VR, AR, and MR with BIM into professional architectural practice.
Fei Xie started augmented reality research during his internship at Adrian Smith Gordon Gill in 2013. He successfully developed an app via the AR-Media SDK allowing clients to explore different design options with AR technology. In 2014 he began a project which allows people to create their own AR portfolios. Fei earned a bachelor's degree in physics before receiving his master's degree in architecture from Washington University in St. Louis.
Learn more with the full class at AU online: A New Design Paradigm in Mixed Reality — Using HoloLens for Architectural Design.
from WordPress https://homedevise.com/the-ten-secrets-that-you-shouldnt-know-about-free-architectural-plans-online-free-architectural-plans-online/
0 notes
hanaleistudios · 7 years ago
Text
Holiday Holograms #3: Prototyping measurement tool
In today's blog post I'd like to run you through how I made my first HoloLens prototype! It's a pretty simple tool for measuring out different areas using the tap gesture! Shortly after creating the prototype I realised there was a much more advanced version of it included in the HoloToolkit examples, but they took a very different approach to mine, so hopefully you'll learn something from my take on it!
1. Functions of the app
The basic functionality of the app should be:
Have the user tap in one position.
Have the user tap in a second position.
Have a line draw between the two points.
Have the distance between those 2 points appear in the middle of the line.
If the user taps in another position, have the first line disappear until they select another position, then have the line reappear with the new information.
2. Programming for the HoloLens
After setting up our project using the steps from the last entry in the Holiday Holograms series, we'll now need to actually write code for the HoloLens. The main thing we'll need is to detect when the user performs the "tap" gesture.
I used the TappedEvent event to do this: you will first need to create a GestureRecognizer, then assign our delegate to that event. Here is the code to do it: https://pastebin.com/mk2e91Yx
Let's break it down a bit more though:
In the Start function we create our gesture recogniser, "gestureRecogniser", then we assign our measurement-tool handler to the tapped event.
Then in our delegate we do everything to measure out the line. First we check whether we are already setting the points; if not, we set the first position to the position of the cursor, move the line renderer under the floor, and set the bool settingPoints to true.
Then we wait for the second tap, set the positions of the line to the right spots, get the distance and set the distance string to that number.
Then in Update we set the text mesh's text to the distance from before, move its transform to the middle of the line, and make sure that it is looking at the player. A reconstruction of the full script follows below.
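The code screenshots in this post didn't survive, so here's a minimal sketch of how the pieces described above could fit together. It's a reconstruction under my assumptions (field names like settingPoints and the cursor reference are invented; on Unity 2017.2+ the namespace is UnityEngine.XR.WSA.Input):

using UnityEngine;
using UnityEngine.VR.WSA.Input; // UnityEngine.XR.WSA.Input on Unity 2017.2+

public class MeasuringTool : MonoBehaviour
{
    public Transform cursor;        // gaze cursor whose position we sample on each tap
    public LineRenderer line;       // renders the measured segment
    public TextMesh distanceLabel;  // shows the distance at the line's midpoint

    GestureRecognizer gestureRecogniser;
    Vector3 firstPoint;
    bool settingPoints;

    void Start()
    {
        gestureRecogniser = new GestureRecognizer();
        gestureRecogniser.TappedEvent += OnTapped;
        gestureRecogniser.StartCapturingGestures();
    }

    void OnTapped(InteractionSourceKind source, int tapCount, Ray headRay)
    {
        if (!settingPoints)
        {
            // First tap: remember the point and hide the old line under the floor.
            firstPoint = cursor.position;
            line.SetPosition(0, Vector3.down * 100f);
            line.SetPosition(1, Vector3.down * 100f);
            settingPoints = true;
        }
        else
        {
            // Second tap: draw the line and compute the distance.
            Vector3 secondPoint = cursor.position;
            line.SetPosition(0, firstPoint);
            line.SetPosition(1, secondPoint);
            distanceLabel.text = Vector3.Distance(firstPoint, secondPoint).ToString("0.00") + " m";
            settingPoints = false;
        }
    }

    void Update()
    {
        // Keep the label at the midpoint of the line, facing the user.
        distanceLabel.transform.position = (line.GetPosition(0) + line.GetPosition(1)) / 2f;
        distanceLabel.transform.LookAt(Camera.main.transform);
    }
}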
3. Demonstration
youtube
I hope this whole post wasn’t a mess but it totally was lol. Just had to get it out the door! 
0 notes
kinect-hack-videos-blog · 7 years ago
Video
youtube
ARKit vs. HoloLens HoloToolKit https://youtu.be/9X9SivAImYE
0 notes
hastebuds · 7 years ago
Quote
I liked a @YouTube video https://t.co/fgz3qaqXcl ARKit vs. HoloLens HoloToolKit
http://twitter.com/Hastebuds/status/883622744956559361
0 notes
8tak4 · 8 years ago
Text
I attended Tokyo HoloLens Meetup vol.2
Tokyo HoloLens Meetup🕶 (@ Microsoft Japan Co., Ltd - @mskkpr in Minato, Tokyo) https://t.co/SqOTHsLeRf pic.twitter.com/GXTMtmO5Lf
— プらチナ (@Ptu_) March 25, 2017
connpass page
Due to my schedule I could only attend the development sessions, but I'm leaving my rough notes below.
Preamble
HoloMagicians community
HolographicAcademyJP Japanese translation
Original
Key points of HoloLens app development
(HoloLens Evangelist Training Course, Part 1)
Mixed Reality (MR)
The physical world and the virtual world
Understanding things that really exist through the five senses
Understanding artificially created things through the five senses
Fusing the two above -> MR
SAO (Sword Art Online) falls into the MR category
AR: overlaying information (does not interfere with real objects); MR: behaves like real objects
App design
Application types
Choose the technology to suit the approach; understand its characteristics well
AR: not dependent on location
MR: match object sizes to those in the physical world
The usable screen area is wider than in AR (360°)
VR: e.g. HoloTour
High quality, and immersion from focusing on the content
Video and still images
Create a story: who will use it, what will be displayed, and what are the purpose and target
Envisioning: think through usage stories for the HoloLens
Write ideas on Post-its (brainstorming), then sort into categories (training, work support, etc.)...
Usage scenarios based on personas (create the scenes and the actions taken in them)
Visualize it as a video story (e.g. imagine an accident and how to respond with the HoloLens -> distill the necessary features)
Where to place holograms
Sweet spot: within 1-5 m of the user
Design so that holograms are not touched (users should not manipulate them by direct contact; have them drag with a fingertip)
2 m: placement of 2D applications (video, browser)
The Start menu is also placed at roughly this distance
1.25-5 m: suitable for placing holograms
World-locked
Fixed to a location in real space
Display-locked
Fixed to a location on the display
Not recommended, as it causes stress
Body-locked
A location on the display, plus depth
The Start menu moves this way
Motion lag is stressful
Navigate with arrows and the like: makes effective use of the full 360°
Other tips
Keep objects lightweight
HoloLens is a mobile device (it is positioned strictly as a mobile device; don't over-build and make it run slowly)
Reduce polygon counts
Field of view and object size
e.g. to show a whole car, place it about 5 m away
Motion that follows gravitational acceleration -> adds realism
Unity Assets
There was live coding
An asset that reduces Unity-chan's polygon count
Assets for applying holograms
Object recognition assets
https://developer.vuforia.com/
Display content aligned to a texture
Scan an object with a smartphone scanning app -> prepare a dataset in the Target Manager
A demo replacing a real R2D2 with Unity-chan
Getting started with HoloLens development in Unity
HoloToolkit-Unity
Read the README, try the demos
Import -> apply the settings -> place the HoloLensCamera -> Build
e.g. EnglishBird
Uses Gesture Input, Voice Input, TextToSpeech, Spatial Mapping, and so on from HoloToolkit
If there are many off-screen objects, it is better not to use the DirectionIndicator
e.g. HoloGiraffe
Displaying a large object -> reveal it like a 3D printer to guide the gaze
Guide the player's gaze (keep in mind that you want them immersed in the content)
Think about it after you have built something
Tips
FPS is low in Debug mode. Check permissions and Asset Store plugin compatibility; debugging via TextToSpeech is handy
VR Samples are useful (not only for HoloLens)
What is UWP?
A common API set shared across device families
Sharing Deep Dive
Multiple people experiencing the same hologram
In the same space / in different spaces
You can learn this in Holograms 240 (tutorial)
WorldAnchor
Sharing a coordinate system: multiple users pin down a common spatial anchor
Normally each device determines its own coordinates at app start (the HoloLens position is (0,0,0); the z-axis points forward and the y-axis up)
localPosition, localRotation
Bidirectional communication: HoloToolkit-Unity
Load the previous coordinates when the app resumes: WorldAnchorStore (key: value)
WorldAnchorTransferBatch: serializes spatial-anchor information into a byte array and sends it to all users over the network (see the sketch after these notes)
Its internals are a mystery
Export fails when the data size is too small (note it can reach megabyte order); implement retry logic for saving and loading
The official MS demos seem to be quite elaborately built?
Bidirectional communication can be implemented with MagicOnion
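A minimal sketch of the WorldAnchorTransferBatch export/import flow mentioned above, assuming the Unity 5.6-era UnityEngine.VR.WSA.Sharing API. How the bytes travel between devices is up to your networking layer; SendToPeers/OnBytesReceived and the "shared-anchor" id are placeholders of mine.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.VR.WSA;          // WorldAnchor (UnityEngine.XR.WSA on 2017.2+)
using UnityEngine.VR.WSA.Sharing;  // WorldAnchorTransferBatch

public class AnchorSharing : MonoBehaviour
{
    List<byte> exportedData = new List<byte>();

    // Sender: serialize the anchor on this GameObject and stream it out.
    public void ExportAnchor()
    {
        var anchor = GetComponent<WorldAnchor>();
        var batch = new WorldAnchorTransferBatch();
        batch.AddWorldAnchor("shared-anchor", anchor);
        WorldAnchorTransferBatch.ExportAsync(batch, OnExportDataAvailable, OnExportComplete);
    }

    void OnExportDataAvailable(byte[] data)
    {
        exportedData.AddRange(data); // the serialized anchor arrives in chunks
    }

    void OnExportComplete(SerializationCompletionReason reason)
    {
        if (reason == SerializationCompletionReason.Succeeded)
            SendToPeers(exportedData.ToArray()); // your networking layer (placeholder)
        else
            ExportAnchor(); // as the notes say: retry, small exports can fail
    }

    // Receiver: rebuild the batch and lock the same anchor into local space.
    public void OnBytesReceived(byte[] bytes)
    {
        WorldAnchorTransferBatch.ImportAsync(bytes, OnImportComplete);
    }

    void OnImportComplete(SerializationCompletionReason reason, WorldAnchorTransferBatch batch)
    {
        if (reason == SerializationCompletionReason.Succeeded)
            batch.LockObject("shared-anchor", gameObject);
        else
            Debug.LogWarning("Import failed; ask the sender to re-export and retry.");
    }

    void SendToPeers(byte[] payload) { /* e.g. sockets or a sharing service */ }
}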
0 notes
aphaena · 8 years ago
Text
List of Open-Source Hololens code in C++/C#/DirectX
List of Open-Source Hololens code in C++/C#/DirectX
https://forums.hololens.com/discussion/2578/list-of-open-source-hololens-code-in-c-c-directx   github.com/Microsoft HoloToolkit https://github.com/Microsoft/HoloToolkit Holographic Academy https://github.com/Microsoft/HolographicAcademy FaceTracking https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/HolographicFaceTracking MixedRealityCapture…
View On WordPress
0 notes
repwinpril9y0a1 · 8 years ago
Text
Building the Terminator Vision HUD in HoloLens
James Cameron’s 1984 film The Terminator introduced many science-fiction idioms we now take for granted. One of the most persistent is the thermal head-up-display (HUD) shot that allows the audience to see the world through the eyes of Arnold Schwarzenegger’s T-800 character. In design circles, it is one of the classic user interfaces that fans frequently try to recreate both as a learning tool and as a challenge.
In today’s post, you’ll learn how to recreate this iconic interface for the HoloLens. To sweeten the task, you’ll also hook up this interface to Microsoft Cognitive Services to perform an analysis of objects in the room, face detection and even some Optical Character Recognition (OCR).
While on the surface this exercise is intended to just be fun, there is a deeper level. Today, most computing is done in 2D. We sit fixed at our desks and stare at rectangular screens. All of our input devices, our furniture and even our office spaces are designed to help us work around 2D computing. All of this will change over the next decade.
Modern computing will eventually be overtaken by both 3D interfaces and 1-dimensional interfaces. 3D interfaces are the next generation of mixed reality devices that we are all so excited about. 1D interfaces, driven by advances in AI research, are overtaking our standard forms of computing more quietly, but just as certainly.
By speaking or looking in a certain direction, we provide inputs to AI systems in the cloud that can quickly analyze our world and provide useful information. When 1D and 3D are combined—as you are going to do in this walkthrough—a profoundly new type of experience is created that may one day lead to virtual personal assistants that will help us to navigate our world and our lives.
The first step happens to be figuring out how to recreate the T-800 thermal HUD display.
Recreating the UI
Start by creating a new 3D project in Unity and call it “Terminator Vision.” Create a new scene called “main.” Add the HoloToolkit unity package to your app. You can download the package from the HoloToolkit project’s GitHub repository. This guide uses HoloToolkit-Unity-v1.5.5.0.unitypackage. In the Unity IDE, select the Assets tab. Then click on Import Package -> Custom Package and find the download location of the HoloTookit to import it into the scene. In the menu for your Unity IDE, click on HoloToolkit -> Configure to set up your project to target HoloLens.
Once your project and your scene are properly configured, the first thing to add is a Canvas object to the scene to use as a surface to write on. In the hierarchy window, right-click on your “main” scene and select GameObject -> UI -> Canvas from the context menu to add it. Name your Canvas “HUD.”
The HUD also needs some text, so the next step is to add a few text regions to the HUD. In the hierarchy view, right-click on your HUD and add four Text objects by selecting UI -> Text. Call them BottomCenterText, MiddleRightText, MiddleLeftText and MiddleCenterText. Add some text to help you match the UI to the UI from the Terminator movie. For the MiddleRightText add:
SCAN MODE 43984
SIZE ASSESSMENT
ASSESSMENT COMPLETE
FIT PROBABILITY 0.99
RESET TO ACQUISITION
MODE SPEECH LEVEL 78
PRIORITY OVERRIDE
DEFENSE SYSTEMS SET
ACTIVE STATUS
LEVEL 2347923 MAX
For the MiddleLeftText object, add:
ANALYSIS:
***************
234654 453 38
654334 450 16
245261 856 26
453665 766 46
382856 863 09
356878 544 04
664217 985 89
For the BottomCenterText, just write “MATCH.” In the scene panel, adjust these Text objects around your HUD until they match with screenshots from the Terminator movie. MiddleCenterText can be left blank for now. You’re going to use it later for surfacing debug messages.
Getting the fonts and colors right is also important, and there are lots of online discussions around identifying exactly what these are. Most of the text in the HUD is probably Helvetica. By default, Unity on Windows assigns Arial, which is close enough. Set the font color to an off-white (236, 236, 236, 255), the font style to bold, and the font size to 20.
The font used for the "MATCH" caption at the bottom of the HUD is apparently known as Heinlein. It was also used for the movie titles. Since this font isn't easy to find, you can use another font created to emulate the Heinlein font called Modern Vision, which you can find by searching for it on the internet. To use this font in your project, create a new folder called Fonts under your Assets folder. Download the custom font you want to use and drag the TTF file into your Fonts folder. Once this is done, you can simply drag your custom font into the Font field of BottomCenterText or click on the target symbol next to the value field for the font to bring up a selection window. Also, increase the font size for "MATCH" to 32 since the text is a bit bigger than other text in the HUD.
In the screenshots, the word “MATCH” has a white square placed to its right. To emulate this square, create a new InputField (UI -> Input Field) under the HUD object and name it “Square.” Remove the default text, resize it and position it until it matches the screenshots.
Locking the HUD into place
By default, the Canvas will be locked to your world space. You want it to be locked to the screen, however, as it is in the Terminator movies.
To configure a camera-locked view, select the Canvas and examine its properties in the Inspector window. Go to the Render Mode field of your HUD Canvas and select Screen Space – Camera in the drop down menu. Next, drag the Main Camera from your hierarchy view into the Render Camera field of the Canvas. This tells the canvas which camera perspective it is locked to.
The Plane Distance for your HUD is initially set to one meter. This is how far away the HUD will be from your face in the Terminator Vision mixed reality app. Because HoloLens is stereoscopic, adjusting the view for each eye, this is actually a bit close for comfort. The current focal distance for HoloLens is two meters, so we should set the plane distance at least that far away.
For convenience, set Plane Distance to 100. All of the content associated with your HUD object will automatically scale so it fills up the same amount of your visual field.
It should be noted that locking visual content to the camera, known as head-locking, is generally discouraged in mixed reality design as it can cause visual discomfort. Instead, using body-locked content that tags along with the player is the recommended way to create mixed reality HUDs and menus. For the sake of verisimilitude, however, you're going to break that rule this time.
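For contrast, here is a minimal sketch of the body-locked "tag-along" alternative that guidance recommends (HoloToolkit ships a more complete SimpleTagalong script; this stripped-down version is ours, and the easing constants are arbitrary):

using UnityEngine;

// Body-locked content: stays at a comfortable distance in front of the user
// and eases toward the gaze direction instead of hard-locking to the head.
public class Tagalong : MonoBehaviour
{
    public float distance = 2f; // comfortable focal distance for HoloLens, in meters
    public float speed = 3f;    // how quickly the content catches up with the gaze

    void Update()
    {
        Transform cam = Camera.main.transform;
        Vector3 target = cam.position + cam.forward * distance;

        // Smoothly drift toward the point in front of the user...
        transform.position = Vector3.Lerp(transform.position, target, speed * Time.deltaTime);

        // ...while always facing back toward the camera.
        transform.rotation = Quaternion.LookRotation(transform.position - cam.position);
    }
}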
La vie en rose
Terminator view is supposed to use heat vision. It places a red hue on everything in the scene. In order to create this effect, you are going to play a bit with shaders.
A shader is a highly optimized algorithm that you apply to an image to change it. If you’ve ever worked with any sort of photo-imaging software, then you are already familiar with shader effects like blurring. To create the heat vision colorization effect, you would configure a shader that adds a transparent red distortion to your scene.
If this were a virtual reality experience, in which the world is occluded, you would apply your shader to the camera using the RenderWithShader method. This method takes a shader and applies it to any game object you look at. In a holographic experience, however, this wouldn’t work since you also want to apply the distortion to real-life objects.
In the Unity toolbar, select Assets -> Create -> Material to make a new material object. In the Shader field, click on the drop-down menu and find HoloToolkit -> Lambertian Configurable Transparent. The shaders that come with the HoloToolkit are typically much more performant in HoloLens apps and should be preferred. The Lambertian Configurable Transparent shader will let you select a red to apply; (200, 43, 38) seems to work well, but you should choose the color values that look good to you.
Add a new plane (3D Object -> Plane) to your HUD object and call it “Thermal.” Then drag your new material with the configured Lambertian shader onto the Thermal plane. Set the Rotation of your plane to 270 and set the Scale to 100, 1, 100 so it fills up the view.
Finally, because you don’t want the red colorization to affect your text, set the Z position of each of your Text objects to -10. This will pull the text out in front of your HUD a little so it stands out from the heat vision effect.
Deploy your project to a device or the emulator to see how your Terminator Vision is looking.
Making the text dynamic
To hook up the HUD to Cognitive Services, first orchestrate a way to make the text dynamic. Select your HUD object. Then, in the Inspector window, click on Add Component -> New Script and name your script “Hud.”
Double-click Hud.cs to edit your script in Visual Studio. At the top of your script, create four public fields that will hold references to the Text objects in your project. Save your changes.
public Text InfoPanel;
public Text AnalysisPanel;
public Text ThreatAssessmentPanel;
public Text DiagnosticPanel;
If you look at the Hud component in the Inspector, you should now see four new fields that you can set. Drag the HUD Text objects into these fields, like so.
In the Start method, add some default text so you know the dynamic text is working.
void Start()
{
    AnalysisPanel.text = "ANALYSIS:\n**************\ntest\ntest\ntest";
    ThreatAssessmentPanel.text = "SCAN MODE XXXXX\nINITIALIZE";
    InfoPanel.text = "CONNECTING";
    //...
}
When you deploy and run the Terminator Vision app, the default text should be overwritten with the new text you assign in Start. Now set up a System.Threading.Timer to determine how often you will scan the room for analysis. The Timer class measures time in milliseconds. The first parameter you pass to it is a callback method. In the code shown below, you will call the Tick method every 30 seconds. The Tick method, in turn, will call a new method named AnalyzeScene, which will be responsible for taking a photo of whatever the Terminator sees in front of him using the built-in color camera, known as the locatable camera, and sending it to Cognitive Services for further analysis.
System.Threading.Timer _timer;

void Start()
{
    //...
    int secondsInterval = 30;
    _timer = new System.Threading.Timer(Tick, null, 0, secondsInterval * 1000);
}

private void Tick(object state)
{
    AnalyzeScene();
}
Unity accesses the locatable camera in the same way it would normally access any webcam. This involves a series of calls to create the photo capture instance, configure it, take a picture and save it to the device. Along the way, you can also add Terminator-style messages to send to the HUD in order to indicate progress.
void AnalyzeScene()
{
    InfoPanel.text = "CALCULATION PENDING";
    PhotoCapture.CreateAsync(false, OnPhotoCaptureCreated);
}

PhotoCapture _photoCaptureObject = null;

void OnPhotoCaptureCreated(PhotoCapture captureObject)
{
    _photoCaptureObject = captureObject;

    Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();

    CameraParameters c = new CameraParameters();
    c.hologramOpacity = 0.0f;
    c.cameraResolutionWidth = cameraResolution.width;
    c.cameraResolutionHeight = cameraResolution.height;
    c.pixelFormat = CapturePixelFormat.BGRA32;

    captureObject.StartPhotoModeAsync(c, OnPhotoModeStarted);
}

private void OnPhotoModeStarted(PhotoCapture.PhotoCaptureResult result)
{
    if (result.success)
    {
        string filename = string.Format(@"terminator_analysis.jpg");
        string filePath = System.IO.Path.Combine(Application.persistentDataPath, filename);
        _photoCaptureObject.TakePhotoAsync(filePath, PhotoCaptureFileOutputFormat.JPG, OnCapturedPhotoToDisk);
    }
    else
    {
        DiagnosticPanel.text = "DIAGNOSTIC\n**************\n\nUnable to start photo mode.";
        InfoPanel.text = "ABORT";
    }
}
If the photo is successfully taken and saved, you will grab it, serialize it as an array of bytes and send it to Cognitive Services to retrieve an array of tags that describe the room as well. Finally, you will dispose of the photo capture object.
void OnCapturedPhotoToDisk(PhotoCapture.PhotoCaptureResult result)
{
    if (result.success)
    {
        string filename = string.Format(@"terminator_analysis.jpg");
        string filePath = System.IO.Path.Combine(Application.persistentDataPath, filename);
        byte[] image = File.ReadAllBytes(filePath);
        GetTagsAndFaces(image);
        ReadWords(image);
    }
    else
    {
        DiagnosticPanel.text = "DIAGNOSTIC\n**************\n\nFailed to save photo to disk.";
        InfoPanel.text = "ABORT";
    }
    _photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}

void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result)
{
    _photoCaptureObject.Dispose();
    _photoCaptureObject = null;
}
In order to make a REST call, you will need to use the Unity WWW object. You also need to wrap the call in a Unity coroutine in order to make the call non-blocking. You can also get a free Subscription Key to use the Microsoft Cognitive Services APIs just by signing up.
string _subscriptionKey = "b1e514eYourKeyGoesHere718c5";
string _computerVisionEndpoint = "http://ift.tt/2lVcrgm";
IEnumerator coroutine; // declared here; the original snippet uses it without a declaration

public void GetTagsAndFaces(byte[] image)
{
    coroutine = RunComputerVision(image);
    StartCoroutine(coroutine);
}

IEnumerator RunComputerVision(byte[] image)
{
    var headers = new Dictionary<string, string>() {
        { "Ocp-Apim-Subscription-Key", _subscriptionKey },
        { "Content-Type", "application/octet-stream" }
    };

    WWW www = new WWW(_computerVisionEndpoint, image, headers);
    yield return www;

    List<string> tags = new List<string>();
    var jsonResults = www.text;
    var myObject = JsonUtility.FromJson<AnalysisResult>(jsonResults);
    foreach (var tag in myObject.tags)
    {
        tags.Add(tag.name);
    }
    AnalysisPanel.text = "ANALYSIS:\n***************\n\n" + string.Join("\n", tags.ToArray());

    List<string> faces = new List<string>();
    foreach (var face in myObject.faces)
    {
        faces.Add(string.Format("{0} scanned: age {1}.", face.gender, face.age));
    }
    if (faces.Count > 0)
    {
        InfoPanel.text = "MATCH";
    }
    else
    {
        InfoPanel.text = "ACTIVE SPATIAL MAPPING";
    }
    ThreatAssessmentPanel.text = "SCAN MODE 43984\nTHREAT ASSESSMENT\n\n" + string.Join("\n", faces.ToArray());
}
The Computer Vision tagging feature is a way to detect objects in a photo. It can also be used in an application like this one to do on-the-fly object recognition.
When the JSON data is returned from the call to cognitive services, you can use the JsonUtility to deserialize the data into an object called AnalysisResult, shown below.
public class AnalysisResult
{
    public Tag[] tags;
    public Face[] faces;
}

[Serializable]
public class Tag
{
    public double confidence;
    public string hint;
    public string name;
}

[Serializable]
public class Face
{
    public int age;
    public FaceRectangle facerectangle;
    public string gender;
}

[Serializable]
public class FaceRectangle
{
    public int height;
    public int left;
    public int top;
    public int width;
}
One thing to be aware of when you use JsonUtility is that it only works with fields and not with properties. If your object classes have getters and setters, JsonUtility won’t know what to do with them.
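For instance, in this hypothetical variation of the Tag class, JsonUtility would populate the field but silently skip the property:

[Serializable]
public class Tag
{
    public string name;                    // plain field: deserialized by JsonUtility
    public double Confidence { get; set; } // property: ignored, stays at its default
}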
When you run the app now, it should update the HUD every 30 seconds with information about your room.
To make the app even more functional, you can add OCR capabilities.
string _ocrEndpoint = "http://ift.tt/2muK6ka";

public void ReadWords(byte[] image)
{
    coroutine = Read(image);
    StartCoroutine(coroutine);
}

IEnumerator Read(byte[] image)
{
    var headers = new Dictionary<string, string>() {
        { "Ocp-Apim-Subscription-Key", _subscriptionKey },
        { "Content-Type", "application/octet-stream" }
    };

    WWW www = new WWW(_ocrEndpoint, image, headers);
    yield return www;

    List<string> words = new List<string>();
    var jsonResults = www.text;
    var myObject = JsonUtility.FromJson<OcrResults>(jsonResults);
    foreach (var region in myObject.regions)
        foreach (var line in region.lines)
            foreach (var word in line.words)
            {
                words.Add(word.text);
            }

    string textToRead = string.Join(" ", words.ToArray());
    if (myObject.language != "unk")
    {
        DiagnosticPanel.text = "(language=" + myObject.language + ")\n" + textToRead;
    }
}
This service will pick up any words it finds and redisplay them for the Terminator.
It will also attempt to determine the original language of any words that it finds, which in turn can be used for further analysis.
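One gap in the walkthrough: it never shows the OcrResults classes that JsonUtility deserializes into. Below is a minimal sketch that matches the fields the Read coroutine accesses; the shape follows the Computer Vision OCR JSON, but the class names (Region, Line, Word) are our assumption.

[Serializable]
public class OcrResults
{
    public string language;  // "unk" when the language could not be determined
    public Region[] regions; // blocks of recognized text
}

[Serializable]
public class Region
{
    public Line[] lines;
}

[Serializable]
public class Line
{
    public Word[] words;
}

[Serializable]
public class Word
{
    public string text;
}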
Conclusion
In this post, you discovered how to recreate a cool visual effect from an iconic sci-fi movie. You also found out how to call Microsoft Cognitive Services from Unity in order to make a richer recreation.
You can extend the capabilities of the Terminator Vision app even further by taking the text you find through OCR and calling Cognitive Services to translate it into another language using the Translator API. You could then use the Bing Speech API to read the text back to you in both the original language and the translated language. This, however, goes beyond the original goal of recreating the Terminator Vision scenario from the 1984 James Cameron film and starts sliding into the world of personal assistants, which is another topic for another time.
View the source code for Terminator Vision on Github here.
from DIYS http://ift.tt/2muKNtV
0 notes
elbrunoc · 7 years ago
Text
#Hololens – Getting Started with #MixedRealityToolkit #MRToolkit
Hi!
So, HoloToolkit is gone (until you see the code in the new toolkit) and now it's time to start using the new Mixed Reality Toolkit. There are a couple of ways to do this; IMHO the best one is to import a custom package into Unity3D with all the contents of the Mixed Reality Toolkit.
I used to create and maintain my own custom packages for HoloToolkit; however, I'll follow the guidelines and…
View On WordPress
0 notes
repwincoml4a0a5 · 8 years ago
Text
Building the Terminator Vision HUD in HoloLens
James Cameron’s 1984 film The Terminator introduced many science-fiction idioms we now take for granted. One of the most persistent is the thermal head-up-display (HUD) shot that allows the audience to see the world through the eyes of Arnold Schwarzenegger’s T-800 character. In design circles, it is one of the classic user interfaces that fans frequently try to recreate both as a learning tool and as a challenge.
In today’s post, you’ll learn how to recreate this iconic interface for the HoloLens. To sweeten the task, you’ll also hook up this interface to Microsoft Cognitive Services to perform an analysis of objects in the room, face detection and even some Optical Character Recognition (OCR).
While on the surface this exercise is intended to just be fun, there is a deeper level. Today, most computing is done in 2D. We sit fixed at our desks and stare at rectangular screens. All of our input devices, our furniture and even our office spaces are designed to help us work around 2D computing. All of this will change over the next decade.
Modern computing will eventually be overtaken by both 3D interfaces and 1-dimensional interfaces. 3D interfaces are the next generation of mixed reality devices that we are all so excited about. 1D interfaces, driven by advances in AI research, are overtaking our standard forms of computing more quietly, but just as certainly.
By speaking or looking in a certain direction, we provide inputs to AI systems in the cloud that can quickly analyze our world and provide useful information. When 1D and 3D are combined—as you are going to do in this walkthrough—a profoundly new type of experience is created that may one day lead to virtual personal assistants that will help us to navigate our world and our lives.
The first step happens to be figuring out how to recreate the T-800 thermal HUD display.
Recreating the UI
Start by creating a new 3D project in Unity and call it “Terminator Vision.” Create a new scene called “main.” Add the HoloToolkit unity package to your app. You can download the package from the HoloToolkit project’s GitHub repository. This guide uses HoloToolkit-Unity-v1.5.5.0.unitypackage. In the Unity IDE, select the Assets tab. Then click on Import Package -> Custom Package and find the download location of the HoloTookit to import it into the scene. In the menu for your Unity IDE, click on HoloToolkit -> Configure to set up your project to target HoloLens.
Once your project and your scene are properly configured, the first thing to add is a Canvas object to the scene to use as a surface to write on. In the hierarchy window, right-click on your “main” scene and select GameObject -> UI -> Canvas from the context menu to add it. Name your Canvas “HUD.”
The HUD also needs some text, so the next step is to add a few text regions to the HUD. In the hierarchy view, right-click on your HUD and add four Text objects by selecting UI -> Text. Call them BottomCenterText, MiddleRightText, MiddleLeftText and MiddleCenterText. Add some text to help you match the UI to the UI from the Terminator movie. For the MiddleRightText add:
SCAN MODE 43984
SIZE ASSESSMENT
ASSESSMENT COMPLETE
FIT PROBABILITY 0.99
RESET TO ACQUISITION
MODE SPEECH LEVEL 78
PRIORITY OVERRIDE
DEFENSE SYSTEMS SET
ACTIVE STATUS
LEVEL 2347923 MAX
For the MiddleLeftText object, add:
ANALYSIS:
***************
234654 453 38
654334 450 16
245261 856 26
453665 766 46
382856 863 09
356878 544 04
664217 985 89
For the BottomCenterText, just write “MATCH.” In the scene panel, adjust these Text objects around your HUD until they match with screenshots from the Terminator movie. MiddleCenterText can be left blank for now. You’re going to use it later for surfacing debug messages.
Getting the fonts and colors right are also important – and there are lots of online discussions around identifying exactly what these are. Most of the text in the HUD is probably Helvetica. By default, Unity in Windows assigns Arial, which is close enough. Set the font color to an off-white (236, 236, 236, 255), font-style to bold, and the font size to 20.
The font used for the “MATCH” caption at the bottom of the HUD is apparently known as Heinlein. It was also used for the movie titles. Since this font isn’t easy to find, you can use another font created to emulate the Heinlein font called Modern Vision, which you can find by searching for it on internet. To use this font in your project, create a new folder called Fonts under your Assets folder. Download the custom font you want to use and drag the TTF file into your Fonts folder. Once this is done, you can simply drag your custom font into the Font field of BottomCenterText or click on the target symbol next to the value field for the font to bring up a selection window. Also, increase the font size for “MATCH” to 32 since the text is a bit bigger than other text in the HUD.
In the screenshots, the word “MATCH” has a white square placed to its right. To emulate this square, create a new InputField (UI -> Input Field) under the HUD object and name it “Square.” Remove the default text, resize it and position it until it matches the screenshots.
Locking the HUD into place
By default, the Canvas will be locked to your world space. You want it to be locked to the screen, however, as it is in the Terminator movies.
To configure a camera-locked view, select the Canvas and examine its properties in the Inspector window. Go to the Render Mode field of your HUD Canvas and select Screen Space – Camera in the drop down menu. Next, drag the Main Camera from your hierarchy view into the Render Camera field of the Canvas. This tells the canvas which camera perspective it is locked to.
The Plane Distance for your HUD is initially set to one meter. This is how far away the HUD will be from your face in the Terminator Vision mixed reality app. Because HoloLens is stereoscopic, adjusting the view for each eye, this is actually a bit close for comfort. The current focal distance for HoloLens is two meters, so we should set the plane distance at least that far away.
For convenience, set Plane Distance to 100. All of the content associated with your HUD object will automatically scale so it fills up the same amount of your visual field.
It should be noted that locking visual content to the camera, known as head-locking, is generally discouraged in mixed reality design as it can cause visual comfort. Instead, using body-locked content that tags along with the player is the recommended way to create mixed reality HUDs and menus. For the sake of verisimilitude, however, you’re going to break that rule this time.
La vie en rose
Terminator view is supposed to use heat vision. It places a red hue on everything in the scene. In order to create this effect, you are going to play a bit with shaders.
A shader is a highly optimized algorithm that you apply to an image to change it. If you’ve ever worked with any sort of photo-imaging software, then you are already familiar with shader effects like blurring. To create the heat vision colorization effect, you would configure a shader that adds a transparent red distortion to your scene.
If this were a virtual reality experience, in which the world is occluded, you would apply your shader to the camera using the RenderWithShader method. This method takes a shader and applies it to any game object you look at. In a holographic experience, however, this wouldn’t work since you also want to apply the distortion to real-life objects.
In the Unity toolbar, select Assets -> Create -> Material to make a new material object. In the Shader field, click on the drop-down menu and find HoloToolkit -> Lambertian Configurable Transparent. The shaders that come with the HoloToolkit are typically much more performant in HoloLens apps and should be preferred. The Lambertian Configurable Transparent shader will let you select a red to apply; (200, 43, 38) seems to work well, but you should choose the color values that look good to you.
Add a new plane (3D Object -> Plane) to your HUD object and call it “Thermal.” Then drag your new material with the configured Lambertian shader onto the Thermal plane. Set the Rotation of your plane to 270 and set the Scale to 100, 1, 100 so it fills up the view.
Finally, because you don’t want the red colorization to affect your text, set the Z position of each of your Text objects to -10. This will pull the text out in front of your HUD a little so it stands out from the heat vision effect.
Deploy your project to a device or the emulator to see how your Terminator Vision is looking.
Making the text dynamic
To hook up the HUD to Cognitive Services, first orchestrate a way to make the text dynamic. Select your HUD object. Then, in the Inspector window, click on Add Component -> New Script and name your script “Hud.”
Double-click Hud.cs to edit your script in Visual Studio. At the top of your script, add a using directive for UnityEngine.UI, then create four public fields that will hold references to the Text objects in your project. Save your changes.
public Text InfoPanel;
public Text AnalysisPanel;
public Text ThreatAssessmentPanel;
public Text DiagnosticPanel;
If you look at the Hud component in the Inspector, you should now see four new fields that you can set. Drag the HUD Text objects into these fields, like so.
In the Start method, add some default text so you know the dynamic text is working.
void Start()
{
    AnalysisPanel.text = "ANALYSIS:\n**************\ntest\ntest\ntest";
    ThreatAssessmentPanel.text = "SCAN MODE XXXXX\nINITIALIZE";
    InfoPanel.text = "CONNECTING";
    //...
}
When you deploy and run the Terminator Vision app, the default text should be overwritten with the new text you assign in Start. Now set up a System.Threading.Timer to determine how often you will scan the room for analysis. The Timer class measures time in milliseconds. The first parameter you pass to it is a callback method. In the code shown below, you will call the Tick method every 30 seconds. The Tick method, in turn, will call a new method named AnalyzeScene, which will be responsible for taking a photo of whatever the Terminator sees in front of him using the built-in color camera, known as the locatable camera, and sending it to Cognitive Services for further analysis.
System.Threading.Timer _timer;

void Start()
{
    //...
    int secondsInterval = 30;
    _timer = new System.Threading.Timer(Tick, null, 0, secondsInterval * 1000);
}

private void Tick(object state)
{
    AnalyzeScene();
}
Unity accesses the locatable camera in the same way it would normally access any webcam. This involves a series of calls to create the photo capture instance, configure it, take a picture and save it to the device. Along the way, you can also add Terminator-style messages to send to the HUD in order to indicate progress.
// Requires: using System.Linq; and the WebCam namespace
// (UnityEngine.VR.WSA.WebCam in Unity 2017.1; renamed to UnityEngine.XR.WSA.WebCam in 2017.2).
void AnalyzeScene()
{
    InfoPanel.text = "CALCULATION PENDING";
    PhotoCapture.CreateAsync(false, OnPhotoCaptureCreated);
}

PhotoCapture _photoCaptureObject = null;

void OnPhotoCaptureCreated(PhotoCapture captureObject)
{
    _photoCaptureObject = captureObject;

    // Pick the highest resolution the locatable camera supports.
    Resolution cameraResolution = PhotoCapture.SupportedResolutions.OrderByDescending((res) => res.width * res.height).First();

    CameraParameters c = new CameraParameters();
    c.hologramOpacity = 0.0f;
    c.cameraResolutionWidth = cameraResolution.width;
    c.cameraResolutionHeight = cameraResolution.height;
    c.pixelFormat = CapturePixelFormat.BGRA32;

    captureObject.StartPhotoModeAsync(c, OnPhotoModeStarted);
}

private void OnPhotoModeStarted(PhotoCapture.PhotoCaptureResult result)
{
    if (result.success)
    {
        string filename = string.Format(@"terminator_analysis.jpg");
        string filePath = System.IO.Path.Combine(Application.persistentDataPath, filename);
        _photoCaptureObject.TakePhotoAsync(filePath, PhotoCaptureFileOutputFormat.JPG, OnCapturedPhotoToDisk);
    }
    else
    {
        DiagnosticPanel.text = "DIAGNOSTIC\n**************\n\nUnable to start photo mode.";
        InfoPanel.text = "ABORT";
    }
}
If the photo is successfully taken and saved, you will read it back, serialize it as an array of bytes and send it to Cognitive Services to retrieve an array of tags describing the scene, along with any faces that were detected. Finally, you will dispose of the photo capture object.
// Requires: using System.IO; for File.ReadAllBytes.
void OnCapturedPhotoToDisk(PhotoCapture.PhotoCaptureResult result)
{
    if (result.success)
    {
        string filename = string.Format(@"terminator_analysis.jpg");
        string filePath = System.IO.Path.Combine(Application.persistentDataPath, filename);

        byte[] image = File.ReadAllBytes(filePath);
        GetTagsAndFaces(image);
        ReadWords(image);
    }
    else
    {
        DiagnosticPanel.text = "DIAGNOSTIC\n**************\n\nFailed to save photo to disk.";
        InfoPanel.text = "ABORT";
    }
    _photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
}

void OnStoppedPhotoMode(PhotoCapture.PhotoCaptureResult result)
{
    _photoCaptureObject.Dispose();
    _photoCaptureObject = null;
}
In order to make a REST call, you will need to use the Unity WWW object, wrapped in a Unity coroutine so the call is non-blocking. You can get a free Subscription Key to use the Microsoft Cognitive Services APIs just by signing up.
string _subscriptionKey = "b1e514eYourKeyGoesHere718c5";
// The endpoint URL was mangled by a link shortener in this repost; the Computer Vision
// v1.0 analyze endpoint has this general form (your subscription's region may differ).
string _computerVisionEndpoint = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Tags,Faces";

IEnumerator coroutine;

public void GetTagsAndFaces(byte[] image)
{
    coroutine = RunComputerVision(image);
    StartCoroutine(coroutine);
}

IEnumerator RunComputerVision(byte[] image)
{
    var headers = new Dictionary<string, string>() {
        { "Ocp-Apim-Subscription-Key", _subscriptionKey },
        { "Content-Type", "application/octet-stream" }
    };

    WWW www = new WWW(_computerVisionEndpoint, image, headers);
    yield return www;

    List<string> tags = new List<string>();
    var jsonResults = www.text;
    var myObject = JsonUtility.FromJson<AnalysisResult>(jsonResults);
    foreach (var tag in myObject.tags)
    {
        tags.Add(tag.name);
    }
    AnalysisPanel.text = "ANALYSIS:\n***************\n\n" + string.Join("\n", tags.ToArray());

    List<string> faces = new List<string>();
    foreach (var face in myObject.faces)
    {
        faces.Add(string.Format("{0} scanned: age {1}.", face.gender, face.age));
    }
    if (faces.Count > 0)
    {
        InfoPanel.text = "MATCH";
    }
    else
    {
        InfoPanel.text = "ACTIVE SPATIAL MAPPING";
    }
    ThreatAssessmentPanel.text = "SCAN MODE 43984\nTHREAT ASSESSMENT\n\n" + string.Join("\n", faces.ToArray());
}
The Computer Vision tagging feature is a way to detect objects in a photo. It can also be used in an application like this one to do on-the-fly object recognition.
When the JSON data is returned from the call to cognitive services, you can use the JsonUtility to deserialize the data into an object called AnalysisResult, shown below.
public class AnalysisResult
{
    public Tag[] tags;
    public Face[] faces;
}

[Serializable]
public class Tag
{
    public double confidence;
    public string hint;
    public string name;
}

[Serializable]
public class Face
{
    public int age;
    // JsonUtility matches field names case-sensitively, so this must be spelled
    // exactly as it appears in the JSON payload: "faceRectangle".
    public FaceRectangle faceRectangle;
    public string gender;
}

[Serializable]
public class FaceRectangle
{
    public int height;
    public int left;
    public int top;
    public int width;
}
One thing to be aware of when you use JsonUtility is that it only works with fields and not with properties. If your object classes have getters and setters, JsonUtility won’t know what to do with them.
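To see the pitfall concretely, here is a tiny hypothetical example: the public field deserializes as expected, while the property is silently ignored.

using System;
using UnityEngine;

[Serializable]
public class TagExample
{
    public string name;                     // public field: JsonUtility fills this in
    public double Confidence { get; set; }  // property: JsonUtility silently skips it
}

// var t = JsonUtility.FromJson<TagExample>("{\"name\":\"person\",\"Confidence\":0.98}");
// t.name == "person", but t.Confidence remains 0 because JsonUtility only maps fields.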
When you run the app now, it should update the HUD every 30 seconds with information about your room.
To make the app even more functional, you can add OCR capabilities.
// As above, the endpoint URL was mangled in this repost; the v1.0 OCR
// endpoint has this general form (your subscription's region may differ).
string _ocrEndpoint = "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr";

public void ReadWords(byte[] image)
{
    coroutine = Read(image);
    StartCoroutine(coroutine);
}

IEnumerator Read(byte[] image)
{
    var headers = new Dictionary<string, string>() {
        { "Ocp-Apim-Subscription-Key", _subscriptionKey },
        { "Content-Type", "application/octet-stream" }
    };

    WWW www = new WWW(_ocrEndpoint, image, headers);
    yield return www;

    List<string> words = new List<string>();
    var jsonResults = www.text;
    var myObject = JsonUtility.FromJson<OcrResults>(jsonResults);
    foreach (var region in myObject.regions)
        foreach (var line in region.lines)
            foreach (var word in line.words)
            {
                words.Add(word.text);
            }

    string textToRead = string.Join(" ", words.ToArray());
    if (myObject.language != "unk")
    {
        DiagnosticPanel.text = "(language=" + myObject.language + ")\n" + textToRead;
    }
}
This service will pick up any words it finds and redisplay them for the Terminator.
It will also attempt to determine the original language of any words that it finds, which in turn can be used for further analysis.
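The post never shows the OcrResults classes that JsonUtility deserializes into. Here is a minimal sketch based on the nested shape of the v1.0 OCR endpoint's JSON response (regions containing lines containing words); as with AnalysisResult, the field names must mirror the payload exactly.

using System;

[Serializable]
public class OcrResults
{
    public string language;   // "unk" when the language could not be determined
    public Region[] regions;
}

[Serializable]
public class Region
{
    public Line[] lines;
}

[Serializable]
public class Line
{
    public Word[] words;
}

[Serializable]
public class Word
{
    public string text;
}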
Conclusion
In this post, you discovered how to recreate a cool visual effect from an iconic sci-fi movie. You also found out how to call Microsoft Cognitive Services from Unity to make the recreation richer and more responsive to the world around you.
You can extend the capabilities of the Terminator Vision app even further by taking the text you find through OCR and calling Cognitive Services to translate it into another language using the Translator API. You could then use the Bing Speech API to read the text back to you in both the original language and the translated language. This, however, goes beyond the original goal of recreating the Terminator Vision scenario from the 1984 James Cameron film and starts sliding into the world of personal assistants, which is another topic for another time.
View the source code for Terminator Vision on Github here.
0 notes
singlemamaco · 8 years ago
Photo
Tumblr media
HoloToolkit: How to Add Voice Commands to Your HoloLens App https://hololens.reality.news/how-to/holotoolkit-add-voice-commands-your-hololens-app-0175284/?utm_source=dlvr.it&utm_medium=tumblr #design
0 notes
hanaleistudios · 7 years ago
Text
Holiday Holograms: #2 Initial project setup tutorial
This post is all about the necessary tools and project setup that you have to do in order to get your project ready to develop HoloLens apps and games. Depending on how good I am at explaining stuff, this will either be super handy or really useless.
What you’ll need to get started
To follow along with this you’ll need a few things to get started! They are:
A controller compatible with Unity (I’m using an Xbox One Controller)
Unity 2017.1.0f3; you may be able to use older versions, but this is the one I'll be using
Basic understanding of how Unity works and how to navigate it
1. HoloToolKit
The first step to start developing HoloLens stuff in Unity is to get the HoloToolKit from GitHub; it contains all the necessary scripts and prefabs you'll need to get started. The GitHub release page also has a HoloToolKit Examples package you can download and use to work out how specific things work. Download both of them, as we'll use them later on.
Link to HoloToolKit GitHub: https://github.com/Microsoft/MixedRealityToolkit-Unity/releases/tag/v1.2017.1.1
2. Project setup
There are a few things you'll need to do to your Unity project in order to get started with HoloLens development:
Import both the HoloToolKit and HoloToolKitExample packages from the GitHub into your project
In the Build Settings, change the target platform to Universal Windows Platform, set your target device to HoloLens, set the build type to D3D (short for Direct3D) and tick Unity C# Project.
In the Player Settings, tick Virtual Reality Supported and make sure your Virtual Reality SDK is set to Windows Holographic
Make sure the following options are ticked in the Capabilities section:
MusicLibrary
PicturesLibrary
VideosLibrary
WebCam
Microphone
SpatialPerception
You will also need to open up a Holographic Emulation window, set the Emulation Mode to Simulate in Editor and select a room to emulate.
3. HoloToolKit Examples
The next step is to make sure all this stuff actually worked! We can do this by checking some of the examples from the HoloToolKit Examples package we downloaded earlier. Simply navigate to the HoloToolKit Examples folder, select Prototyping, then Scenes, and open the scene called "PositionAnObject". If you run this scene you should see a floating object; if you gaze at the object and press A (or the equivalent button on your controller), it should start moving around with you!
If you want to see the room you are supposed to be emulating, you'll need to drop in a prefab called "SpatialMapping" from HoloToolKit, found under the Prefabs folder inside the Spatial Mapping folder. It'll take a while to load in the geometry of the room when you run the scene with the SpatialMapping prefab. If I were you, I'd play around with all the different scenes and try to work out how they work; if you'd rather write your own interaction, see the sketch below.
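For those who want to go beyond the example prefabs, here is a minimal sketch of how HoloToolkit-era input events work. Attach it to any GameObject with a collider; the namespace and event signature shown match the v1.2017.1.1 release linked above, but may differ slightly in other HoloToolKit versions, so treat this as a starting point.

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class ColorOnClick : MonoBehaviour, IInputClickHandler
{
    // Called by the HoloToolkit InputManager when the user air-taps (or presses
    // the controller's select button) while gazing at this object.
    public void OnInputClicked(InputClickedEventData eventData)
    {
        // Tint the object a random color so you can see the tap registered.
        GetComponent<Renderer>().material.color = Random.ColorHSV();
    }
}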
4. That's all folks!
That should be enough to get you started! It's a wild world out there and I've still got a lot to learn, but those are the absolute basics of getting started developing for the HoloLens! If you found this post at all helpful, please share it around, as it took me longer than usual to write. Also, if I missed anything, please let me know so I can fix it.
0 notes
2vlv · 8 years ago
Link
0 notes