7 common interview questions for AR Developers

If you’re applying for a position that involves virtual reality development using Unity, you can expect to be asked questions about your Unity VR development experience. In this article, we’ll review some of the most common Unity VR interview questions and provide tips on how to answer them. By preparing for these questions, you’ll be able to confidently demonstrate your Unity VR development skills and knowledge during your job interview.


When we create an interface, we define a set of method signatures without any implementation that must be provided by the implementing classes. The advantage is that an interface lets a class participate in two type hierarchies at once: its base class from inheritance and the contract defined by the interface.
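A minimal C# sketch of this (the type names are illustrative, not from any specific project): the class below is part of an inheritance hierarchy *and* fulfills an interface contract.

```csharp
using System;

// An interface declares a contract with no implementation.
public interface IDamageable
{
    void TakeDamage(int amount);
}

// A base class in the inheritance hierarchy.
public class Entity
{
    public string Name = "entity";
}

// Enemy participates in both hierarchies: it *is an* Entity,
// and it *can be treated as* an IDamageable.
public class Enemy : Entity, IDamageable
{
    public int Health = 100;

    public void TakeDamage(int amount)
    {
        Health -= amount;
    }
}
```

Any code written against `IDamageable` now works with `Enemy` without knowing anything about `Entity`.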

# Streaming Assets: Like Resource Folders, a Streaming Assets directory can be created by intuitively creating a folder named StreamingAssetsthat, that you can use to put bundled un-managed assets. Unlike Resource folders, this directory remains intact and accessible in the Unity player. This creates a unique access point for users to add their own files to the game.

The difference between a coroutine and a thread is much like the difference between cooperative and preemptive multitasking. A coroutine runs on the main thread and must voluntarily yield control back to it; if control is not yielded (this is where the coroutine must be cooperative), the coroutine will hang the main thread, and thus your game.
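Unity coroutines are built on plain C# iterator methods, so the cooperative model can be sketched without any Unity APIs (names below are illustrative): the routine runs only until its next `yield`, then hands control back to whoever is driving it; in Unity, the engine resumes it on the main thread each frame.

```csharp
using System;
using System.Collections;

public static class CoroutineDemo
{
    // A "coroutine": runs until the next yield, then pauses.
    public static IEnumerator CountTo(int n)
    {
        for (int i = 1; i <= n; i++)
        {
            yield return i; // voluntarily yield control back to the caller
        }
    }

    // The "scheduler": each MoveNext call is one cooperative step.
    // Unity's engine plays this role, stepping coroutines once per frame.
    public static int RunAll(IEnumerator routine)
    {
        int steps = 0;
        while (routine.MoveNext()) steps++;
        return steps;
    }
}
```

If `CountTo` never yielded inside a long loop, `RunAll` (and in Unity, the main thread) would be stuck until it finished — which is exactly how a non-cooperative coroutine hangs a game.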

The main difference between reference types (classes) and value types (structs) we will consider is that reference types are allocated on the heap and garbage-collected, whereas value types are allocated either on the stack or inline in containing types, and are deallocated when the stack unwinds or when their containing type is deallocated.

When we create an abstract class, we are creating a base class that might have one or more completed methods, but at least one method is left uncompleted and declared abstract. If all the methods of an abstract class are uncompleted, it is effectively the same as an interface. The purpose of an abstract class is to provide a base definition of how a set of derived classes will work, and then let programmers fill in the implementation in the derived classes.
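A short sketch of the pattern (names are illustrative): the base class supplies a completed method, and the derived class fills in the abstract members.

```csharp
using System;

public abstract class Weapon
{
    // Completed method shared by all derived classes.
    public string Describe() => $"{Name} deals {Damage()} damage";

    // Uncompleted members each derived class must provide.
    public abstract string Name { get; }
    public abstract int Damage();
}

public class Sword : Weapon
{
    public override string Name => "Sword";
    public override int Damage() => 12;
}
```

Note that `Weapon` itself cannot be instantiated; only concrete derived classes like `Sword` can.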

Toptal sourced essential questions that the best Unity or Unity3D developers and engineers can answer. These are drawn from our community, and we encourage experts to submit questions and offer feedback.

  • Answer the following questions about threading. Explain your answers:

  • Can threads be used to modify a Texture on runtime?
  • Can threads be used to move a GameObject on the scene?
  • Consider the snippet below:
  • Improve this code using threads, so the 1,000,000 random number generations run without spoiling performance.

  • No. Textures and Meshes are examples of data stored in GPU memory, and Unity doesn't allow threads other than the main one to modify these kinds of data.
  • No. Fetching the Transform reference isn’t thread safe in Unity.
  • When using threads, we must avoid using native Unity structures like the Mathf and Random classes:
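The original snippet isn't reproduced here, but a sketch of the usual fix is to generate the numbers on a worker thread with `System.Random` — which, unlike `UnityEngine.Random`, is usable off the main thread as long as each thread uses its own instance (names below are illustrative):

```csharp
using System;
using System.Threading.Tasks;

public static class RandomOffThread
{
    // Generates `count` random floats on a background thread so the
    // main thread (and the frame rate) is not blocked by the work.
    public static Task<float[]> GenerateAsync(int count)
    {
        return Task.Run(() =>
        {
            var rng = new Random(); // System.Random, NOT UnityEngine.Random
            var values = new float[count];
            for (int i = 0; i < count; i++)
                values[i] = (float)rng.NextDouble();
            return values;
        });
    }
}
```

In Unity you would then poll or await the task from the main thread (for example, from a coroutine) and consume the finished array there.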
  • Explain what a vertex shader is, and what a pixel shader is.

    A vertex shader is a program that runs for each vertex of the mesh, allowing the developer to apply transformation matrices and other operations in order to control where the vertex ends up in 3D space and how it will be projected onto the screen.

    A pixel shader is a program that runs for each fragment (a pixel candidate to be rendered) after the three vertices of a mesh triangle are processed. The developer can use information such as the UV / texture coordinates, and sample textures, in order to control the final color that will be rendered on screen.

    Explain why deferred lighting optimizes scenes with a lot of lights and elements.

    During rendering, each pixel is evaluated to determine whether it is illuminated and how much lighting influence it receives, and this is repeated for each light. After roughly eight such passes for different lights in the scene, the overhead becomes significant.

    For large scenes, the number of pixels rendered is usually bigger than the number of pixels in the screen itself.

    Deferred lighting first renders all pixels without illumination (which is fast) while storing extra per-pixel information (at a low overhead), and then calculates the illumination step only for the pixels in the screen buffer (which is fewer than all pixels processed for every element). This technique allows many more light instances in the project.


    Explain why Time.deltaTime should be used to make things that depend on time operate correctly.

    Real-time applications, such as games, have a variable FPS. They sometimes run at 60 FPS or, when suffering slowdowns, at 40 FPS or less.

    If you want to change a value from A to B over 1.0 second, you can't simply add a fixed amount between two frames, because frames run fast or slow, so each frame can have a different duration.

    The way to correct this is to measure the time taken from frame X to X+1 and scale each frame's increment by that frame duration: each frame, advance the value by the total distance times Time.deltaTime, i.e. value += (B - A) * Time.deltaTime, where A is the starting value.

    When the accumulated deltaTime reaches 1.0 second, the value will have reached B.
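A plain-C# sketch of the idea (the frame durations below are made up): scaling each frame's step by its deltaTime, using the fixed start-to-target difference, guarantees the value arrives at B exactly when the accumulated deltaTime reaches 1.0 second, whatever the individual frame durations are.

```csharp
using System;

public static class DeltaTimeDemo
{
    // Moves a value from `start` toward `b` at a rate that covers the
    // whole distance in exactly one second of accumulated frame time,
    // regardless of how long each individual frame takes.
    public static float Simulate(float start, float b, float[] frameDurations)
    {
        float value = start;
        foreach (float deltaTime in frameDurations)
        {
            value += (b - start) * deltaTime; // per-frame step scaled by dt
        }
        return value;
    }
}
```

Run it with uneven frames summing to 1.0 second and the result is `b`; without the deltaTime scaling, the result would depend entirely on how many frames happened to run.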

    Explain why vectors should be normalized when used to move an object.

    Normalization makes the vector unit length. This means, for instance, that if you want to move with speed 20.0, multiplying speed * vector will result in a precise 20.0 units per step. If the vector had an arbitrary length, the step would be different from 20.0 units.
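A minimal 2D sketch of that arithmetic (plain C#, no Unity types): dividing by the vector's length yields a unit direction, so scaling by the speed gives a step of exactly that magnitude.

```csharp
using System;

public static class MoveDemo
{
    // Returns the per-step movement vector: the direction normalized
    // to unit length, then scaled by `speed`, so each step covers
    // exactly `speed` units regardless of the raw vector's length.
    public static (float x, float y) Step(float dx, float dy, float speed)
    {
        float length = (float)Math.Sqrt(dx * dx + dy * dy);
        float nx = dx / length, ny = dy / length; // unit-length direction
        return (nx * speed, ny * speed);
    }
}
```

For example, the raw vector (3, 4) has length 5; normalized and scaled by 20, the step becomes (12, 16), whose magnitude is exactly 20.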

    Consider the following code snippet:

    Finish this code so the GameObject containing this script moves with constant speed towards target, and stop moving once it reaches 1.0, or less, units of distance.

    Can two GameObjects, each with only a SphereCollider set as a trigger, both raise OnTrigger events? Explain your answer.

    No. Collision and trigger events between two objects are only raised when at least one of them has a Rigidbody attached. This is a common error when implementing applications that use physics.

    Which of the following examples will run faster?

  • 1000 GameObjects, each with a MonoBehaviour implementing the Update callback.
  • One GameObject with one MonoBehaviour with an Array of 1000 classes, each implementing a custom Update() callback.
  • The correct answer is 2.

    The Update callback is invoked through C# reflection, which is significantly slower than calling a function directly. In our example, 1000 GameObjects, each with a MonoBehaviour, means 1000 reflection-based calls per frame.

    Creating one MonoBehaviour with one Update, and using this single callback to Update a given number of elements, is a lot faster, due to the direct access to the method.
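A sketch of that manager pattern in plain C# (names are illustrative; in Unity, `TickAll` would be called from the single MonoBehaviour's `Update()`):

```csharp
using System;
using System.Collections.Generic;

// The custom, directly-called update contract.
public interface IUpdatable
{
    void Tick(float deltaTime);
}

// Example element driven by the manager (name illustrative).
public class Spinner : IUpdatable
{
    public int Ticks;
    public void Tick(float deltaTime) => Ticks++;
}

// One manager drives all elements with direct method calls,
// instead of 1000 separate MonoBehaviour Update callbacks.
public class UpdateManager
{
    private readonly List<IUpdatable> _items = new List<IUpdatable>();

    public void Register(IUpdatable item) => _items.Add(item);

    // In Unity, call this once per frame from a single MonoBehaviour.
    public void TickAll(float deltaTime)
    {
        for (int i = 0; i < _items.Count; i++)
            _items[i].Tick(deltaTime);
    }
}
```

The direct interface calls in the loop replace the per-object engine callbacks, which is where the performance win comes from.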

    Explain, in a few words, what roles the inspector, project and hierarchy panels in the Unity editor have. Which is responsible for referencing the content that will be included in the build process?

    The inspector panel allows users to modify numeric values (such as position, rotation, and scale), drag and drop references to scene objects (like Prefabs, Materials, and GameObjects), and more. It can also show a custom-made UI, created by the user, by using Editor scripts.

    The project panel shows the contents of the Assets folder in the project's root directory. It lists all the scripts, textures, materials, and shaders available for use in the project.

    The hierarchy panel shows the current scene structure, with its GameObjects and their children. It also helps users organize them by name and by order relative to each GameObject's siblings. Order-dependent features, such as UI, make use of this ordering.

    The panel responsible for referencing content in the build process is the hierarchy panel. The panel contains references to the objects that exist, or will exist, when the application is executed. When building the project, Unity searches for them in the project panel, and adds them to the bundle.

    Arrange the event functions listed below in the order in which they will be invoked when an application is closed:

    The correct execution order of these event functions when an application closes is as follows:

    Note: You might be tempted to disagree with the placement of OnApplicationQuit() in the above list, but it is correct, as can be verified by logging the order in which the calls occur when your application closes.

    Explain the issue with the code below and provide an alternative implementation that would correct the problem.

    The issue is that you can’t modify a single member of transform.position directly, because position is a property (not a field). Its getter invokes a method that returns a copy of the Vector3 on the stack.

    So what the code above actually does is assign a value to a member of that temporary copy, which is then discarded; the transform itself never changes.

    Instead, the proper solution is to replace the whole property; e.g.:
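Since the original snippet isn't reproduced here, the mechanics can be shown with stand-in types (the `Vector3` and `Transform` below are minimal mock-ups for illustration, not Unity's real classes): writing through a copy is lost, while replacing the whole property value persists. In real Unity code, mutating `transform.position.x` directly is rejected at compile time with error CS1612.

```csharp
using System;

// Minimal stand-ins for UnityEngine.Vector3 / Transform.
public struct Vector3
{
    public float x, y, z;
    public Vector3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
}

public class Transform
{
    private Vector3 _position;

    // The getter returns a *copy* of the struct; mutating a member
    // of that copy never touches the stored value.
    public Vector3 position
    {
        get { return _position; }
        set { _position = value; }
    }
}
```

The fix is to build a new Vector3 and assign the whole property, e.g. `t.position = new Vector3(5f, t.position.y, t.position.z);`.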

    What are the benefits of having a visualization mode for rendering optimization, as shown in the picture below?

    The Overdraw mode helps the user profile the number of pixels being rendered in the same area. Yellow-to-white areas are “hot” areas where too many pixels are being rendered on top of each other.

    Developers can use this information to adjust their materials and make better use of the Z-Test and optimize the rendering.

    There is more to interviewing than tricky technical questions, so these are intended merely as a guide. Not every “A” candidate worth hiring will be able to answer them all, nor does answering them all guarantee an “A” candidate. At the end of the day, hiring remains an art, a science — and a lot of work.



    What is Unity 3D?

    Unity 3D is a powerful cross-platform, fully integrated development engine that gives out-of-the-box functionality for creating games and other interactive 3D content.

  • It is a multi-platform game engine with features such as 3D objects, physics, animation, scripting, and lighting
  • Accompanying script editor:
  • MonoDevelop (Windows/Mac)
  • It can also use Visual Studio (Windows)
  • 3D terrain editor
  • 3D object animation manager
  • GUI system
  • Executable exporters for many platforms: Web Player, Android, native applications, Wii
  • In Unity 3D, you can assemble art and assets into scenes and environments, adding special effects, physics and animation, lighting, etc.

    Everything You Need to Know About Recruiting and Hiring an AR and VR Developer

    AR and VR Developers, or Augmented Reality and Virtual Reality Developers, are not just hired for gaming anymore. The multi-faceted technology is a ‘game changer’ indeed for a variety of industries. According to Wendy Gonzalez, CEO at Sama, the adoption of AR and VR technologies is no longer an option, but a necessity to close the gaps between physical and virtual spaces.

    Along the same lines, BCG Consulting says that the adoption of AR and VR is an agile, launch-and-learn process adopted in several phases. Therefore, many businesses link strategy, technology, and execution with AR and VR.

    FAQ

    Is Unity AR or VR?

    Both: Unity is used for augmented reality and virtual reality alike. Forged in gaming and used across industries to create innovative AR/VR content, Unity’s flexible real-time platform supports both kinds of experiences, including a dedicated authoring environment for intelligent AR.

    What are the characteristics of the Unity tool?

    Unity developed a new architecture that improves the support for existing and future augmented reality (AR) and virtual reality (VR) platforms. Learn about the technology under the hood, the consequent benefits, and improvements to the platform, and how it impacts your workflows in creating AR/VR experiences.
