APIs#
Introduction#
AutonomySim exposes application programming interfaces (APIs) that enable you to interact with the vehicle in the simulation programmatically. You can use these APIs to retrieve images, get state, control the vehicle, and so on.
Python Quickstart#
If you want to use Python to call AutonomySim APIs, we recommend Anaconda with Python 3.5 or later. Some code may also work with Python 2.7 (help us improve compatibility!).
First install this package:
You can either get AutonomySim binaries from releases or compile from source (Windows, Linux). Once you can run AutonomySim, choose Car as the vehicle, then navigate to the PythonClient\car folder and run:
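The command itself is missing here; based on the startup script named in the next sentence, it is presumably:

```shell
python hello_car.py
```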
If you are using Visual Studio 2019, just open AutonomySim.sln, set PythonClient as the startup project, and choose car\hello_car.py as your startup script.
Installing AutonomySim Package#
You can also install the AutonomySim package simply by running:
You can find the source code and samples for this package in the PythonClient folder of your repo.
Notes
1. You may notice a file setup_path.py in our example folders. This file contains simple code to detect whether the AutonomySim package is available in a parent folder; in that case, we use it instead of the pip-installed package, so you always run the latest code.
2. AutonomySim is still under heavy development, which means you might frequently need to update the package to use new APIs.
C++ Users#
If you want to use C++ APIs and examples, please see C++ APIs Guide.
Hello Car#
Here's how to use AutonomySim APIs in Python to control a simulated car (see also the C++ example):
```python
# ready to run example: PythonClient/car/hello_car.py
import AutonomySim
import time

# connect to the AutonomySim simulator
client = AutonomySim.CarClient()
client.confirmConnection()
client.enableApiControl(True)
car_controls = AutonomySim.CarControls()

while True:
    # get state of the car
    car_state = client.getCarState()
    print("Speed %d, Gear %d" % (car_state.speed, car_state.gear))

    # set the controls for the car
    car_controls.throttle = 1
    car_controls.steering = 1
    client.setCarControls(car_controls)

    # let the car drive a bit
    time.sleep(1)

    # get camera images from the car
    responses = client.simGetImages([
        AutonomySim.ImageRequest(0, AutonomySim.ImageType.DepthVis),
        AutonomySim.ImageRequest(1, AutonomySim.ImageType.DepthPlanar, True)])
    print('Retrieved images: %d' % len(responses))

    # do something with images
    for response in responses:
        if response.pixels_as_float:
            print("Type %d, size %d" % (response.image_type, len(response.image_data_float)))
            AutonomySim.write_pfm('py1.pfm', AutonomySim.get_pfm_array(response))
        else:
            print("Type %d, size %d" % (response.image_type, len(response.image_data_uint8)))
            AutonomySim.write_file('py1.png', response.image_data_uint8)
```
Hello Drone#
Here's how to use AutonomySim APIs in Python to control a simulated quadrotor (see also the C++ example):
```python
# ready to run example: PythonClient/multirotor/hello_drone.py
import AutonomySim
import os

# connect to the AutonomySim simulator
client = AutonomySim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Async methods return a Future. Call join() to wait for the task to complete.
client.takeoffAsync().join()
client.moveToPositionAsync(-10, 10, -10, 5).join()

# take images
responses = client.simGetImages([
    AutonomySim.ImageRequest("0", AutonomySim.ImageType.DepthVis),
    AutonomySim.ImageRequest("1", AutonomySim.ImageType.DepthPlanar, True)])
print('Retrieved images: %d' % len(responses))

# do something with the images
for response in responses:
    if response.pixels_as_float:
        print("Type %d, size %d" % (response.image_type, len(response.image_data_float)))
        AutonomySim.write_pfm(os.path.normpath('/temp/py1.pfm'), AutonomySim.get_pfm_array(response))
    else:
        print("Type %d, size %d" % (response.image_type, len(response.image_data_uint8)))
        AutonomySim.write_file(os.path.normpath('/temp/py1.png'), response.image_data_uint8)
```
Common APIs#
- `reset`: Resets the vehicle to its original starting state. Note that you must call `enableApiControl` and `armDisarm` again after the call to `reset`.
- `confirmConnection`: Checks the state of the connection every 1 second and reports it in the console so the user can see the connection progress.
- `enableApiControl`: For safety reasons, API control of an autonomous vehicle is disabled by default, and the human operator has full control (usually via RC or joystick in the simulator). The client must make this call to request control via the API. The human operator may have disallowed API control, in which case `enableApiControl` has no effect. This can be checked with `isApiControlEnabled`.
- `isApiControlEnabled`: Returns true if API control is established. If false (the default), API calls are ignored. After a successful call to `enableApiControl`, `isApiControlEnabled` should return true.
- `ping`: If the connection is established, this call returns true; otherwise it blocks until timeout.
- `simPrintLogMessage`: Prints the specified message in the simulator's window. If `message_param` is also supplied, it is printed next to the message; in that case, if this API is called again with the same message value but a different `message_param`, the previous line is overwritten with the new one (instead of a new line being added to the display). For example, `simPrintLogMessage("Iteration: ", to_string(i))` keeps updating the same line on the display as the API is called with different values of `i`. The valid values of the severity parameter are 0 to 3 inclusive, corresponding to different colors.
- `simGetObjectPose`, `simSetObjectPose`: Gets and sets the pose of the specified object in the Unreal environment. Here "object" means "actor" in Unreal terminology. Objects are searched by tag as well as by name. Please note that the names shown in the UE Editor are auto-generated on each run and are not permanent. So if you want to refer to an actor by name, you must change its auto-generated name in the UE Editor. Alternatively, you can add a tag to an actor: click on the actor in the Unreal Editor, go to the Tags property, click the "+" sign, and add a string value. If multiple actors have the same tag, the first match is returned. If no matches are found, a NaN pose is returned. The returned pose is in NED coordinates, SI units, in the world frame. For `simSetObjectPose`, the specified actor must have Mobility set to Movable, or you will get undefined behavior. `simSetObjectPose` has a `teleport` parameter; when set, the object is moved through any other objects in its way. It returns true if the move was successful.
Image/Computer Vision APIs#
AutonomySim offers comprehensive image APIs to retrieve synchronized images from multiple cameras, along with ground truth including depth, disparity, surface normals, and vision. You can set the resolution, FOV, motion blur, and other parameters in settings.json. There is also an API for detecting collision state. See also the complete code that generates a specified number of stereo images and ground-truth depth with normalization to camera plane, computes the disparity image, and saves it to PFM format.
See more on image APIs and Computer Vision mode. For vision problems that can benefit from domain randomization, there is also an object texture-swapping API, which can be used in supported scenes.
Pause and Continue APIs#
AutonomySim allows you to pause and continue the simulation through the pause(is_paused) API. To pause the simulation, call pause(True); to continue, call pause(False). You may have a scenario, especially while using reinforcement learning, where you want to run the simulation for a specified amount of time and then automatically pause. While the simulation is paused, you can do some expensive computation, send a new command, and then run the simulation again for a specified amount of time. This can be achieved with the continueForTime(seconds) API, which runs the simulation for the specified number of seconds and then pauses it. For example usage, please see pause_continue_car.py and pause_continue_drone.py.
Collision API#
Collision information can be obtained using the simGetCollisionInfo API. This call returns a struct that contains not only whether a collision occurred, but also the collision position, surface normal, penetration depth, and so on.
Time-of-day API#
AutonomySim assumes there exists a sky sphere of class EngineSky/BP_Sky_Sphere in your environment with an ADirectionalLight actor. By default, the position of the sun in the scene doesn't move with time. You can use settings to set up the latitude, longitude, date, and time that AutonomySim uses to compute the position of the sun in the scene.
You can also use the following API call to set the sun position according to a given date and time:
```python
simSetTimeOfDay(self, is_enabled, start_datetime = "", is_start_datetime_dst = False, celestial_clock_speed = 1, update_interval_secs = 60, move_sun = True)
```
The is_enabled parameter must be True to enable the time-of-day effect. If it is False, the sun position is reset to its original position in the environment. The other parameters are the same as in settings.
Line-of-sight and world extent APIs#
To test line-of-sight in the sim from a vehicle to a point or between two points, see simTestLineOfSightToPoint(point, vehicle_name) and simTestLineOfSightBetweenPoints(point1, point2), respectively. Sim world extent, in the form of a vector of two GeoPoints, can be retrieved using simGetWorldExtents().
Weather APIs#
By default, all weather effects are disabled. To enable weather effects, first call:
Various weather effects can then be enabled using the simSetWeatherParameter method, which takes a WeatherParameter, for example:
```python
class WeatherParameter:
    Rain = 0
    Roadwetness = 1
    Snow = 2
    RoadSnow = 3
    MapleLeaf = 4
    RoadLeaf = 5
    Dust = 6
    Fog = 7
```
Please note that the Roadwetness, RoadSnow, and RoadLeaf effects require adding materials to your scene.
Please see example code for more details.
Recording APIs#
Recording APIs can be used to start recording data through APIs. The data to be recorded can be specified using settings. To start recording, use:
Similarly, to stop recording, use client.stopRecording(). To check whether recording is running, call client.isRecording(), which returns a bool.
This API works along with toggling recording using the R key: if recording is enabled using the R key, isRecording() will return True, and recording can be stopped via the API using stopRecording(). Similarly, recording started using the API will be stopped if the R key is pressed in the viewport. A LogMessage will also appear in the top-left of the viewport when recording is started or stopped using the API.
Note that this will only save the data as specified in the settings. For full freedom in storing data, such as certain sensor information, or in a different format or layout, use the other APIs to fetch the data and save it as desired. Check out Modifying Recording Data for details on how to modify the kinematics data being recorded.
Wind API#
Wind can be changed during simulation using simSetWind(). Wind is specified in the world frame, in NED direction, in m/s.
For example, to set a 20 m/s wind in the north (forward) direction:
```python
# Set wind to (20, 0, 0) in NED (forward direction)
wind = AutonomySim.Vector3r(20, 0, 0)
client.simSetWind(wind)
```
Also see the example script set_wind.py.
Lidar APIs#
AutonomySim offers an API to retrieve point cloud data from lidar sensors on vehicles. You can set the number of channels, points per second, horizontal and vertical FOV, and other parameters in settings.json.
See more on lidar APIs and settings, and sensor settings.
Light Control APIs#
Lights that can be manipulated inside AutonomySim can be created via the simSpawnObject() API by passing either PointLightBP or SpotLightBP as the asset_name parameter and True as the is_blueprint parameter. Once a light has been spawned, it can be manipulated using the following API:

- `simSetLightIntensity`: This allows you to edit a light's intensity or brightness. It takes two parameters: `light_name`, the name of the light object returned by a previous call to `simSpawnObject()`, and `intensity`, a float value.
Texture APIs#
Textures can be dynamically set on objects via these APIs:

- `simSetObjectMaterial`: Sets an object's material using an existing Unreal material asset. It takes two string parameters: `object_name` and `material_name`.
- `simSetObjectMaterialFromTexture`: Sets an object's material using a path to a texture. It takes two string parameters: `object_name` and `texture_path`.
Multiple Vehicles#
AutonomySim supports multiple vehicles and allows controlling them through APIs. Please see the Multiple Vehicles doc.
Coordinate System#
All AutonomySim APIs use the NED coordinate system, i.e., +X is North, +Y is East, and +Z is Down. All units are in the SI system. Please note that this is different from the coordinate system used internally by Unreal Engine, where +Z is up instead of down and lengths are in centimeters instead of meters. The AutonomySim APIs take care of the appropriate conversions. The starting point of the vehicle is always (0, 0, 0) in the NED system; thus, when converting from Unreal coordinates to NED, we first subtract the starting offset and then scale from centimeters to meters. The vehicle is spawned in the Unreal environment where the Player Start component is placed. There is a setting called OriginGeopoint in settings.json which assigns geographic latitude, longitude, and altitude to the Player Start component.
Vehicle Specific APIs#
APIs for Car#
The car has the following APIs available:

- `setCarControls`: This allows you to set throttle, steering, handbrake, and auto or manual gear.
- `getCarState`: This retrieves state information including speed, current gear, and six kinematic quantities: position, orientation, linear and angular velocity, linear and angular acceleration. All quantities are in the NED coordinate system, SI units, in the world frame, except for angular velocity and accelerations, which are in the body frame.
- Image APIs.
APIs for Multirotor#
A multirotor can be controlled by specifying angles, velocity vector, destination position, or some combination of these. There are corresponding move* APIs for this purpose. When doing position control, we need to use some path-following algorithm. By default, AutonomySim uses a carrot-following algorithm. This is often referred to as "high-level control" because you just specify a high-level goal and the firmware takes care of the rest. Currently the lowest-level control available in AutonomySim is the moveByAngleThrottleAsync API.
getMultirotorState#
This API returns the state of the vehicle in one call. The state includes collision, estimated kinematics (i.e., kinematics computed by fusing sensors), and timestamp (nanoseconds since epoch). Kinematics here means six quantities: position, orientation, linear and angular velocity, linear and angular acceleration. Please note that simple_flight currently doesn't support a state estimator, which means the estimated and ground-truth kinematics values are the same for simple_flight. Estimated kinematics are, however, available for PX4, except for angular acceleration. All quantities are in the NED coordinate system, SI units, in the world frame, except for angular velocity and accelerations, which are in the body frame.
Async methods, duration and max_wait_seconds#
Many API methods have parameters named duration or max_wait_seconds and have Async as a suffix, for example, takeoffAsync. These methods return immediately after starting the task in AutonomySim so that your client code can do something else while the task is being executed. If you want to wait for the task to complete, you can call waitOnLastTask like this:
If you start another command, it automatically cancels the previous task and starts the new one. This allows a pattern where your code continuously does the sensing, computes a new trajectory to follow, and issues that path to the vehicle in AutonomySim. Each newly issued trajectory cancels the previous one, allowing your code to continuously update as new sensor data arrives.
All Async methods return concurrent.futures.Future in Python (std::future in C++). Please note that these future classes currently do not allow checking status or canceling the task; they only allow waiting for the task to complete. AutonomySim does, however, provide the cancelLastTask API.
drivetrain#
There are two modes in which you can fly the vehicle: the drivetrain parameter is set to either AutonomySim.DrivetrainType.ForwardOnly or AutonomySim.DrivetrainType.MaxDegreeOfFreedom. When you specify ForwardOnly, you are saying that the vehicle's front should always point in the direction of travel. So if you want the drone to turn left, it first rotates so the front points left. This mode is useful when you have only a front camera and are operating the vehicle using an FPV view. It is more or less like traveling in a car, where you always have the front view. MaxDegreeOfFreedom means you don't care where the front points: when you turn left, you just start going left, like a crab. Quadrotors can go in any direction regardless of where the front points, and MaxDegreeOfFreedom enables this.
yaw_mode#
yaw_mode is a struct YawMode with two fields, yaw_or_rate and is_rate. If the is_rate field is True, the yaw_or_rate field is interpreted as angular velocity in degrees/sec, meaning you want the vehicle to rotate continuously around its axis at that angular velocity while moving. If is_rate is False, yaw_or_rate is interpreted as an angle in degrees, meaning you want the vehicle to rotate to a specific angle (i.e., yaw) and keep that angle while moving.
You can probably see that when yaw_mode.is_rate == true, the drivetrain parameter shouldn't be set to ForwardOnly, because you would be contradicting yourself: keep the front pointing ahead, but also rotate continuously. However, with yaw_mode.is_rate = false in ForwardOnly mode, you can do some funky stuff. For example, you can have the drone fly circles with yaw_or_rate set to 90 so the camera always points to the center ("super cool selfie mode"). In MaxDegreeOfFreedom you can also get some funky stuff by setting yaw_mode.is_rate = true and, say, yaw_mode.yaw_or_rate = 20. This will cause the drone to rotate while following its path, which may allow 360-degree scanning.
In most cases, you just don't want the yaw to change, which you can do by setting a yaw rate of 0. The shorthand for this is AutonomySim.YawMode.Zero() (or, in C++, YawMode::Zero()).
lookahead and adaptive_lookahead#
When you ask the vehicle to follow a path, AutonomySim uses a "carrot following" algorithm. This algorithm operates by looking ahead on the path and adjusting its velocity vector. The parameters for this algorithm are specified by lookahead and adaptive_lookahead. Most of the time, you want the algorithm to auto-decide the values, which you can do by simply setting lookahead = -1 and adaptive_lookahead = 0.
Using APIs on Real Vehicles#
We want to be able to run the same code in simulation and on a real vehicle. This allows you to test your code in the simulator and deploy it to a real vehicle.
Generally speaking, the APIs therefore shouldn't allow you to do something that cannot be done on a real vehicle (for example, getting ground truth). But, of course, the simulator has much more information, which can be useful in applications that don't need to run on a real vehicle. For this reason, we clearly delineate simulation-only APIs by attaching a sim prefix, for example, simGetGroundTruthKinematics. This way you can avoid using simulation-only APIs if you care about running your code on real vehicles.
AutonomyLib is a self-contained library that you can put on an offboard computing module such as the Gigabyte barebone Mini PC. This module can then talk to flight controllers such as PX4 using the exact same code and flight controller protocol. The code you write for testing in the simulator remains unchanged. See AutonomyLib on custom drones.
Adding New APIs to AutonomySim#
See the Adding New APIs page.
References and Examples#
- C++ API Examples
- Car Examples
- Multirotor Examples
- Computer Vision Examples
- Move on Path demo showing video of fast multirotor flight through Modular Neighborhood environment
- Building a Hexacopter
- Building Point Clouds
FAQ#
Unreal is slowed down dramatically when I run API#
If you see Unreal slowing down dramatically when the Unreal Engine window loses focus, go to 'Edit->Editor Preferences' in the Unreal Editor, type 'CPU' in the 'Search' box, and ensure that 'Use Less CPU when in Background' is unchecked.
Do I need anything else on Windows?#
You should install VS2019 with VC++, Windows SDK 10.0 and Python. To use Python APIs you will need Python 3.5 or later (install it using Anaconda).
Which version of Python should I use?#
We recommend Anaconda to get Python tools and libraries. Our code is tested with Python 3.5.3 :: Anaconda 4.4.0. This is important because older versions have been known to have problems.
I get an error on import cv2#
You can install OpenCV using:
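The command itself is missing here; upstream AirSim documentation suggests installing the OpenCV wheel, which is assumed to apply here:

```shell
pip install opencv-python
```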
TypeError: unsupported operand type(s) for *: 'AsyncIOLoop' and 'float'#
This error happens if you install Jupyter, which somehow breaks the msgpackrpc library. Create a new Python environment with the minimal required packages.