Channel: ROS Answers: Open Source Q&A Forum - RSS feed

Changing tf_prefix for a gazebo plugin?

Hello all, I'm working on a Gazebo simulation for multiple Husky robots and I'm running into some problems with my tf tree. To set up the simulation, I'm launching each robot in a unique namespace with a matching tf_prefix. This works great for separating out the ROS topics and most of the transforms; however, I'm having issues with the plugins for my odometry and IMU. The problem seems to be that the frame_id of the plugins called from the .xacro file does not inherit the tf_prefix of the launch file that called them. Here's an example: for the default simulation with one robot, the topic **imu/data** is broadcast with **frame_id: base_link**. When wrapped in the namespace and tf_prefix, I can edit the parameters to make the topic **husky1/imu/data**, but I can't seem to change the frame id to **frame_id: husky1/base_link**. Does anyone have an idea how this could be done? Thanks all!
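Not from the original post, but one workaround some people use while a plugin ignores tf_prefix is a small relay node that rewrites the frame_id before anything else consumes the topic. A minimal sketch, assuming the namespaced topic from the question and a hypothetical output topic name:

    #!/usr/bin/env python
    # Sketch: republish IMU data with a prefixed frame_id (workaround, not a fix
    # for the plugin itself). Topic and prefix names are assumptions.
    import rospy
    from sensor_msgs.msg import Imu

    def relay(msg):
        msg.header.frame_id = 'husky1/base_link'  # prefix taken from the question
        pub.publish(msg)

    rospy.init_node('imu_frame_relay')
    pub = rospy.Publisher('husky1/imu/data_prefixed', Imu, queue_size=10)
    rospy.Subscriber('husky1/imu/data', Imu, relay)
    rospy.spin()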

Any suggestions for combining scans of different frequencies?

I'm combining two scans, one extracted from a depth camera and the other from a 2D LIDAR, but I'm getting some jitter from the depth camera's contribution. I suppose it's to be expected when combining a 30 Hz depth scan with a 40 Hz LIDAR, but I was wondering if there is anything that could be done better. Below is a qualitative example of the jitter artifact in RViz (red = LIDAR, green = depth camera, yellow = combined). Note how the combined scan lags with the camera, while the LIDAR remains quite static. ![image description](https://cloud.githubusercontent.com/assets/2293573/13169970/f20197fa-d6b6-11e5-9bb6-e128d5a7444d.gif) The example you see here is a Turtlebot simulation from Gazebo, with an Asus pointing forward and a gpu_lidar pointing backwards, using this small PR of ira_laser_tools: [#7](https://github.com/iralabdisco/ira_laser_tools/pull/7). I opened issue [#8](https://github.com/iralabdisco/ira_laser_tools/issues/8) about this, but thought it could be general enough for a good old approach question here.
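Not part of the original question, but one thing to experiment with is pairing the two scans by timestamp before merging, so the combined output is always built from the closest-in-time pair rather than from whichever message arrived last. A rough sketch with message_filters; the topic names and the slop value are assumptions:

    #!/usr/bin/env python
    # Sketch: pair the depth-camera scan and the LIDAR scan by approximate timestamp.
    import rospy
    import message_filters
    from sensor_msgs.msg import LaserScan

    def paired(depth_scan, lidar_scan):
        # The two scans here were taken within 'slop' seconds of each other;
        # hand them to the merging logic instead of merging latest-vs-latest.
        dt = abs((depth_scan.header.stamp - lidar_scan.header.stamp).to_sec())
        rospy.logdebug('paired scans, dt = %f s', dt)

    rospy.init_node('scan_pairing_sketch')
    depth_sub = message_filters.Subscriber('camera/scan', LaserScan)
    lidar_sub = message_filters.Subscriber('lidar/scan', LaserScan)
    sync = message_filters.ApproximateTimeSynchronizer([depth_sub, lidar_sub],
                                                       queue_size=10, slop=0.05)
    sync.registerCallback(paired)
    rospy.spin()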

Footprint is inflated randomly on Gazebo

Hi everyone, my robot's footprint is being inflated on some sides. In the screenshot below, it is inflated on its left and back. At the beginning, `base_footprint` is normal in Gazebo. Does anyone know why this is happening? Thank you. ![Rviz screenshot of three times](/upfiles/14562172923828583.png)

How to lock rotation of the object?

I'm developing a quadrotor drone in Gazebo and now I want to limit the rotation of my virtual drone, i.e. constrain its movement so that it can only move along a line or in a plane. In other words, I want to restrict the object to 4 or 5 DoF by locking pitch, roll, or yaw. Is this possible in URDF, or maybe in some other way?
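Not an answer from the original thread, but as an illustration of one brute-force workaround: a node can repeatedly read the model pose and write it back with the unwanted rotation components zeroed through Gazebo's model-state services. A rough sketch, assuming the model is named `quadrotor`:

    #!/usr/bin/env python
    # Sketch: keep zeroing a Gazebo model's roll/pitch, preserving x/y/z and yaw.
    import rospy
    from gazebo_msgs.srv import GetModelState, SetModelState
    from gazebo_msgs.msg import ModelState
    from tf.transformations import euler_from_quaternion, quaternion_from_euler

    rospy.init_node('lock_roll_pitch_sketch')
    rospy.wait_for_service('/gazebo/get_model_state')
    get_state = rospy.ServiceProxy('/gazebo/get_model_state', GetModelState)
    set_state = rospy.ServiceProxy('/gazebo/set_model_state', SetModelState)

    rate = rospy.Rate(100)
    while not rospy.is_shutdown():
        state = get_state(model_name='quadrotor')   # hypothetical model name
        q = state.pose.orientation
        _, _, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
        q.x, q.y, q.z, q.w = quaternion_from_euler(0.0, 0.0, yaw)  # drop roll/pitch
        state.twist.angular.x = 0.0                 # also cancel roll/pitch rates
        state.twist.angular.y = 0.0
        set_state(ModelState(model_name='quadrotor', pose=state.pose,
                             twist=state.twist, reference_frame='world'))
        rate.sleep()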

RTAB-Map with Turtlebot Navigation for a Known Map

Hi, I am using rtabmap_ros on a Turtlebot in Gazebo. I have successfully created the map with RTAB-Map. Now I would like to do navigation with a known map. You can see my launch file [here](https://github.com/MahsaP/turtlebot_rtabmap_simulation/blob/master/launch/turtlebot_rtabmap_navigation.launch). My problem is that my map is only loaded for a second, and after that it disappears. I don't know what changes are necessary in my launch file to fix this problem. To run my files you can use the following commands:

1) Launch the Turtlebot in Gazebo:

    ROBOT_INITIAL_POSE="-x -2 -y -1" roslaunch turtlebot_gazebo turtlebot_world.launch

2) Load the saved map, run rtabmap_ros, run navigation, and visualize the robot in Rviz:

    roslaunch turtlebot_rtabmap_simulation turtlebot_rtabmap_navigation.launch

I am running ROS Indigo on Ubuntu 14.04. Thank you.

ros_control fails (couldn't find the expected controller_manager)

Hi! I am trying to run a custom **humanoid robot (foo) in Gazebo**. The URDF works correctly (in Rviz and Gazebo the robot spawns without problems). However, I am not able to set up the control in Gazebo correctly: **when the robot spawns it falls to the ground**. The only warning I get is the following:

    [WARN] Controller Spawner couldn't find the expected controller_manager ROS interface.

So I suppose the problem is that the **control is not loaded**. I have searched for similar problems, and I have made sure that the plugin is correctly configured. The output of **rosservice list | grep controller_manager** is:

    /foo/controller_manager/list_controllers
    /foo/controller_manager/load_controller
    /foo/controller_manager/reload_controller_libraries
    /foo/controller_manager/switch_controller
    /foo/controller_manager/unload_controller

I have also checked the **namespaces**, but I cannot see anything wrong. I have the following code for the spawn of the robot:

The following for the control:

I think all the stuff is loaded in the **foo namespace**. If any of you have a clue about why this could be happening, it would be very welcome. Thank you in advance!

Problem with moveit_commander interface; trying to plan trajectories for ur5 simulation with Gazebo

Hello, I apologise in advance, for I am very new to programming with ROS. I have been trying to connect to a simulated UR5 through Gazebo and control it in Python using the moveit_commander interface. After attempting to write a simple script for controlling the UR5 Gazebo simulation with moveit_commander, I'm receiving the following output, first in the script's terminal window and second in the terminal running the moveit_planning_execution node.

Script terminal:

    [ WARN] [1456503108.574669066, 548.823000000]: Joint values for monitored state are requested but the full state is not known
    joint_state:
      header:
        seq: 0
        stamp:
          secs: 0
          nsecs: 0
        frame_id: /world
      name: ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint', 'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint']
      position: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
      velocity: []
      effort: []
    multi_dof_joint_state:
      header:
        seq: 0
        stamp:
          secs: 0
          nsecs: 0
        frame_id: /world
      joint_names: []
      transforms: []
      twist: []
      wrench: []
    attached_collision_objects: []
    is_diff: False
    ============
    ============ Generating plan 1
    [ WARN] [1456503113.598260823, 551.508000000]: Fail: ABORTED: No motion plan found. No execution attempted.
    ============ Waiting while RVIZ displays plan1...
    ============ Visualizing plan1
    ============ Waiting while plan1 is visualized (again)...
    terminate called after throwing an instance of 'class_loader::LibraryUnloadException'
      what():  Attempt to unload library that class_loader is unaware of.
    Aborted (core dumped)

moveit_planning_execution terminal:

    * All is well! Everyone is happy! You can start planning now!
    [ INFO] [1456502653.160737839, 126.273000000]: Planning request received for MoveGroup action. Forwarding to planning pipeline.
    [ INFO] [1456502653.162058984, 126.273000000]: No planner specified. Using default.
    [ INFO] [1456502653.162381392, 126.273000000]: RRTConnect: Starting planning with 1 states already in datastructure
    [ERROR] [1456502658.169349483, 130.953000000]: RRTConnect: Unable to sample any valid states for goal tree
    [ INFO] [1456502658.170807752, 130.957000000]: RRTConnect: Created 1 states (1 start + 0 goal)
    [ INFO] [1456502658.171010925, 130.957000000]: No solution found after 5.008731 seconds
    [ INFO] [1456502658.209078107, 130.983000000]: Unable to solve the planning problem
    [ INFO] [1456502658.216058459, 130.990000000]: Received new trajectory execution service request...
    [ WARN] [1456502658.216106023, 130.991000000]: The trajectory to execute is empty
    [ INFO] [1456503008.235511782, 473.390000000]: Planning request received for MoveGroup action. Forwarding to planning pipeline.
    [ INFO] [1456503008.235972587, 473.390000000]: No planner specified. Using default.
    [ INFO] [1456503008.236136625, 473.390000000]: RRTConnect: Starting planning with 1 states already in datastructure
    [ERROR] [1456503013.247658046, 477.166000000]: RRTConnect: Unable to sample any valid states for goal tree
    [ INFO] [1456503013.249219738, 477.172000000]: RRTConnect: Created 1 states (1 start + 0 goal)
    [ INFO] [1456503013.249677542, 477.172000000]: No solution found after 5.013654 seconds
    [ INFO] [1456503013.253513681, 477.174000000]: Unable to solve the planning problem
    [ INFO] [1456503013.260155299, 477.178000000]: Received new trajectory execution service request...
    [ WARN] [1456503013.260188972, 477.179000000]: The trajectory to execute is empty
    [ INFO] [1456503108.575331207, 548.823000000]: Planning request received for MoveGroup action. Forwarding to planning pipeline.
    [ INFO] [1456503108.575958594, 548.823000000]: RRTConnect: Starting planning with 1 states already in datastructure
    [ERROR] [1456503113.584912230, 551.502000000]: RRTConnect: Unable to sample any valid states for goal tree
    [ INFO] [1456503113.587574652, 551.504000000]: RRTConnect: Created 1 states (1 start + 0 goal)
    [ INFO] [1456503113.591495319, 551.505000000]: No solution found after 5.015622 seconds
    [ INFO] [1456503113.596049476, 551.507000000]: Unable to solve the planning problem
    [ INFO] [1456503113.599612790, 551.508000000]: Received new trajectory execution service request...
    [ WARN] [1456503113.599910654, 551.508000000]: The trajectory to execute is empty
    ^C[move_group-1] killing on exit
    shutting down processing monitor...
    ... shutting down processing monitor complete
    done

The important parts of both of these consoles, I believe, are:

    ============ Generating plan 1
    [ WARN] [1456503113.598260823, 551.508000000]: Fail: ABORTED: No motion plan found. No execution attempted.

and

    [ERROR] [1456503113.584912230, 551.502000000]: RRTConnect: Unable to sample any valid states for goal tree

which suggest that something is not quite working with either setting the target trajectory/pose or generating the plan in my code.

Here is my code, which has basically been adjusted (perhaps not suitably) from the [moveit_commander tutorial with the pr2](https://github.com/ros-planning/moveit_pr2/blob/hydro-devel/pr2_moveit_tutorials/planning/scripts/move_group_python_interface_tutorial.py) provided by MoveIt:

    #!/usr/bin/env python
    import sys
    import rospy
    import moveit_commander
    import moveit_msgs.msg
    import geometry_msgs.msg
    from std_msgs.msg import String

    def test():
        ## Initialize moveit_commander and rospy
        print "============ Starting test setup"
        moveit_commander.roscpp_initialize(sys.argv)
        rospy.init_node('test2', anonymous=True)

        ## Instantiate a RobotCommander object. This object is an interface to
        ## the robot as a whole.
        robot = moveit_commander.RobotCommander()

        ## Instantiate a PlanningSceneInterface object. This object is an interface
        ## to the world surrounding the robot.
        scene = moveit_commander.PlanningSceneInterface()

        ## Instantiate a MoveGroupCommander object. This object is an interface
        ## to one group of joints; it can be used to plan and execute motions
        ## on the manipulator.
        group = moveit_commander.MoveGroupCommander("manipulator")

        ## We create this DisplayTrajectory publisher which is used below to publish
        ## trajectories for RVIZ to visualize.
        display_trajectory_publisher = rospy.Publisher(
            '/move_group/display_planned_path',
            moveit_msgs.msg.DisplayTrajectory, queue_size=10)

        ## Wait for RVIZ to initialize. This sleep is ONLY to allow Rviz to come up.
        print "============ Waiting for RVIZ..."
        rospy.sleep(10)
        print "============ Starting test"

        ## Getting Basic Information
        ## ^^^^^^^^^^^^^^^^^^^^^^^^^
        ## We can get the name of the reference frame for this robot
        print "============ Reference frame: %s" % group.get_planning_frame()

        ## We can also print the name of the end-effector link for this group
        print "============ End effector link: %s" % group.get_end_effector_link()

        ## We can get a list of all the groups in the robot
        print "============ Robot Groups:"
        print robot.get_group_names()

        ## Sometimes for debugging it is useful to print the entire state of the robot.
        print "============ Printing robot state"
        print robot.get_current_state()
        print "============"

        ## Planning to a Pose goal
        ## ^^^^^^^^^^^^^^^^^^^^^^^
        ## We can plan a motion for this group to a desired pose for the end-effector
        print "============ Generating plan 1"
        pose_target = geometry_msgs.msg.Pose()
        pose_target.orientation.w = 1.0
        pose_target.position.x = 0.7
        pose_target.position.y = -0.05
        pose_target.position.z = 1.1
        group.set_pose_target(pose_target)

        ## Now, we call the planner to compute the plan and visualize it if successful.
        ## Note that we are just planning, not asking move_group to actually move the robot.
        plan1 = group.plan()

        ## Execute the plan so Gazebo receives it
        group.execute(plan1)

        print "============ Waiting while RVIZ displays plan1..."
        rospy.sleep(5)

        ## You can ask RVIZ to visualize a plan (aka trajectory) for you. But the
        ## group.plan() method does this automatically, so this is not that useful
        ## here (it just displays the same trajectory again).
        print "============ Visualizing plan1"
        display_trajectory = moveit_msgs.msg.DisplayTrajectory()
        display_trajectory.trajectory_start = robot.get_current_state()
        display_trajectory.trajectory.append(plan1)
        display_trajectory_publisher.publish(display_trajectory)

        print "============ Waiting while plan1 is visualized (again)..."
        rospy.sleep(5)

    if __name__ == '__main__':
        try:
            test()
        except rospy.ROSInterruptException:
            pass

So, I think the problem is coming from either the set_pose_target() or plan() functions? Do I need to execute the generated plan using execute() for Gazebo to receive the target pose? What do you think I might be doing wrong? I could have missed loads of stuff, in which case sorry!

Just a little more background: initially, I'm setting up the UR5 Gazebo, move_group, and Rviz nodes using the following commands:

    roslaunch ur_gazebo ur5.launch
    roslaunch ur5_moveit_config ur5_moveit_planning_execution.launch sim:=true
    roslaunch ur5_moveit_config moveit_rviz.launch config:=true

as described in the ur GitHub readme. I am using ROS Indigo desktop-full on an Ubuntu 14.04 VM, with the UR5 packages downloaded from [here](https://github.com/ros-industrial/universal_robot). Thanks so much in advance!

Gazebo projectors only work on links connected with revolute joints?

I have a work-around for some very unexpected behavior when using projectors in Gazebo; I'm posting some findings here so searches can find it before I (probably) create a bug report. So far, what works mostly reliably is to place a projector on a link that is connected via a revolute joint to another link. Aiming the projector is accomplished by adjusting the joint xyz/rpy, not the projector pose. The link needs to have inertial properties. This is the expected output for a projector pointing at the horizon: ![image description](https://raw.github.com/lucasw/testbot/master/testbot_description/pictures/projector_rev_joint_working.png) If the joint is fixed, it can cause the projected texture to disappear or work in a glitchy way. If the projector pose is adjusted, it either has no effect at all or produces glitchy output like that seen below: ![image description](https://raw.github.com/lucasw/testbot/master/testbot_description/pictures/projector_pose_not_working.png) This is another case where sections of the image are projected backwards (the small version of the texture is a projector hooked up to the 'head' link of the robot; since it is attached to base_link by a fixed joint, I can't aim it at all with the projector pose): ![image description](https://raw.github.com/lucasw/testbot/master/testbot_description/pictures/projector_revolute_vs_fixed.png) This is part of an investigation that also led to http://answers.gazebosim.org/question/5223/setting-projector-pose-vs-enclosing-link-pose/ and http://answers.gazebosim.org/question/5214/using-projectors-in-gazebo-19-ros-hydro/ (the latter is maybe partially solved, though I need to document it). I thought I'd try the ROS site for this one since those two haven't generated any responses. There is an example robot in https://github.com/lucasw/testbot that demonstrates most of the behavior; `roslaunch testbot_gazebo testbot_gazebo.launch` should bring it up.

Setting an Odometry Offset for robot_localization

Hello all, I'm running a husky simulator with multiple robots, so each robot is starting in a different place. However, when I start each robot, the initial position is not reflected under the **odometry/filtered** topic. In other words, the odometry for all robots starts at (0,0,0) in a 3D coordinate system. I'd like to change this. So for a robot starting at (2,2,0), how can I get the odometry reading to match this? I can use the **set_pose** topic to give an initial value, but that takes an extra step. And to add complexity, I'm now using the navsat_transform_node. With this in use, the **set_pose** topic works for an instant, but then odometry returns back to the previous value. I'm happy to give any more information which might be helpful. Thanks all!
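Purely as a reference for other readers (the question already notes this only helps momentarily once navsat_transform_node is in the mix), the one-shot **set_pose** publish looks roughly like this, assuming the node runs inside the robot's namespace and the robot starts at (2, 2, 0):

    #!/usr/bin/env python
    # Sketch: seed robot_localization's filter with a known starting pose.
    import rospy
    from geometry_msgs.msg import PoseWithCovarianceStamped

    rospy.init_node('seed_odometry_sketch')
    pub = rospy.Publisher('set_pose', PoseWithCovarianceStamped,
                          queue_size=1, latch=True)
    msg = PoseWithCovarianceStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = 'odom'      # assumed world frame of the filter
    msg.pose.pose.position.x = 2.0
    msg.pose.pose.position.y = 2.0
    msg.pose.pose.orientation.w = 1.0
    pub.publish(msg)
    rospy.sleep(1.0)                  # give the latched message time to go out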

Problem with Gazebo initialization

I'm trying to simulate a UAV with Gazebo, but it "falls down" when I start the simulation. What might be the cause?

Apply a body wrench in a local frame in a link

Hi everybody, I am trying to apply a body wrench to a simple Gazebo model written in SDF. I am running Ubuntu 14.04, with Gazebo 6 and ROS Indigo. I am applying this wrench by calling the rosservice /gazebo/apply_body_wrench as explained in http://answers.ros.org/question/11047/applying-a-force-to-a-rigid-body/ . I'd like to express the wrench in a local frame, defined within a link for example, and not in the world frame of Gazebo.

To test this, I made a MWE: a simple SDF model, correctly spawned in Gazebo. A box is placed at the origin of the world frame on the ground (gravity disabled, so the applied force is easy to see). Then I defined a local frame, named "frame1", in the link called "base"; it is rotated by 45 degrees about the Z axis with respect to the world frame. My goal is to apply a wrench to the base link that is always expressed in this "frame1", so in this case I'd like to apply a force along frame1's X axis and see the box moving at 45 degrees with respect to Gazebo's world X and Y axes.

Once I started Gazebo with rosrun and correctly spawned this simple model, I was able to call the rosservice with:

    rosservice call /gazebo/apply_body_wrench '{body_name: "my_robot_bw::base", reference_frame: "my_robot_bw::base", reference_point: { x: 0, y: 0, z: 0 }, wrench: { force: { x: 1, y: 0, z: 0 }, torque: { x: 0, y: 0, z: 0 } }, start_time: 0, duration: -1 }'

Here two issues arise:

1) I get the error

    [ERROR] [1457096195.411824276, 7.968000000]: wrench defined as [base]:[1.000000 0.000000 0.000000, 0.000000 0.000000 0.000000] --> applied as [base]:[1.000000 0.000000 0.000000, 0.000000 0.000000 0.000000]

which seems "normal", according to http://answers.ros.org/question/65077/errors-while-applying-force-on-a-model/ , when you set a reference frame other than empty, map, or world. It's something strange, but it's not a big problem I guess, as long as the service works.

2) The force is applied as if expressed in the world frame, so the box starts moving along the world frame's X axis instead of frame1's X axis. This is the main problem I am having.

So, finally, is there something I am doing wrong? Or should I just always express the wrench in the world frame and rotate it "myself" so it behaves as if it were expressed in the local frame "frame1"? Thanks, Marco.
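Not from the original thread, but since the question ends with whether to rotate the wrench "myself", here is a rough sketch of that manual rotation under the assumptions of the MWE above: express the force in frame1 (45° about Z relative to the world), rotate it into the world frame, and call apply_body_wrench with the world as the reference frame:

    #!/usr/bin/env python
    # Sketch: rotate a locally-expressed force into the Gazebo world frame,
    # then apply it there via /gazebo/apply_body_wrench.
    import rospy
    import numpy
    from tf.transformations import quaternion_from_euler, quaternion_matrix
    from gazebo_msgs.srv import ApplyBodyWrench
    from geometry_msgs.msg import Wrench

    rospy.init_node('local_wrench_sketch')
    rospy.wait_for_service('/gazebo/apply_body_wrench')
    apply_wrench = rospy.ServiceProxy('/gazebo/apply_body_wrench', ApplyBodyWrench)

    # Rotation of frame1 w.r.t. the world (45 degrees about Z, as in the MWE).
    R = quaternion_matrix(quaternion_from_euler(0.0, 0.0, 0.785398163))[:3, :3]
    force_local = numpy.array([1.0, 0.0, 0.0])   # 1 N along frame1's X axis
    force_world = R.dot(force_local)

    wrench = Wrench()
    wrench.force.x, wrench.force.y, wrench.force.z = force_world
    apply_wrench(body_name='my_robot_bw::base',
                 reference_frame='world',
                 wrench=wrench,
                 duration=rospy.Duration(-1))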

pr2 joint state not publishing

Hi everyone, I'm pretty new to ROS and I am trying to follow the tutorial on using the PR2 model with Gazebo: http://wiki.ros.org/pr2_simulator/Tutorials/WorkingWithGazeboOverRos When I fire up the simulator and try to print the joint states for the PR2, I get the following:

    ~ $ rostopic echo /joint_states
    WARNING: no messages received and simulated time is active. Is /clock being published?

I checked /clock and it is being published with no issues. Am I missing some step to get the joint_state messages? I am running ROS Indigo on Linux Mint. Thanks!

homebrew: Failed to detect successful installation of [gazebo]

I had ROS Hydro installed and uninstalled it in order to install ROS Jade. During the install I'm getting an odd error: I'm being told that gazebo is already installed but that it can't be detected. How do I get around this?

    $ rosdep install --from-paths src --ignore-src --rosdistro jade -y
    executing command [brew install gazebo]
    Warning: osrf/simulation/gazebo1-1.9.7 already installed
    ERROR: the following rosdeps failed to install
      homebrew: Failed to detect successful installation of [gazebo]

OS X El Capitan (10.11.3)

**EDIT**

    $ brew tap
    homebrew/python
    homebrew/science
    homebrew/versions
    homebrew/x11
    jlhonora/lsusb
    osrf/simulation
    ros/deps
    totakke/openni

    $ brew list | grep gazebo | xargs brew info
    osrf/simulation/gazebo1: stable 1.9.7, HEAD
    Gazebo robot simulator
    http://gazebosim.org
    /usr/local/Cellar/gazebo1/1.9.7 (738 files, 54.6M) *
      Built from source
    From: https://github.com/osrf/homebrew-simulation/blob/master/gazebo1.rb
    ==> Dependencies
    Build: cmake ✔, pkg-config ✔
    Required: boost ✔, doxygen ✔, freeimage ✔, libtar ✔, ogre ✔, protobuf ✔, protobuf-c ✔, qt ✔, sdformat ✔, tbb ✔, tinyxml ✔
    Optional: bullet ✔, ffmpeg ✘, gts ✘, player ✘, simbody ✔
    ==> Options
    --with-bullet    Build with bullet support
    --with-ffmpeg    Build with ffmpeg support
    --with-gts       Build with gts support
    --with-player    Build with player support
    --with-simbody   Build with simbody support
    --HEAD           Install HEAD version
    osrf/simulation/gazebo5: stable 5.2.1, HEAD
    Gazebo robot simulator
    http://gazebosim.org
    /usr/local/Cellar/gazebo5/5.2.1 (962 files, 111.2M) *
      Built from source
    From: https://github.com/osrf/homebrew-simulation/blob/master/gazebo5.rb
    ==> Dependencies
    Build: cmake ✔, pkg-config ✔
    Required: boost ✔, doxygen ✔, freeimage ✔, libtar ✔, ogre ✔, protobuf ✔, protobuf-c ✔, qt ✔, sdformat ✔, tbb ✔, tinyxml ✔
    Recommended: bullet ✔, simbody ✔
    Optional: dartsim/dart/dartsim4 ✘, ffmpeg ✘, gdal ✘, gts ✘, player ✘
    ==> Options
    --with-dartsim4    Build with dartsim4 support
    --with-ffmpeg      Build with ffmpeg support
    --with-gdal        Build with gdal support
    --with-gts         Build with gts support
    --with-player      Build with player support
    --without-bullet   Build without bullet support
    --without-simbody  Build without simbody support
    --HEAD             Install HEAD version

Gazebo with multiple odometry frames

I have been trying to put together a multiple (n=2) Husky robot simulation for testing purposes. I am struggling with getting the namespaces and tf_prefixes set up in a consistent way. Following a previous discussion [here](http://answers.ros.org/question/226274/is-it-possible-to-use-robot_localization-for-multiple-robots/), which builds on the single-Husky tutorials, I have been able to get almost all the way there. At this point I believe the gist of my issue is how to tell Gazebo to use a tf_prefix for the "odom" frame. My understanding is that robot_localization (ekf_localization) takes the IMU and odometry data generated by Gazebo to produce the odometry messages on the odometry/filtered topic. For the IMU, I am able to edit the robot_description parameters in the xacro file to give the plugin a per-robot frameId (e.g. `r1_tf/base_link` for the robot in the `/r1` namespace). This works as expected and the ekf_localization node is able to work with the IMU observations. What I don't know how to do is prescribe the same frame prefix (e.g. `r1_tf`) for the odometry generated by Gazebo. Is there a way to tell Gazebo which tf frame to use for the odometry messages?

How to use OpenCV to process images

Hi everyone! I'm a beginner with ROS, and I've been focusing on image processing in ROS. My robot is a Turtlebot and I use the Kinect as the camera. Basically, what I need to do is find the changes against the same background. I followed the tutorials and have started using Gazebo, and I tested the Kinect; all is fine. My question is: I am pretty lost, since there is no example or demo for image processing on the wiki. What should my first step be, and what should I do after that? **If there is any good website for image processing with ROS to follow, that would be perfect! Thank you so much!**
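As a possible first step (this is a sketch, not from the original question), the usual pattern is to subscribe to the Kinect's image topic, convert each message to an OpenCV image with cv_bridge, and then run your processing (e.g. background differencing) on that. The topic name below assumes the default Turtlebot Kinect configuration:

    #!/usr/bin/env python
    # Sketch: bridge a ROS image topic into OpenCV for processing.
    import rospy
    import cv2
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()

    def on_image(msg):
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # ... compare 'gray' against a stored background frame here ...
        cv2.imshow('kinect', gray)
        cv2.waitKey(1)

    rospy.init_node('image_processing_sketch')
    rospy.Subscriber('/camera/rgb/image_raw', Image, on_image)
    rospy.spin()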

A more risk taking move_base

I am using the Turtlebot Gazebo simulation. I launched the turtlebot_world, demo_amcl, and demo_gmapping launch files. I am sending goal positions to the Turtlebot using a custom node that sends it 1 metre forward; it is the most basic move_base client node found in tutorials on the web. The difficulty I find is that it successfully sends the Turtlebot to the desired location less than 50% of the time, and it does so while swirling on its axis trying to map its environment. This happens whether I use the goal position node or the 2D Nav Goal tool in Rviz. Is there a way to improve the current move_base behaviour with a simple fine-tuning of parameters, or does a much more risk-taking alternative to move_base already exist? Otherwise, would a custom node that publishes to cmd_vel using laser_scan data be a better option?
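For reference (the question mentions using the standard tutorial-style client), a minimal move_base client that sends a goal 1 m ahead of the robot looks roughly like this; the `base_link` frame and `move_base` action name are the usual defaults, but treat them as assumptions for your setup:

    #!/usr/bin/env python
    # Sketch: send move_base a goal 1 m ahead of the robot and wait for the result.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node('forward_goal_sketch')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'base_link'   # goal expressed in the robot frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo('move_base finished with state %d', client.get_state())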

How to use Gazebo2 internal topics?

I want to use the internal Gazebo topics for getting sensor data from the Gazebo simulator, but only link and model topics are available outside Gazebo. How can I use Gazebo's internal topics?

How to get a Gazebo topic using C++ and ROS?

Hi, I am using ROS, Gazebo, and C++ together to perform some tasks. In Gazebo I launched turtlebot_world.launch. It publishes /gazebo/default/pose/info, which gives the robot's position in Gazebo. How can I read this topic in C++? Does anyone know? I also have the function below, but it doesn't receive any information:

    void subscribe_gazebo_pose()
    {
      ros::NodeHandle ndh;
      ros::ServiceClient gms_c = ndh.serviceClient<gazebo_msgs::GetModelState>("gazebo/default/pose/info");
      //gms_c.waitForExistence();
      gazebo_msgs::GetModelState getmodelstate;
      gms_c.call(getmodelstate);
      geometry_msgs::Pose pose;
      pose.position.x = getmodelstate.response.pose.position.x;
      pose.position.y = getmodelstate.response.pose.position.y;
      cout << pose.position.x << endl;
    }

SDF to URDF conversion

Hello everyone! I am starting from scratch with ROS (I just recently installed Jade) and I wanted to ask for some guidance. I have already followed all the basic ROS tutorials, and my idea is to develop a simulation model with Gazebo that I can control through ROS nodes. I would afterwards move on to the real robot, but only after the simulation model is complete and working properly. I have followed Gazebo's basic tutorials as well (I still have some to go through), but the first problem I encountered is that my robot has mecanum wheels. I read that the easiest way to go is using the planar move plugin, so I looked it up [here](http://gazebosim.org/tutorials?tut=ros_gzplugins), and apparently I should first develop a URDF file of my robot. I have been going through the tutorials (first I had to learn tf), but I am starting to wonder: do I really need the URDF file? Is there any way to work directly with SDF files both in ROS and Gazebo, or at least to generate the URDF from the SDF I already have? I would really appreciate it if someone could clarify this for me.

URDF and SDF?

Hi All, I've seen a bit of discussion about URDF and SDF here and I thought I'd ask the broad question before I try to learn either of them. I am developing a natural language processor to split these out (and into simulations) and I should be aware of which one is better for which purpose (scene vs. robot, if that makes sense, even) or if one is being deprecated or something. Should I be learning one or the other or both? I'm all ears. Sincerely, -Todd

