EXAMINATION PERIOD: May 2025
MODULE CODE: 4CCSAITR
TITLE OF EXAMINATION: Introduction to Robotics – coursework 2
FORMAT OF EXAMINATION: Online submission
SUBMISSION DEADLINE: Thursday 22/04/2025 at 4pm
IMPLEMENT THE SPECIFICATION IN ALL SECTIONS
SUBMISSION PROCESS: Your work must be submitted as a zip file containing the full package second_coursework. Do not submit anything else.
Ensure you upload the correct file to the submission folder
ACADEMIC HONESTY AND INTEGRITY: Students at King’s are part of an academic community that values trust, fairness and respect and actively encourages students to act with honesty and integrity. It is a College policy that students take responsibility for their work and comply with the university’s standards and requirements.
By submitting this assignment, I confirm that this work is entirely my own, or is the work of an assigned or permitted group, of which I am a member, with exception to any content where the works of others have been acknowledged with appropriate referencing.
I also confirm that I have read and understood the College’s Academic Honesty & Integrity Policy: https://www.kcl.ac.uk/governancezone/assessment/academic-honesty-integrity
Misconduct regulations remain in place during this period and students can familiarise themselves with the procedures on the College website at https://www.kcl.ac.uk/campuslife/acservices/academic-regulations/assets-20-21/g27.pdf
Background Story
Our robot, a TurtleBot3, needs to be set up in a new home. The robot knows the map of the house, but it doesn’t know which room is which. Your task is to develop a robot behaviour that builds a “semantic map”, so that the robot knows which room is the Kitchen, Living Room, etc. After building the semantic map, the robot can be used to fetch objects in different rooms.
The map of the environment is as follows:
The robot will have its camera enabled (see camera simulator below), and different images will be published on the camera topic while it moves. The images will show scenes of the different rooms: bedroom, living room, kitchen, bathroom, garage.
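For reference, a minimal sketch of reading images from the camera topic is given below. The topic name /camera/image and the use of cv_bridge are assumptions for illustration; check the actual topic name in the simulator (e.g. with rostopic list).

import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def image_cb(msg):
    # Convert the ROS image message into an OpenCV BGR array
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    # pass `frame` to the object detector here

rospy.init_node('camera_listener')
rospy.Subscriber('/camera/image', Image, image_cb, queue_size=1)
rospy.spin()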
Examples of objects that appear in each room are:
• Kitchen: microwave, refrigerator, bottle, sink, fork, banana, sandwich, chair.
• Garage: bicycle, car, sports ball, skateboard, motorbike.
• Bedroom: bottle, laptop, handbag, tv monitor, sports ball, skateboard, chair, bed.
• Living Room: tv monitor, remote, book, clock, dining table, chair.
• Bathroom: toothbrush, toilet, sink.
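Several objects appear in more than one room (e.g. chair, bottle, sink), so a simple overlap count over the lists above is one way to guess the room type. The dictionary below is a minimal sketch built from those example lists; it is not an exhaustive mapping, and ties would need a policy (e.g. keep observing).

ROOM_OBJECTS = {
    'kitchen': {'microwave', 'refrigerator', 'bottle', 'sink', 'fork',
                'banana', 'sandwich', 'chair'},
    'garage': {'bicycle', 'car', 'sports ball', 'skateboard', 'motorbike'},
    'bedroom': {'bottle', 'laptop', 'handbag', 'tv monitor', 'sports ball',
                'skateboard', 'chair', 'bed'},
    'living room': {'tv monitor', 'remote', 'book', 'clock', 'dining table',
                    'chair'},
    'bathroom': {'toothbrush', 'toilet', 'sink'},
}

def guess_room(detected_labels):
    # Return the room whose example object list overlaps most with the detections
    scores = {room: len(objs & set(detected_labels))
              for room, objs in ROOM_OBJECTS.items()}
    return max(scores, key=scores.get)

print(guess_room(['toilet', 'toothbrush']))  # bathroom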
Important note
We will mark based on the observed behaviour of the robot. This means that if we cannot run the nodes because they don’t follow the naming conventions specified below, or there are execution errors at launch, the mark will be 0 (or whatever mark has been obtained up to the point of error).
ROS Fundamentals
The environment in the provided Stage simulator has the map above.
• Create a package called "second_coursework".
Main node
Create a main node to run your behaviour, in a file that must be called main_node.py. The node must provide an action called “/create_semantic_map”. The action definition file is provided on KEATS (called CreateSemanticMap.action). The goal will be a std_msgs/Empty. The result of the action will consist of six strings, called exactly A, B, C, D, E, F. Each string field must contain the name of the room type corresponding to that label.
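A minimal sketch of the action server setup is shown below. It assumes the message classes that catkin generates from the provided CreateSemanticMap.action file; the callback body is a placeholder for the actual room-visiting behaviour.

import rospy
import actionlib
from second_coursework.msg import CreateSemanticMapAction, CreateSemanticMapResult

def execute_cb(goal):
    result = CreateSemanticMapResult()
    # run the room-visiting behaviour here, then fill in every label A..F
    result.A = 'kitchen'  # placeholder value; set from the real detections
    server.set_succeeded(result)

rospy.init_node('main_node')
server = actionlib.SimpleActionServer('/create_semantic_map',
                                      CreateSemanticMapAction,
                                      execute_cb, auto_start=False)
server.start()
rospy.spin()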
Robot behaviour
The robot will have to perform the following behaviour within the action call:
• Visit every room at least once (4 marks per room, 24 if all rooms visited)
• Once a room has been recognized, use one of the TTS methods covered in class to make the robot say the name of the room out loud (10 marks)
• Use a SMACH State Machine, with different states and correct use of userdata (18 marks); a sketch is given after this list
• Correctly identify the room the robot is in based on the objects it can see. You must use the YOLO version used and taught in class. (3 marks per correct room, 18 marks if all rooms identified)
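As a starting point for the state-machine requirement, the sketch below shows two hypothetical states (VisitRoom and IdentifyRoom) passing a room label through userdata. The real machine will also need the navigation, YOLO detection, and TTS logic.

import rospy
import smach

class VisitRoom(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['arrived'],
                             output_keys=['room_label'])

    def execute(self, ud):
        ud.room_label = 'A'  # placeholder: set to the room just reached
        return 'arrived'

class IdentifyRoom(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['identified'],
                             input_keys=['room_label'])

    def execute(self, ud):
        rospy.loginfo('Identifying room %s', ud.room_label)
        # run YOLO on the camera images here, then say the result with TTS
        return 'identified'

rospy.init_node('smach_sketch')
sm = smach.StateMachine(outcomes=['done'])
with sm:
    smach.StateMachine.add('VISIT', VisitRoom(),
                           transitions={'arrived': 'IDENTIFY'})
    smach.StateMachine.add('IDENTIFY', IdentifyRoom(),
                           transitions={'identified': 'done'})
sm.execute()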
Using the semantic map
Once each room has been identified, the robot will have to go to the living room, where a person will give a spoken order to the robot. To implement this behaviour, create an action in main_node.py of type EmptyAction.action (available on KEATS) called “/get_help”. The action will implement the behaviour to go to the living room, interact with the person, and complete the request.
The order will be of the form “Please bring a/the X to the Y”. The wording may differ slightly, but the structure of the sentence will be similar; a sketch of one way to parse it appears after the list below. Once the order has been received and processed using the speech recognition method taught in the lectures, the robot will have to:
• Go to a room where the object X belongs (for instance, the bedroom or the living room for a tv monitor), where X comes from the spoken command (10 marks)
• Wait for 10 seconds there.
• Go to room Y. (4 marks)
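As referenced above, one way to extract X and Y from the recognised sentence is a permissive regular expression. This is only a sketch, under the assumption that the recogniser returns plain text similar to the example order.

import re

def parse_order(sentence):
    # Extract (object X, room Y) from e.g. 'Please bring the banana to the kitchen'
    m = re.search(r'bring (?:a|an|the)\s+(.+?)\s+to (?:a|an|the)\s+(.+)',
                  sentence, re.IGNORECASE)
    if m is None:
        return None
    return m.group(1).strip(), m.group(2).strip().rstrip('.')

print(parse_order('Please bring the banana to the kitchen'))
# ('banana', 'kitchen')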
Grand Total: 84 marks
Important: Launch and semi-automatic marking
We will run all the submissions using the launch file provided. Therefore, you must make sure that the launch file works and runs your nodes, ideally starting from a clean workspace (with nothing else in it). If the launch file fails to run the nodes, we will deem the submission as “failed to run”.
The launch file requires a parameter video_files, which can be appended at the end of the launch command as: “video_files:=
Only code that runs in the default container is allowed; no external libraries or anything not used during the module may be used.
The automatic marker will stop your code if either of the following conditions are met:
• The robot stays still (without moving at all, nor rotating in place) for more than 30 seconds.
• Your code runs for more than 15 minutes.
Do not modify any of the provided action files. Doing so may create errors when running your code, as it would change the API of the automatic marker. Do not modify the launch file either, as we will run our own (which includes everything in the provided launch file). Therefore, if you add nodes to your launch file, we will not run them.
Test videos
To facilitate the development of the coursework without access to the real robots or a webcam, we provide video/image files and a video node that will reproduce different videos when the robot is in certain locations.
The video behaviour will be as follows:
• When the node starts, it will randomly assign a room type to each of the available rooms (i.e., one run may assign the bedroom to room A and the kitchen to room B, while another run may assign the garage to A and the kitchen to B). Room assignments will be printed on the terminal to help with debugging, but not published to any topic.
• When the robot enters one of the rooms, videos of objects will start to play, in a random order.
• The videos to be played are chosen at random. Every time the robot enters or leaves a room, the playlist will update. When the robot is not in one of the rooms, the video of a corridor will be played.
The videos will show the relevant object for at least 5 seconds.
To run the node, you can put it in your workspace and run it as:
rosrun second_coursework itr_cw_2425AI --video_folder /home/kcl/videos
In order to facilitate testing, you can add an option “--seed” to fix the random generator, thus removing the random component (every run will produce the same results). Example command setting the video folder and the seed:
rosrun second_coursework itr_cw_2425AI --video_folder /home/kcl/videos --seed 2587