Peer & expert reviews
The main form of assessment used at Turing College is the in-person project review, also known as a graded task (previously called a “correction”). Reviewers can be peer learners, JTLs or STLs. These reviews are scheduled via our platform and are required to progress to new projects and modules.
Besides being a way to measure progress, peer reviews are also a core way to learn – both technical and soft skills. Research has shown that by assessing and guiding others, learners significantly increase their own understanding of a topic as well. Furthermore, reviewing the work of others is a daily task for many people working in the IT field (code reviews being the most common example). Practicing this skill is another way for our learners to gain experience that will be relevant to their jobs later.
In certain sprints, STL reviews are conducted by JTLs. This approach is taken because JTLs often possess extensive review experience in these specific sprints, allowing them to provide even more detailed and insightful feedback. However, the majority of sprints, especially those that require significant industry experience, are still handled by STLs. This division of responsibility ensures that learners receive the most relevant and experienced guidance appropriate to the complexity of the tasks they are working on.
To be reviewed by another learner, you will need to spend a review point. Everyone starts with 6 points and gets additional review points by reviewing others. With this system, every learner needs to both give and receive reviews to progress through the program. Seeing the review process from both sides is extremely important – both experiences will allow you to understand the topics you are learning much more deeply.
Besides the information on this page, there is also a video prepared by one of our Junior Team Leads explaining reviews from a learner’s point of view: https://www.loom.com/share/c62b66a276b540a88b56a045b9fe9e4c
Submission and preparation
Once you have completed the work for your graded task, make sure that you have uploaded it to the correct GitHub repository (each sprint has a separate repository) and submit it by clicking the “I’m Done” button. You will then be able to schedule reviews and will immediately get access to the next sprint, so that you can continue learning while waiting for the reviews to happen.
When your work is submitted, our platform allows you to see the times when others are available to review your work. These are shown as purple slots in the calendar once you click on “Peer review” or “STL review”. Just don’t forget: to book a review, you need to have review points available – if you don’t have any, you will only see a button to add timeslots for reviewing others.
If a project requires both an STL and a peer review, there is no requirement for which you should book first. However, some learners prefer to have the peer review first so that they can practice and get feedback before the STL review.
When a review is scheduled, the reviewer is given access to the GitHub repository where the learner’s work should have been uploaded. The reviewer then prepares for the review by going over the work. If there’s something unclear for the reviewer (e.g., the work doesn’t seem to be uploaded to GitHub), they may contact the learner before the review via Discord. Always double-check that you have uploaded your project to the correct GitHub repository!
After you submit your project, you may still make updates to it. However, if you make major changes, the reviewer might not notice them if they have already checked your code (remember, they can do so as soon as you schedule a review). In that case, the reviewer may choose not to review your code, as they will not be in a position to give quality feedback. Our recommendation is therefore to complete your project, get the first review (when two or more reviews are required), make improvements based on it, and only then schedule the second one. If you do make major changes, be sure to inform the STL as soon as possible via Discord.
A review cannot be booked fewer than 6 hours before the timeslot, so that the reviewer has time to notice the booking and prepare by going over the code before the meeting. A review also cannot be booked more than 7 days in the future. Once you submit your project, however, you can immediately access the next sprint in “preview mode”. Preview mode means that you cannot do quizzes or submit the next project, but you can still view the content – so while you wait for your reviews, you do not need to stop learning.
Important: a reviewer usually spends quite some time preparing for the review – and they do it based on the work that is uploaded at the time they open your repository. If nothing is uploaded and the reviewer feels they won’t have enough time to prepare later, they are expected to cancel the review. Similarly, if you make significant updates after booking a review, the reviewer will usually grade your task based on the version they saw during their preparation – for most projects, preparing for a review takes a significant amount of time, and the reviewer is not expected to do it again after the review (the 1-1 meeting itself may not leave enough time to fully re-evaluate the work). It is therefore best to book a review only once you are not expecting to make further changes. If you expect to make changes after your first review, we suggest booking only one review at a time – i.e., book the second review only after you have made the changes prompted by your first one.
The 1-1 review meeting
When the time for a review meeting arrives, the event in your Turing calendar will provide a Zoom meeting link to join.
At the beginning of the scheduled review meeting, the learner and the reviewer agree on the structure of the project presentation, specifically the order in which it will be presented.
The reviewer can ask questions to check whether the learner understands relevant parts of the project (participants can agree whether this is done during the presentation or after it) and assigns a score ranging from 1 to 5 stars to evaluate the quality of the project based on the provided requirements.
On average, such a review takes around 45 minutes, but the duration can vary depending on the type of project and the amount of feedback given. Additionally, learners are required to present their work using a computer rather than a mobile phone, tablet or other device. This ensures a more efficient and effective review process, as learners may be asked to work on their code during the meeting.
Participation in reviews is one of the best methods to enhance your communication and feedback skills. During the review process, it is important to pay attention to how you give and receive feedback. Ensure you actively listen to the other person's perspective and seek to remain objective.
A sign of good reviewers is that they can give feedback in a way that doesn't make learners feel bad about their mistakes or issues they face. Instead, they help learners see these mistakes and issues as chances to improve and grow.
Think about some of the great teachers or mentors you have had and try to recall how they communicated. Did they highlight flaws in knowledge, or did they shift the focus entirely to the topic and how it could be understood better? When providing reviews for others, remember that the way you deliver feedback matters a lot. It's also important to mention that this process presents a learning opportunity for both learners and reviewers.
You should also remember that even though everyone has great intentions, it is impossible to always be perfect. Sometimes a reviewer might be more tired and stressed than usual. This can happen to you, and it can also happen to your reviewer – both at Turing College and in real work. In such cases, how you react to the feedback can make a great difference: if you show that you are listening to the reviewer and take the feedback in a calm, objective manner without trying to turn the conversation into an argument, you can still influence the conversation in a very positive way. A mindset you should avoid at all costs is thinking “The reviewer is doing a poor job – therefore I can act negatively as well!”. Hopefully, it is obvious why such an approach will not lead to anything productive.
Review Guidelines:
Every project has a different number of evaluation criteria, which are based on the project requirements. The assessment process involves assigning a rating of 1 to 5 stars to each evaluation criterion. The meaning of each rating is given below. These guidelines are also present on each project evaluation form.
1 star (Unsatisfactory)
The criterion was not fulfilled at all
OR the learner can’t explain how any solution would address this criterion
OR almost no effort was made at all
2 stars (Below expectations)
There are major issues in the fulfillment of the criterion
OR the learner struggles to explain a large part of the solution
OR, while effort was made, the results are not useful
3 stars (Close but still below the Turing College standard)
There are some issues in the fulfillment of the criterion
OR the learner has some knowledge gaps in the solution application
4 stars (Good – meets the expectations)
There are minor issues in the fulfillment of the criterion
AND the learner demonstrates a good understanding of how their solution works
5 stars (Excellent – meets or exceeds the expectations)
The solution fully satisfies all criteria or even exceeds them
AND the learner shows a deep understanding of how their solution works and can reason about alternatives
How to interpret your score
Up to 50%: This is not a satisfactory result. If maintained throughout all sprints, it would place the learner’s skills at a level insufficient for junior positions in the market.
50–69%: This is not a bad result, but it does not meet the Turing College standards. If maintained throughout all sprints, it would place the learner roughly at the average level among juniors in the market.
70–85%: The learner should be proud of this result. This is the Turing College standard. Graduating the entire program with a score of ~80% in most tasks would mean their skills are above average for juniors in the industry.
85%+: The learner should be very proud of this result! Graduating with a score of close to 100% in most projects would put their skills among the best in the junior or even mid-level market.
Important note:
A learner should never appeal for a higher grade based solely on how much effort they put into the project. In real work settings, the goal is to achieve the best result with as little time and effort as possible. Knowing when to make compromises and gain 80% of the result with 20% of the effort is a very useful skill. The only exception to this rule is when a reviewer wants to give a one-star rating on the assumption that no effort at all was put into meeting the criteria. In such cases, the learner can explain how they actually did put effort into their solution.
Reviewing the reviewer
After the project review, both parties need to enter their feedback for the review to count as completed. The learner who is being reviewed rates their reviewer based on the following criteria:
The reviewer helped me learn useful new things
The reviewer was friendly and respectful, making me feel comfortable during the review
The reviewer was easy to understand
The reviewer was objective during the review, accurately noticing both my weak and strong points
The reviewer was on time
Currently, learners are not able to see how they were rated as reviewers (STLs can see it, detailed feedback included). However, features might be implemented in the future which would allow these ratings to be visible. Currently, this feedback is mostly used when choosing new JTLs – we expect good JTL candidates to leave useful reviews themselves and to get positive feedback from others.
Passing a review
At Turing College, our focus is on cultivating exceptional skills tailored for you and the dynamic job market. We maintain high learning standards to ensure you develop robust technical and soft skills, making your skillset highly attractive to current and future employers. Our curriculum emphasizes real-world readiness, often requiring independent research and problem-solving skills in projects. Remember, help is readily available – join open sessions, ask questions on Discord, or during standup.
We don't settle for average; therefore, completing a project means achieving a review score of 70% or higher in both the peer and STL reviews. We believe that scoring between 70% and 85% in most projects already positions your skills favorably relative to what's required of junior professionals, while exceeding 85% places your skills among the best. Remember to strive for a higher grade based on your project results and not on the effort put into achieving them – much like in real-world scenarios, where the goal is to achieve optimal outcomes with the least amount of effort and time.
Don't be afraid to fail; it is a great learning opportunity, and we expect our learners to encounter challenges during the learning process. An STL can also mark a project as failed regardless of the score if they see that the learner would strongly benefit from improving it – in that case, the score will show up as 0% in the platform.
Overall, scoring over 70% is already a significant achievement, so focus on gaining knowledge and be proud of your progress.
Important: after your reviewer submits their score, you will also need to rate the reviewer. Only after both you and the reviewer have left the feedback will the review count as completed. You can leave your feedback in the relevant project’s page.
If the score you get is less than 70%, you need to go back and make improvements to your project based on the feedback received in the reviews. When you have made these improvements, you will need to get reviewed again before you can progress to the next stage of the program. Besides spending review points, there are normally no downsides to getting a failing mark and doing another review. So if you see that you are spending significantly more time on a project than you should, it might be a good idea to schedule a review to check whether you have already done enough to pass.
Also, as you progress through the program, please remember that the quality of your presentations, whether they are dashboards or other formats, is crucial. Even if all other aspects, such as queries and criteria, are well executed, a poorly made presentation can still lead to failing the project review. Keep in mind that in real-life scenarios, the accuracy of your work and the quality of your code alone aren't enough: if you're unable to clearly present your main goals or insights, the message you intend to convey will not be effectively communicated.
Note that most learners are expected to fail at least one or even a couple of reviews during their studies – it’s a completely normal part of the learning experience! We encourage you not to get discouraged by these situations – in the long run, failed reviews will not have any negative impact. They are only a way for the reviewer to let you know that you can benefit from learning the topic more deeply.
Reviews and deadlines
A deadline generally includes the full completion of the project, meaning the successful completion of reviews – not just submission. This is extremely important to keep in mind because you can never be sure if you will pass the reviews on your first try and when there will be slots available at a very short notice. Because of this, we very strongly advise all learners to always aim to complete their projects well before the deadlines.
Some additional things and exceptions to keep in mind:
If you submit your project at least a day before the deadline and there are no available timeslots until the deadline, book the nearest convenient timeslot after the deadline once it becomes available. If the submission is made at least a day before the deadline and we can see in GitHub that the final version of the project was uploaded at that time, we will not count the deadline as missed. Remember that you should continue working on the next sprint while waiting, since you automatically get preview access to it and do not get extra days for completing it. If you cannot see any suitable times, you should ask for additional review times on Discord (#<program>_reviews_scheduling channel). If we notice learners who are consistently late with their deadlines – who submit on time but never actively try to get the reviews done on time – we might still consider them as having missed their deadlines.
The ‘leftover’ time from a deadline moves to the next sprint. So if you complete a project 5 days before the deadline, you will have 5 extra days for the next sprint.
If you complete the project after the deadline, you will have less time left for the next sprint – that’s why it is very important to request deadline extensions (via the deadline extension form in the platform).
Learners who have mandatory deadlines and have missed them may have their access to the platform suspended and be asked to contact support to discuss their individual situation.
How do I review others and gain review points?
In order to review others and earn review points, you should use the Turing College calendar to set your available times for reviews. Learners who need to be reviewed will see all the available times when someone can review them in their calendars. When they select a timeslot, the system selects a reviewer at random from all of the people who marked themselves as available at that time. Learners with fewer review points are always prioritized.
Once someone books a review with you, you will get notified via Discord and email.
To sum up, a meeting is arranged when a learner picks a time that somebody capable of reviewing them has marked as available. Please make sure that you really are available at the times that you mark as available for reviewing. If somebody books a review with you and you don’t show up, you will be marked as “no show”. If this happens often, it may lead to suspension from the program.
For more details about how to use the platform to schedule reviews (both as a learner to be reviewed and the reviewer), check the Platform know-how page.
Below you can find a list of advice when performing reviews for others:
Take adequate time to prepare: The more prepared you are, the more likely you'll give effective feedback. During your preparation, take notes that you can refer to in the session. If you can, create a document from your notes for the learner to use in the future.
Develop trust: Trust is crucial. If the learner does not trust you, you've already lost. The learner has to trust in your competence and in your intention to be helpful. Without trust, they will not feel at ease with you. They may not absorb your advice and they may not learn from your knowledge. Before the learner presents, make GENUINE positive comments about their project. This is one way to help the learner trust you. You show that you took time with their project, you see the good aspects of it, and you care. The entire session has to foster trust in one way or another.
Start with questions: At the beginning of the review session, break the ice with a few questions. For example, "How are things going?" "What are you proud of in this project?" "What gave you the biggest challenge?" "How can I be most helpful to you today?"
Be empathetic: Put yourself in the learner's shoes. Listen and try to understand where the learner is coming from. Let them know that you're offering feedback because you genuinely want to see them improve. This will help them be more open to accepting your comments and will make it easier for them to put your advice into practice.
Be clear: The clearer and more specific you are, the less room there is for learners to misinterpret your advice. Avoid general statements and stick to observable facts that can help the learner improve. If your feedback is subjective, make sure you indicate that you are expressing an opinion and there are other ways of looking at the same thing.
Be humble: Don't assume that you know it all. State your intention that you want to be helpful. Use "I" statements like, "I noticed that..." or "Based on what I know..." or "I feel that..." or "I know you put a lot of time into this project. I want to help you get the most out of this session and I will suggest some changes you may want to implement in your future projects."
Balance positive and constructive feedback: Positive feedback helps the learner be open to taking constructive feedback. That's why it's important to start with the positive. Remember to be specific with your praise and don't be afraid to give constructive feedback that will help the learner improve. Honesty always comes first. Don't praise the learner just for the sake of praising. If the project really has nothing good in it, make that clear. This is, in fact, one way to build trust. "I see my job as helping you succeed at Turing College and I would not be doing my job if I was not fully honest with you. I don't think you put enough effort in this project and I struggled to find something good in it. What do you think of my point of view?"
Focus on the future: Be focused on how you can help the learner get closer to his/her ultimate goals. Help them understand that with small and continuous improvements, they will achieve great results.
Steer clear of the word, "but": Sometimes it's tempting to say: “I think you did a good job but…” Instead, try to list your points separately. For example, "You did a great job on your visualizations! A suggestion for further improvement would be to make your conclusions more comprehensive. Include more points from your analysis."
Explain in simple language and use analogies to help the learner understand: Use simple language that a teenager can understand. If you don't fully understand something yourself, say so and don't confuse the learner. Using analogies is an effective way to explain complex topics.
Be considerate of the learner's level of knowledge: It's important to consider the learner's knowledge at that particular time. They may not know something because they are at an earlier stage in their learning. It's unfair to overwhelm them with complaints about things that they are not supposed to have learned yet.
Engage the learner in a dialogue: Encourage learners to share their perspectives. Be open to understanding their point of view. After giving feedback on a complex idea, check to see if they got it. A good way to check is by asking them to explain what they understood.
Teach learners something useful: Aim to teach learners at least one useful thing during a feedback session. If you cannot explain it simply, you may not really understand it yourself. Pick a topic you are fully familiar with.
Avoid replicating ineffective patterns: Resist the temptation to imitate the ineffective ways in which others provided feedback for you. Instead, focus on effective practices and aim to be a better coach.
Summarize the review session: Toward the end of the review session, bring the focus to the core ideas from the session. This summary should not overwhelm the learner. Focus on the essentials based on where the learner is in his/her level of knowledge. If you can, let the learner know if the project would pass or not.
End with questions: At the end of the review session, ask learners how the session landed for them, with the purpose of improving the way you give feedback. You can use questions like, "What’s one thing you think I should do more of?" "What's one thing that would have made this session better for you?"
Follow the guidelines above with your written feedback: In your final step as a reviewer, follow all the previous guidelines in your written feedback. Leave yourself enough time to do a good job in this step, which will remain the permanent record of the session. Balance positive and negative points in separate sections. Be considerate of the learner's level of knowledge. Focus on the future. Be specific, clear, fair and genuine.
Bonuses for having many review points
Some of you are or will be extremely active in performing peer reviews. Therefore we would like to reward you for your effort and encourage more learners to do peer reviews actively! You can exchange your review points for some sweet Turing College merchandise:
👉 25 points = TC T-shirt
👉 35 points = TC hoodie/sweater (you choose which one you prefer).
A couple of details:
You will lose the review points used for the rewards, so it might be a good idea to have at least a couple extra so that you can schedule your own reviews.
You can only receive one of each reward. Hoodie/Sweater both count as the same reward.
If you have enough points, you can redeem these rewards anytime by filling out this form: branded clothing
Some other things to keep in mind:
Please be sure to read the section about academic honesty here. Serious instances of plagiarism can result in termination of studies.
If you book a review and cancel it, you will not get the point refunded. This is because by booking a review, you are blocking others from taking it until your cancellation and because the other person is likely to start working on checking your submission.
When booking a peer review, you might be offered to choose an STL if there are no learners available. However, the same STL can never perform both reviews for the same learner for the same sprint.
Different evaluation criteria can have different weights when calculating the final score.
Most projects have a criterion called “general understanding of the topic”. It allows the reviewer to evaluate not only whether the learner has completed the other criteria, but whether they actually grasp the concepts of the sprint. Sample questions that can be asked during a review to test this general understanding are given to both learners and reviewers – however, additional questions can be asked. This is also the criterion influenced the most by the 1–1 meeting itself rather than the work uploaded to GitHub. A learner who has all the correct answers in GitHub but is not able to explain any of it in the review is likely to get the minimum mark for this criterion. Similarly, if a learner makes mistakes but is able to explain them well during the meeting, it can lead to a higher mark. This criterion usually has the greatest weight of all.
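To make the weighting concrete, here is a minimal sketch of how a final percentage could be combined from per-criterion star ratings. The criteria names, the weight values, and the linear star-to-percent mapping are purely illustrative assumptions – they are not Turing College's published formula.

```python
# Hypothetical sketch: combining 1-5 star ratings into a weighted final score.
# All names, weights, and the star-to-percent mapping are assumptions for
# illustration, not the platform's actual formula.

def weighted_score(ratings: dict[str, int], weights: dict[str, float]) -> float:
    """Combine per-criterion star ratings (1-5) into a 0-100% score.

    Each criterion's stars are mapped linearly to a fraction (stars / 5),
    multiplied by that criterion's weight, and normalized by the total weight.
    """
    total_weight = sum(weights.values())
    weighted_sum = sum(weights[name] * (stars / 5) for name, stars in ratings.items())
    return 100 * weighted_sum / total_weight

# Example: three hypothetical criteria, with "general understanding"
# weighted twice as heavily as the others.
ratings = {
    "Code quality": 4,
    "Requirements met": 5,
    "General understanding of the topic": 3,
}
weights = {
    "Code quality": 1.0,
    "Requirements met": 1.0,
    "General understanding of the topic": 2.0,
}

print(round(weighted_score(ratings, weights), 1))  # 75.0
```

Note how the heavier weight on the understanding criterion pulls the result down to 75% even though two of the three criteria scored 4 stars or better – an unweighted average of the same ratings would give 80%.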
Different reviewers, whether they’re STLs, JTLs or regular learners, might focus their feedback on different parts of the projects. In some cases, the feedback of different people might even appear contradictory. If you get such feedback, keep in mind that this very likely reflects how your work might be evaluated in the real world – people from different backgrounds and different companies might place slightly different weight on different nuances. Use this as an opportunity to get a wider view. If something still feels unclear after a review, start a discussion about it in Discord where everyone can see it. Such discussions will allow you to understand whether there were actually multiple possible ways to approach something, or whether one specific suggestion is actually better than others. There’s a more detailed section about this further down this page, called “Additional notes on subjectivity of reviews”.
In some projects, there might be multiple different ways to correctly solve a project. Just because you have passed your reviews during which STLs/peers said that your solution was right, does not mean that a peer who has done it differently is necessarily wrong. When reviewing others, be open to different solutions. If you disagree with something, explain why you think it is incorrect and why your solution works better, but don’t just state it without a justification. Very importantly, simply saying “STL/JTL told me my solution was correct” is not a good explanation – if you feel this is your only justification, it is very likely that you did not fully understand the project yourself yet.
Disagreements in peer reviews are expected sometimes (and sometimes, even in STL reviews!) – just as they are in real workplaces and real code reviews. Focus on disagreeing politely, listening to the other side, justifying your views rationally, and being open to understanding your colleague. You don’t necessarily need to leave the review with a mutual agreement, you just need to have understood the other side as well as you could. Always thank your peer for sharing their opinion – it’s actually much harder, but many times much more productive, to disagree with someone in a peer review than simply stating that everything is great! Finally, if you later realize something new about the issue that you were discussing, you are encouraged to message the reviewer/reviewed learner with your new insights. Do it both when you realize that you were wrong and also when you find some information about why you were likely right. This will allow both parties to learn even more from each other.
If your project requires you to work with an online file (e.g., Google Spreadsheets in the second sprint of the Data Analytics program), you can simply upload a .txt file to your repository with a link to the Google Spreadsheet containing your work. Make sure that the online file has the correct permissions though – everyone with the link should have view access.
Additional notes on subjectivity of reviews
Subjective does not mean random. If you have a good answer but present it in an extremely unconfident, unconvincing way, the reviewer might subjectively rate you lower, believing that your lack of confidence indicates that you yourself are unsure whether the answer is correct. This is a common 'shortcut' for evaluating the work and knowledge of others, especially when the evaluator is not very knowledgeable in that topic (which can happen often in peer reviews). In real work, technical specialists will often present their findings, conclusions and suggestions to completely non-technical people – subjectivity will play a huge role in these cases! Practice how to deal with it and even use it to your advantage.
Also, approach reviews with the goal of getting to know the different viewpoints from which others can be subjective. This can allow you to prepare even better next time by anticipating what opinions others might have. E.g., if your solution includes some arbitrary choice X, and one reviewer says that Y would be better while another prefers Z, you might devise a solution W for future reviews that incorporates all of these ideas, demonstrating that you've considered the viewpoints behind X, Y and Z in order to arrive at W.
Keep in mind that ratings of different reviewers might differ significantly. This is because different reviewers might focus on different parts of your work and potentially notice some strong/weak points that the other did not. You should therefore not assume that passing one review automatically means that you deserve a pass in the other one (or that a very high mark in one review means you should get a high mark in the other one). If such a case happens, view it as an opportunity to learn the topic even more deeply, regardless of whether the failing grade was the more or less objective one.
Finally, try not to worry too much about the exact score you get – there isn't much difference between 80% and 90%; a passing score means that your level of knowledge is good enough according to the reviewer.
Additional notes on open-endedness of reviews
Ambiguous, incomplete tasks are a daily occurrence in real work, whether in DA, DE, WD, DS or many other fields. Usually, this is not something that can be fixed by having "better" stakeholders who could describe problems more clearly. Instead, ambiguity and open-endedness are inherent in most real-world tasks when you are trying to find the best solutions to non-trivial questions. Other times you might get vague task descriptions simply because your colleagues don't have enough time to write very detailed ones.
This almost unavoidable lack of precise requirements means that great specialists are expected to be very proactive about the tasks they get: asking clarifying questions, introducing their own ideas, and really thinking about the actual goal of the task instead of simply following step-by-step instructions. For example, in the web development industry, one of the most problematic and avoided kinds of developers is the one with the mindset of "I will do exactly what the instructions say, no questions asked, and nothing more". We want our learners to avoid this pitfall.
The way we teach a proactive mindset is by giving increasingly more open-ended projects as the program goes on. However, balancing the level of open-endedness is a tough task sometimes, so we are constantly adjusting the descriptions and the criteria in all our programs. The feedback you give to JTLs, STLs and TC staff helps a lot with this. If you see that some projects feel inadequately vague, please reach out! In some cases, it could be us missing some obvious things that we should have added in the project descriptions. A simple way to do it is via the support chat in the platform.
Showing up for reviews
Missing a review that you have scheduled yourself is usually considered a major issue, as it shows a lack of time-planning skills and disrespect towards the reviewer. It wastes the reviewer’s time and prevents others from getting a review at that time slot. If a student does this multiple times, it can be a reason for their study contract to be terminated. A reviewer is allowed to mark a student as a ‘no-show’ if the student is more than 5 minutes late.
Similarly, missing a review when you are the reviewer is viewed in much the same way. The only exception is when someone books a review at very short notice and it’s hard to notice it in time. Even in such cases, if you expect that you won’t be able to check your email/Discord in time, it’s strongly recommended to remove such review slots. E.g., if you have a review slot at 7 a.m., nobody has booked it by the evening, and you don’t expect to check your notifications before the morning, you should remove the slot, since someone might book it during the night.
If we notice a missed review, we will likely contact you directly to identify the reasons and to discuss the exact terms going forward. We will not suspend access to the program without a warning first, however.
Besides successfully progressing through project reviews, another requirement that learners need to meet in order to be eligible for the Endorsement or to maintain their scholarship is attending at least 1 learning event per week (stand-up, open session, project review, Virtual Classroom), unless you inform us about a vacation beforehand. UZT-funded learners have additional requirements.
Turing College