Search results for: National Australian Built Environment Rating System
736 Pulsed-Wave Doppler Ultrasonographic Assessment of the Maximum Blood Velocity in Common Carotid Artery in Horses after Administration of Ketamine and Acepromazine
Authors: Saman Ahani, Aboozar Dehghan, Roham Vali, Hamid Salehian, Amin Ebrahimi
Abstract:
Pulsed-wave (PW) Doppler ultrasonography is a non-invasive, relatively accurate imaging technique that can measure blood flow velocity. Imaging can be performed on the common carotid artery, one of the main vessels supplying blood to vital organs. In horses, factors such as susceptibility to depression of the cardiovascular system and their large muscular mass make them vulnerable to changes in blood velocity. One of the most important factors causing blood velocity changes is the administration of anesthetic drugs, including ketamine and acepromazine. Thus, in this study, the pulsed-wave Doppler technique was used to assess the maximum blood velocity in the common carotid artery following administration of ketamine and acepromazine. Six male and six female healthy Kurdish horses weighing 351 ± 46 kg (mean ± SD) and aged 9.2 ± 1.7 years (mean ± SD) were housed under animal welfare guidelines. After fasting for six hours, the normal blood flow velocity in the common carotid artery was measured using a pulsed-wave Doppler ultrasonography machine (BK Medical, Denmark) and a high-frequency linear transducer (12 MHz) without any sedative drugs, serving as the control. The same procedure was repeated after each individual received the following medications: 1.1 and 2.2 mg/kg ketamine (Pfizer, USA), and 0.5 and 1 mg/kg acepromazine (RACEHORSE MEDS, Ukraine), with an interval of 21 days between the administration of each dose and/or drug. The ultrasonographic study was done five (T5) and fifteen (T15) minutes after injecting each dose intravenously. Lastly, the statistical analysis was performed using SPSS software version 22 for Windows, and a P value less than 0.05 was considered statistically significant. Five minutes after administration of ketamine (1.1, 2.2 mg/kg) in both male and female horses, the blood velocity decreased to 38.44 and 34.53 cm/s in males and 39.06 and 34.10 cm/s in females in comparison to the control group (39.59 and 40.39 cm/s in males and females, respectively), while administration of 0.5 mg/kg acepromazine led to a significant rise (73.15 and 55.80 cm/s in males and females, respectively) (p<0.05). Thus, the most drastic change in blood velocity, regardless of sex, was produced by the latter drug/dose. For both medications and both sexes, the higher dose led to a lower blood velocity than the lower dose of the same drug. In all experiments in this study, the blood velocity approached its normal value at T15. In another study comparing the blood velocity changes caused by ketamine and acepromazine in the femoral arteries, the most drastic changes were attributed to ketamine; however, in this experiment, the maximum blood velocity was observed in the common carotid artery following administration of acepromazine. Therefore, further experiments with the same medications are suggested, using pulsed-wave Doppler to measure blood velocity changes in the femoral and common carotid arteries simultaneously.
Keywords: acepromazine, common carotid artery, horse, ketamine, pulsed-wave Doppler ultrasonography
Procedia PDF Downloads 128
735 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks
Authors: Nafisa Mahbub, Hajo Ribberink
Abstract:
Electrification of long-haul trucks has been discussed as a potential strategy for decarbonization. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries for the EVs. These operations will add significant environmental burdens, and there is a real risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component in an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is an 'electrified road' (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for charging infrastructure and for the truck batteries, the study included the contributions of both in the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging. The investigated materials for the charging technology and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, different charging scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study considered trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery for the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. The material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology material were also investigated.
Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug-in charger
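The bottom-up accounting described in this abstract can be illustrated with a short sketch. The per-unit material intensities and fleet size below are placeholders chosen for illustration only, not values from the study; only the 500 km corridor and the 800 kWh versus 200 kWh battery assumption come from the abstract.

```python
# Illustrative bottom-up material-use model for the three charging scenarios.
# All material intensities and the fleet size are placeholder assumptions, NOT study data.

HIGHWAY_KM = 500                     # electrified corridor length (from the abstract)
FLEET_SIZE = 1000                    # assumed number of trucks served (placeholder)

# Total infrastructure material for the corridor, kg (placeholder values)
infrastructure_kg = {
    "plug_in":   {"Cu": 2.0e5, "Fe": 8.0e5, "Al": 1.0e5, "Li": 0.0},
    "catenary":  {"Cu": 4.5e5, "Fe": 1.3e6, "Al": 1.5e5, "Li": 0.0},
    "inductive": {"Cu": 3.0e5, "Fe": 6.0e5, "Al": 1.0e5, "Li": 0.0},
}

battery_kwh = {"plug_in": 800, "catenary": 200, "inductive": 200}   # from the abstract
battery_kg_per_kwh = {"Cu": 0.9, "Fe": 0.5, "Al": 1.1, "Li": 0.1}   # placeholders

def total_material(scenario):
    """kg of each metal = corridor infrastructure + batteries for the whole fleet."""
    out = {}
    for metal, infra in infrastructure_kg[scenario].items():
        batteries = battery_kg_per_kwh[metal] * battery_kwh[scenario] * FLEET_SIZE
        out[metal] = infra + batteries
    return out

for s in ("plug_in", "catenary", "inductive"):
    print(s, {m: round(v) for m, v in total_material(s).items()})
```

With such a structure, the smaller 200 kWh batteries of the eroad scenarios trade off against the extra corridor infrastructure, which is the comparison the study quantifies.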
Procedia PDF Downloads 51
734 Hydrological Challenges and Solutions in the Nashik Region: A Multi Tracer and Geochemistry Approach to Groundwater Management
Authors: Gokul Prasad, Pennan Chinnasamy
Abstract:
The degradation of groundwater resources, attributed to factors such as excessive abstraction and contamination, has emerged as a global concern. This study delves into the stable isotopes of water (δ18O and δ2H) in a hard-rock aquifer situated in the Upper Godavari watershed, an agriculturally rich region in India underlain by basalt. The high groundwater draft (>90%) poses significant risks; understanding groundwater sources, flow patterns, and their environmental impacts is pivotal for researchers and water managers. The region has faced five droughts in the past 20 years, four of which are categorized as medium. Recharge rates are variable and contribute very little to groundwater. The rainfall pattern shows vast variability, with the region receiving seasonal monsoon rainfall for just four months and minimal rainfall during the rest of the year. This research closely monitored monsoon precipitation inputs and examined spatial and temporal fluctuations in δ18O and δ2H in both groundwater and precipitation. By discerning individual recharge events during monsoons, it became possible to identify periods when evaporation led to groundwater quality deterioration, characterized by elevated salinity and stable isotope values in the return flow. The locally derived meteoric water line (LMWL) (δ2H = 6.72 * δ18O + 1.53, r² = 0.6) provided valuable insights into the groundwater system. The leftward shift of the Nashik LMWL relative to the global meteoric water line (GMWL) indicated groundwater evaporation (-33 ‰), supported by spatial variations in electrical conductivity (EC) data. Groundwater in the eastern and northern watershed areas exhibited higher salinity (>3000 µS/cm), extending over more than 40% of the area, compared to the western and southern regions, due to geological disparities (alluvium vs basalt). The findings emphasize meteoric precipitation as the primary groundwater source in the watershed. However, spatial variations in isotope values and chemical constituents indicate other contributing factors, including evaporation, groundwater source type, and natural or anthropogenic (specifically agricultural and industrial) contaminants. Therefore, the study recommends focused hydrogeochemistry and isotope analysis in areas with strong agricultural and industrial influence for the development of holistic groundwater management plans to protect the quantity and quality of the groundwater aquifers.
Keywords: groundwater quality, stable isotopes, salinity, groundwater management, hard-rock aquifer
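For readers unfamiliar with how a local meteoric water line such as the one quoted above is derived, the following Python sketch fits δ2H = a·δ18O + b by ordinary least squares and reports r². The isotope values are illustrative placeholders, not measurements from the Nashik study.

```python
# Minimal sketch of deriving a local meteoric water line (LMWL) of the form
# d2H = a * d18O + b, as quoted above (d2H = 6.72 * d18O + 1.53, r^2 = 0.6).
# The sample values below are illustrative placeholders, not study data.
import numpy as np

d18O = np.array([-6.5, -5.8, -4.9, -3.2, -2.1, -1.0])      # per mil (assumed)
d2H  = np.array([-42.0, -38.5, -30.0, -20.5, -12.0, -6.0])  # per mil (assumed)

a, b = np.polyfit(d18O, d2H, 1)            # slope and intercept of the LMWL
pred = a * d18O + b
r2 = 1 - np.sum((d2H - pred) ** 2) / np.sum((d2H - d2H.mean()) ** 2)

print(f"LMWL: d2H = {a:.2f} * d18O + {b:.2f}  (r^2 = {r2:.2f})")
# A slope well below that of the global meteoric water line (~8) and a low
# intercept are the signatures of evaporative enrichment discussed above.
```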
Procedia PDF Downloads 47
733 Collateral Impact of Water Resources Development in an Arsenic Affected Village of Patna District
Authors: Asrarul H. Jeelani
Abstract:
Arsenic contamination of groundwater and its health implications in the lower Gangetic plain of Indian states started being reported in the 1980s. The same period was declared the first water decade (1981-1990), aiming to achieve 'water for all.' To fulfill this aim, the Indian government, with the support of international agencies, installed millions of hand-pumps through water resources development programs. The hand-pumps improve the accessibility of the groundwater, but its over-extraction increases the chances of mixing of trivalent arsenic, which is more toxic than the pentavalent arsenic of dug well water in the Gangetic plain and has different physical manifestations. Now, after three decades, Bihar (middle Gangetic plain) is also facing arsenic contamination of groundwater and its health implications. Objective: This interdisciplinary research attempts to understand the health and social implications of arsenicosis among different castes in Haldi Chhapra village and to find the association of these ramifications with water resources development. Methodology: The study used a concurrent quantitative-dominant mixed method (QUAN+qual). The researcher employed household surveys, social mapping, interviews, and participatory interactions, and used secondary data for retrospective analysis of hand-pumps and implications of arsenicosis. Findings: The study found that 88.5% (115) of households have hand-pumps as a source of water; however, 13.8% use purified bottled water and 3.6% use combinations of hand-pump, bottled water, and dug well water for drinking purposes. Among the population, 3.65% of individuals have arsenicosis, and 2.72% of children between the ages of 5 and 15 years are affected. The caste variable also emerged through quantitative as well as geospatial analysis: 5.44% of individuals with manifest arsenicosis belong to scheduled castes (SC), 3.89% to extremely backward castes (EBC), 2.57% to backward castes (BC), and 3% to others. Among three clusters of arsenic-poisoned locations, two belong to SC and EBC. The village, being arsenic affected, faces discrimination, and affected individuals also face discrimination, isolation, stigma, and problems in getting married. The forceful intervention to install hand-pumps in the first water decade and the later restructuring of the dug wells destroyed a conventional method of dug well cleaning. Conclusion: The common manifestation of arsenicosis has increased by 1.3% within a span of six years in the village. This raises the need for setting up a proper surveillance system in the village. It is imperative to consider the social structure in any arsenic mitigation program, as this research reveals caste as a significant factor. The health and social implications found in the study were retrospectively analyzed as the collateral impact of water resources development programs in the village.
Keywords: arsenicosis, caste, collateral impact, water resources
Procedia PDF Downloads 108
732 Convention Refugees in New Zealand: Being Trapped in Immigration Limbo without the Right to Obtain a Visa
Authors: Saska Alexandria Hayes
Abstract:
Multiple Convention Refugees in New Zealand are stuck in a state of immigration limbo due to a lack of defined immigration policies. The Refugee Convention of 1951 does not grant the right to be issued a permanent right to live and work in the country of asylum. A gap in New Zealand's immigration law and policy has left Convention Refugees without the right to obtain a resident or temporary entry visa. The significant lack of literature on this topic suggests that the lack of visa options for Convention Refugees in New Zealand is a widely unknown or unacknowledged issue. Refugees in New Zealand enjoy the right of non-refoulement contained in Article 33 of the Refugee Convention 1951, whether lawful or unlawful. However, a number of rights contained in the Refugee Convention 1951, such as the right to gainful employment and social security, are limited to refugees who maintain lawful immigration status. If a Convention Refugee is denied a resident visa, the only temporary entry visa a Convention Refugee can apply for in New Zealand is discretionary. The appeal cases heard at the Immigration and Protection Tribunal establish that Immigration New Zealand has declined resident and discretionary temporary entry visa applications by Convention Refugees for failing to meet the health or character immigration instructions. The inability of a Convention Refugee to gain residency in New Zealand creates a dependence on the issue of discretionary temporary entry visas to maintain lawful status. The appeal cases record that this reliance has led to Convention Refugees' lawful immigration status being called into question, temporarily depriving them of the rights that the Refugee Convention 1951 grants to lawful refugees. In one case, the process of applying for a discretionary temporary entry visa led to a lawful Convention Refugee being temporarily deprived of the right to social security, breaching Article 24 of the Refugee Convention 1951. The judiciary has stated that a constant reliance on the issue of discretionary temporary entry visas for Convention Refugees can lead to a breach of New Zealand's international obligations under Article 7 of the International Covenant on Civil and Political Rights. The appeal cases suggest that, despite successful judicial proceedings, at least three persons have been made to rely on the issue of discretionary temporary entry visas, potentially indefinitely. The appeal cases establish that a Convention Refugee can be denied a discretionary temporary entry visa and become unlawful. Unlawful status could ultimately breach New Zealand's obligations under Article 33 of the Refugee Convention 1951, as it would procedurally deny Convention Refugees asylum. It would force them to choose between the right of non-refoulement and leaving New Zealand to seek the ability to access all the human rights contained in the Universal Declaration of Human Rights elsewhere. This paper discusses how the current system has given rise to these breaches and emphasizes the need to create a designated temporary entry visa category for Convention Refugees.
Keywords: domestic policy, immigration, migration, New Zealand
Procedia PDF Downloads 102
731 The Current Home Hemodialysis Practices and Patients’ Safety Related Factors: A Case Study from Germany
Authors: Ilyas Khan, Liliane Pintelon, Harry Martin, Michael Shömig
Abstract:
The increasing costs of healthcare, on one hand, and the rise in the aging population and associated chronic disease, on the other hand, are putting an increasing burden on the current health care system in many Western countries. For instance, chronic kidney disease (CKD) is a common disease, and in Europe the cost of renal replacement therapy (RRT) accounts for a very significant share of the total health care cost. However, recent advancements in healthcare technology provide the opportunity to treat patients at home in their own comfort. It is evident that home healthcare offers numerous advantages, notably low costs and high patient quality of life. Despite these advantages, the uptake of home hemodialysis (HHD) therapy is still low, in particular in Germany. Many factors account for the low uptake of HHD. However, this paper focuses on the patient safety-related factors of current HHD practices in Germany. The aim of this paper is to analyze the current HHD practices in Germany and to identify risk-related factors, if any exist. A case study has been conducted in a dialysis organization which consists of four dialysis centers in the south of Germany. In total, these dialysis centers have 350 chronic dialysis patients, of whom four are on HHD. The centers have 126 staff, which includes six nephrologists and 120 other staff, i.e., nurses and administration. The results of the study revealed several risk-related factors. Most importantly, these centers do not offer allied health services at the pre-dialysis stage, and the HHD training did not have an established curriculum; however, they have just recently developed the first version. Only a soft copy of the machine manual is offered to patients. Surprisingly, the management was not aware of any standard available for home assessment and installation. The home assessment is done by a third party (i.e., the machine and equipment provider), and they may not consider the hygienic quality of the patient's home. The type of machine provided to patients at home is similar to the one in the center. This model may not be suitable at home because of its size and complexity, even though portable hemodialysis machines specially designed for home use, such as the NxStage series, are available in the market. Besides the type of machine, no assistance is offered for space management at home, in particular for placing the machine. Moreover, the centers do not offer remote assistance to patients and their carers at home; however, telephonic assistance is available. Furthermore, no alternative is offered if a carer is not available. In addition, the centers lack medical staff, including nephrologists and renal nurses.
Keywords: home hemodialysis, home hemodialysis practices, patients' related risks in the current home hemodialysis practices, patient safety in home hemodialysis
Procedia PDF Downloads 119
730 Socioeconomic Burden of Life Long Disease: A Case of Diabetes Care in Bangladesh
Authors: Samira Humaira Habib
Abstract:
Diabetes has profound effects on individuals and their families. If diabetes is not well monitored and managed, it leads to long-term complications and a large and growing cost to the health care system. The prevalence and socioeconomic burden of diabetes, and the relative return on investment from eliminating or reducing this burden, are therefore important to quantify. The socioeconomic cost burden of diabetes is well explored in developed countries but almost absent from the literature of developing countries like Bangladesh. The main objective of the study is to estimate the total socioeconomic burden of diabetes. It is a prospective longitudinal follow-up study which is analytical in nature. Primary and secondary data were collected from patients undergoing treatment for diabetes at the out-patient department of the Bangladesh Institute of Research & Rehabilitation in Diabetes, Endocrine & Metabolic Disorders (BIRDEM). Of the 2115 diabetic subjects, females constitute around 50.35% of the study subjects, and the rest are male (49.65%). Among the subjects, 1323 have controlled and 792 have uncontrolled diabetes. Cost analysis of the 2115 diabetic patients shows that the total cost of diabetes management and treatment is US$ 903,018, with an average of US$ 426.95 per patient. Among direct costs, medical treatment at the hospital, along with investigations, constitutes most of the cost of diabetes. The average hospital cost is US$ 311.79, which is an alarming figure for diabetic patients. Among indirect costs, the cost of productivity loss (US$ 51,110.1) is the highest of all indirect items; together they constitute a total indirect cost of US$ 69,215.7. The incremental cost of intensive management of uncontrolled diabetes is US$ 101.54 per patient, the event-free time gained in this group is 0.55 years, and the life years gained are 1.19 years. The incremental cost per event-free year gained is US$ 198.12. The incremental cost of intensive management of the controlled group is US$ 89.54 per patient, the event-free time gained is 0.68 years, and the life years gained are 1.12 years. The incremental cost per event-free year gained is US$ 223.34. The EuroQoL difference between the groups is found to be 64.04. The cost-effectiveness ratio is found to be US$ 1.64 per effect in the case of controlled diabetes and US$ 1.69 per effect in the case of uncontrolled diabetes. Thus, management of diabetes is much more cost-effective. The cost profile of young type 1 diabetic patients corresponded to the upper socioeconomic class, and cost increased with the duration of diabetes. The dietary pattern showed that macronutrient intake and cost are significantly higher in the uncontrolled group than in their counterparts. Proper management and control of diabetes can decrease the cost of care in the long term.
Keywords: cost, cost-effective, chronic diseases, diabetes care, burden, Bangladesh
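As background for the incremental figures reported above, an incremental cost-effectiveness ratio is simply the incremental cost divided by the incremental effect. The short Python sketch below shows the generic computation; the numbers are illustrative placeholders, not the study's figures, which may include additional cost components.

```python
# Generic sketch of an incremental cost-effectiveness ratio (ICER):
# incremental cost divided by incremental effect (here, event-free years gained).
# Values are illustrative placeholders, not figures from the study.
def icer(incremental_cost_usd: float, incremental_effect: float) -> float:
    return incremental_cost_usd / incremental_effect

# Example with assumed values: US$ 100 extra cost per patient, 0.5 event-free years gained.
print(round(icer(incremental_cost_usd=100.0, incremental_effect=0.5), 2))  # 200.0 US$ per event-free year
```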
Procedia PDF Downloads 147
729 Methylphenidate Use by Canadian Children and Adolescents and the Associated Adverse Reactions
Authors: Ming-Dong Wang, Abigail F. Ruby, Michelle E. Ross
Abstract:
Methylphenidate is a first-line treatment for attention deficit hyperactivity disorder (ADHD), a common mental health disorder in children and adolescents. Over the last several decades, the rate of children and adolescents using ADHD medication has been increasing in many countries. A recent study found that the prevalence of ADHD medication use among children aged 3-18 years increased in 13 different world regions between 2001 and 2015, where the absolute increase ranged from 0.02 to 0.26% per year. The goal of this study was to examine the use of methylphenidate by Canadian children and its associated adverse reactions. Methylphenidate use information among young Canadians aged 0-14 years was extracted from IQVIA data on prescriptions dispensed by pharmacies between April 2014 and June 2020. The adverse reaction information associated with methylphenidate use was extracted from the Canada Vigilance database for the same time period. Methylphenidate use trends were analyzed by sex, age group (0-4 years, 5-9 years, and 10-14 years), and geographical location (province). The common classes of adverse reactions associated with methylphenidate use were sorted, and the relative risks associated with methylphenidate use as compared with two second-line amphetamine medications for ADHD were estimated. This study revealed that among Canadians aged 0-14 years, every 100 people used about 25 prescriptions (or 23,000 mg) of methylphenidate per year during the study period, and use increased with time. Boys used almost three times more methylphenidate than girls. The amount of drug used increased with age: Canadians aged 10-14 years used nearly three times as much medication as those aged 5-9 years. Seasonal methylphenidate use patterns were apparent among young Canadians, but the seasonal trends differed among the three age groups. Methylphenidate use varied from region to region, and the highest methylphenidate use was observed in Quebec, where the use of methylphenidate was at least double that of any other province. During the study period, Health Canada received 304 adverse reaction reports associated with the use of methylphenidate for Canadians aged 0-14 years. The number of adverse reaction reports received for boys was 3.5 times higher than that for girls. The three most common adverse reaction classes were psychiatric disorders; nervous system disorders; and injury, poisoning and procedural complications. The most commonly reported adverse reaction for boys was aggression (11.2%), while for girls it was tremor (9.6%). The safety profile in terms of adverse reaction classes associated with methylphenidate use was similar to that of the selected control products. Methylphenidate is a commonly used pharmaceutical product in young Canadians, particularly in the province of Quebec. Boys used approximately three times more of this product than girls. Future investigation is needed to determine what factors are associated with the observed geographic variations in Canada.
Keywords: adverse reaction risk, methylphenidate, prescription trend, use variation
Procedia PDF Downloads 160
728 Forensic Investigation: The Impact of Biometric-Based Solution in Combatting Mobile Fraud
Authors: Mokopane Charles Marakalala
Abstract:
Research shows that mobile fraud grew exponentially in South Africa during the lockdown caused by the COVID-19 pandemic. According to the South African Banking Risk Information Centre (SABRIC), fraudulent online banking and transactions resulted in a sharp increase in cybercrime since the beginning of the lockdown, resulting in a huge loss to the banking industry in South Africa. While the Financial Intelligence Centre Act, 38 of 2001, regulates financial transactions, it is evident that criminals are making use of technology to their advantage. Money-laundering ranks among the major crimes, not only in South Africa but worldwide. This paper focuses on the impact of biometric-based solutions in combatting mobile fraud at the South African Banking Risk Information Centre. SABRIC has faced the challenge of successful mobile fraud: cybercriminals can hijack a mobile device and use it to gain access to sensitive personal data and accounts. Cybercriminals are constantly scouring the depths of cyberspace in search of victims to attack. Millions of people worldwide use online banking to do their regular bank-related transactions quickly and conveniently. This was supported by SABRIC, which regularly highlighted incidents of mobile fraud, corruption, and maladministration; many customers fail to secure their banking online and are vulnerable to falling prey to fraud scams such as mobile fraud. Criminals have made use of digital platforms since the development of technology. In 2017, 13,438 incidents involving banking apps, internet banking, and mobile banking caused the sector to suffer gross losses of more than R250,000,000. The three parties involved are forced to point fingers at one another while the fraudster makes off with the money. Non-probability (purposive) sampling was used in selecting the participants, and data were collected through telephone calls and virtual interviews. The results indicate that there is a relationship between remote online banking and the increase in money-laundering, as the system allows transactions to take place with limited verification processes. This paper highlights the significance of developing prevention mechanisms, capacity development, and strategies for both financial institutions and law enforcement agencies in South Africa to reduce crimes such as money-laundering. The researcher recommends that awareness strategies for bank staff be strengthened through the provision of requisite and adequate training.
Keywords: biometric-based solution, investigation, cybercrime, forensic investigation, fraud, combatting
Procedia PDF Downloads 101
727 Vulnerability Assessment of Groundwater Quality Deterioration Using PMWIN Model
Authors: A. Shakoor, M. Arshad
Abstract:
The utilization of groundwater resources in irrigation has significantly increased during the last two decades due to constrained canal water supplies. More than 70% of the farmers in the Punjab, Pakistan, depend directly or indirectly on groundwater to meet their crop water demands, and hence an unchecked paradigm shift has resulted in aquifer depletion and deterioration. Therefore, comprehensive research was carried out in central Punjab, Pakistan, regarding spatiotemporal variation in groundwater level and quality. Processing MODFLOW for Windows (PMWIN) and MT3D (solute transport) models were used for simulating existing conditions and predicting groundwater level and quality up to 2030. A comprehensive data set of aquifer lithology, canal network, groundwater level, groundwater salinity, evapotranspiration, groundwater abstraction, recharge, etc., was used in PMWIN model development. The model was successfully calibrated and validated with respect to groundwater level for the periods 2003 to 2007 and 2008 to 2012, respectively. The coefficient of determination (R²) and model efficiency (MEF) for the calibration and validation periods were calculated as 0.89 and 0.98, respectively, indicating a high level of agreement between the calculated and measured data. For the solute transport model (MT3D), values of advection and dispersion parameters were used. The model was then run for a future scenario up to 2030, assuming no major change in climate and a gradually increasing groundwater abstraction rate. The model-predicted results revealed that the groundwater level would decline by 0.0131 to 1.68 m/year during 2013 to 2030, and the maximum decline would be on the lower side of the study area, where the canal system infrastructure is sparse. This lowering of the groundwater level might cause an increase in tubewell installation and pumping costs. Similarly, the predicted total dissolved solids (TDS) of the groundwater would increase by 6.88 to 69.88 mg/L/year during 2013 to 2030, and the maximum increase would be on the lower side. It was found that in 2030, good quality water would be reduced by 21.4%, while marginal and hazardous quality water would increase by 19.28% and 2%, respectively. The simulated results showed that the salinity of the study area had increased due to the intrusion of salts. The deterioration of groundwater quality would cause soil salinity and ultimately a reduction in crop productivity. It was concluded from the predicted results of the groundwater model that groundwater quality deteriorated with the depth of the water table, i.e., TDS increased with declining groundwater level. It is recommended that agronomic and engineering practices, i.e., land leveling, rainwater harvesting, skimming wells, ASR (aquifer storage and recovery wells), etc., should be integrated to ameliorate the management of groundwater for higher crop production in salt-affected soils.
Keywords: groundwater quality, groundwater management, PMWIN, MT3D model
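As a pointer for readers, the two goodness-of-fit statistics quoted above (R² and model efficiency) can be computed as in the following Python sketch. The water-level values are placeholders, not data from the study, and MEF is interpreted here as the Nash-Sutcliffe efficiency, which is the usual definition for hydrological models.

```python
# Sketch of the goodness-of-fit statistics used in the PMWIN calibration/validation:
# coefficient of determination (R^2) and Nash-Sutcliffe model efficiency (MEF).
# Water-level values are illustrative placeholders, not study data.
import numpy as np

observed  = np.array([150.2, 149.8, 149.1, 148.6, 148.0])  # m, assumed
simulated = np.array([150.0, 149.9, 149.3, 148.4, 148.1])  # m, assumed

def r_squared(obs, sim):
    r = np.corrcoef(obs, sim)[0, 1]
    return r ** 2

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print("R^2 =", round(r_squared(observed, simulated), 3))
print("MEF =", round(nash_sutcliffe(observed, simulated), 3))
```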
Procedia PDF Downloads 378
726 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems
Authors: Alejandro Adorjan
Abstract:
Universities are motivated to understand the factors contributing to the low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease, and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate Engineering Science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering cite Calculus specifically as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. In order to develop several competencies in our Software Engineering students, Calculus 1 at Universidad ORT Uruguay focuses on capacities such as synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE curricula). Every semester we try to reflect on our practice and to answer the following research question: what kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students to merely transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using GeoGebra (interactive geometry and Computer Algebra System (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in GeoGebra. As a result of this approach, several pros and cons were found. The amount of weekly hours of mathematics was excessive for students, and, as the course was non-compulsory, attendance decreased with time. Nevertheless, this activity succeeded in improving final test results, and most students expressed the pleasure of working with this methodology. This technology-oriented teaching approach strengthens the math competencies students need for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals, with the objective of engaging students, increasing retention, and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer so as to validate the preliminary results.
Keywords: calculus, engineering education, precalculus, summer program
Procedia PDF Downloads 290
725 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low energy consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several attractive features, such as high temporal resolution, which can reach 1 Mframe/s, and a high dynamic range (120 dB). However, the property that can contribute the most to low energy consumption is its sparsity; to be more specific, this sensor only captures the pixels that have an intensity change. In other words, there is no signal in areas without any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other side of these advantages, it is difficult to handle the data because the data format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. In order to overcome the difficulties caused by the data format differences, most prior art builds frame data and feeds it to deep learning models such as convolutional neural networks (CNNs) for object detection and recognition purposes. However, even though the data can be fed this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of an RGB pixel value, it is apparent that polarity information is not rich enough. Considering this context, we proposed to use the timestamp information as the data representation that is fed to deep learning. Concretely, we first build frame data divided by a certain time period, then assign an intensity value according to the timestamp in each frame; for example, a high value is given to a recent signal. We expected that this data representation could capture the features, especially of moving objects, because the timestamp encodes the movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car, in order to develop an application for a surveillance system that can detect persons around the car. We think the DVS is one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a static scene. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. Then, we measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
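A minimal sketch of the timestamp-based representation described above is given below in Python: events inside a time window are accumulated into a single-channel frame whose pixel value grows with recency, so recent motion appears brighter. The array sizes and the linear recency rule are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch: convert sparse DVS events (x, y, polarity, timestamp) within a
# time window into a dense frame whose values encode how recent each event is.
import numpy as np

def events_to_timestamp_frame(events, width, height, t_start, t_end):
    """events: iterable of (x, y, polarity, t); returns a float32 frame in [0, 1]."""
    frame = np.zeros((height, width), dtype=np.float32)
    span = max(t_end - t_start, 1e-9)
    for x, y, _pol, t in events:
        if t_start <= t < t_end:
            # Normalised recency: 0 for the oldest possible event, 1 for the newest.
            frame[y, x] = max(frame[y, x], (t - t_start) / span)
    return frame

# Example: three events in a 5 microsecond window on a tiny 4x4 sensor (assumed values).
evts = [(0, 0, +1, 0.0e-6), (1, 2, -1, 2.5e-6), (3, 3, +1, 4.9e-6)]
print(events_to_timestamp_frame(evts, width=4, height=4, t_start=0.0, t_end=5e-6))
```

Frames built this way can then be stacked and fed to a CNN in place of the polarity-only frames used by the benchmark.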
Procedia PDF Downloads 97
724 Through Additive Manufacturing. A New Perspective for the Mass Production of Made in Italy Products
Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola
Abstract:
The recent evolutions in innovation processes and in the intrinsic tendencies of the product development process lead to new considerations on the design flow. The instability and complexity that characterize contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of additive manufacturing, but also of IoT and AI technologies, continuously puts us in front of new paradigms regarding design as a social activity. The totality of these technologies, from the point of view of application, describes a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of some provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. The contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model; the best-known fields of application are described, and the focus then shifts to specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could engulf many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context, therefore, becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. The trajectory that is described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact has a signature that defines all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The envisaged result indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as valid integrated tools in close relationship with the design culture.
Keywords: decision making, design heuristics, product design, product design process, design paradigms
Procedia PDF Downloads 119
723 Extrudable Foamed Concrete: General Benefits in Prefabrication and Comparison in Terms of Fresh Properties and Compressive Strength with Classic Foamed Concrete
Authors: D. Falliano, G. Ricciardi, E. Gugliandolo
Abstract:
Foamed concrete belongs to the category of lightweight concrete. It is characterized by a density generally ranging from 200 to 2000 kg/m³ and typically comprises cement, water, preformed foam, fine sand, and eventually fine particles such as fly ash or silica fume. The foam component mixed with the cement paste gives rise to the development of a system of air voids in the cementitious matrix. The peculiar characteristics of foamed concrete elements are summarized in the following aspects: 1) lightness, which allows reducing the dimensions of the resisting frame structure and is advantageous in the scope of refurbishment or seismic retrofitting in seismically vulnerable areas; 2) thermal insulating properties, especially in the case of low densities; 3) good resistance against fire as compared to ordinary concrete; 4) improved workability; 5) cost-effectiveness due to the usage of rather simple constituent elements that are easily available locally. Classic foamed concrete cannot be extruded, as dimensional stability in the green state cannot be achieved, and this severely limits the possibility of industrializing it through a simple and cost-effective process characterized by flexibility and high production capacity. In fact, viscosity-enhancing agents (VEAs) used to extrude traditional concrete cause, in the case of foamed concrete, the collapse of the air bubbles, so that it is impossible to extrude a lightweight product. These requirements have suggested the study of a particular additive that modifies the rheology of the foamed concrete fresh paste by increasing cohesion and viscosity and, at the same time, stabilizes the bubbles in the cementitious matrix, in order to allow dimensional stability in the green state and, consequently, the extrusion of a lightweight product. There are plans to submit the additive's formulation for a patent. In addition to the general benefits of using the extrusion process, extrudable foamed concrete allows other limits to be exceeded: elimination of formworks and an expanded application spectrum, due to the possibility of extrusion in a density range varying between 200 and 2000 kg/m³, which allows the prefabrication of both structural and non-structural constructive elements. Besides, this contribution aims to present the significant differences between extrudable and classic foamed concrete fresh properties in terms of slump. Plastic air content, plastic density, hardened density, and compressive strength have also been evaluated. The outcomes show that there are no substantial differences between extrudable and classic foamed concrete compressive strengths.
Keywords: compressive strength, extrusion, foamed concrete, fresh properties, plastic air content, slump
Procedia PDF Downloads 174
722 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data from 2013 to 2016 used in this study are collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure. But the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. Also, the results show that the impact of flood potential differs by the severity and frequency of precipitation. The negative impact for a relatively mild, high-frequency flood potential is stronger than that for a heavy, low-probability flood potential. The result indicates that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study tries to deal with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A body of literature applying GWR indicates that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy application of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses. The effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically weighted regression
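To make the GWR step concrete, the following Python sketch shows the core of a geographically weighted hedonic regression: at each target location, observations are down-weighted with distance through a Gaussian kernel, and a local weighted least-squares fit yields location-specific coefficients. The synthetic data, the bandwidth, and the kernel choice are illustrative assumptions, not the study's specification.

```python
# Sketch of geographically weighted regression (GWR): a local weighted
# least-squares fit around each target location with Gaussian kernel weights.
# All data below are synthetic placeholders, not from the Taipei study.
import numpy as np

def gwr_local_coefficients(X, y, coords, target, bandwidth):
    """X: (n, k) design matrix incl. intercept; coords: (n, 2); target: (2,)."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-(d ** 2) / (2 * bandwidth ** 2))           # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)        # local WLS estimate

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))                    # house locations (assumed)
flood = rng.integers(0, 2, size=n)                          # flood-potential dummy
X = np.column_stack([np.ones(n), flood])
y = 100 - 5 * flood + rng.normal(0, 2, size=n)               # synthetic prices

beta_here = gwr_local_coefficients(X, y, coords, target=np.array([5.0, 5.0]), bandwidth=2.0)
print("local intercept and flood-potential coefficient:", beta_here)
```

Repeating this fit at every sample location produces the spatially varying coefficient surface that motivates the paper's critique of a single, global hedonic price of flood risk.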
Procedia PDF Downloads 290
721 The Sea Striker: The Relevance of Small Assets Using an Integrated Conception with Operational Performance Computations
Authors: Gaëtan Calvar, Christophe Bouvier, Alexis Blasselle
Abstract:
This paper presents the Sea Striker, a compact hydrofoil designed with the goal of addressing some of the issues raised by the recent evolutions of naval missions, threats, and operation theatres in modern warfare. Able to perform a wide range of operations, the Sea Striker is a 40-meter stealth surface combatant equipped with a gas turbine and aft and forward foils to reach high speeds. The Sea Striker's stealthiness is enabled by the combination of a composite structure, the exterior design, and the advanced integration of sensors. The ship is fitted with a powerful and adaptable combat system, ensuring a versatile and efficient response to modern threats. Lightly manned with a core crew of 10, this hydrofoil is highly automated and can be remotely piloted for special forces operations or transit. Such a ship is not new: similar vessels have been used in the past by different navies, for example by the US Navy with the USS Pegasus. Nevertheless, the recent evolutions in science and technology on the one hand, and the emergence of new missions, threats, and operation theatres on the other hand, put forward its concept as an answer to today's operational challenges. Indeed, even if multiple opinions and analyses can be given regarding modern warfare and naval surface operations, general observations and tendencies can be drawn, such as the major increase in sensor and weapon types, ranges, and, more generally, capacities; the emergence of new versatile and evolving threats and enemies, such as asymmetric groups, swarm drones, or hypersonic missiles; and the growing number of operation theatres located in more coastal and shallow waters. This research was performed as a complete study of the ship together with several operational performance computations in order to justify the relevance of using ships like the Sea Striker in naval surface operations. For the selected scenarios, the design process enabled measuring the performance, namely a 'Measure of Efficiency' in the NATO framework, for two different kinds of models: a centralized, classic model using large and powerful ships, and a distributed model relying on several Sea Strikers. After this stage, a comparison of the two models was performed. Lethal, agile, stealthy, compact, and fitted with a complete set of sensors, the Sea Striker is a new major player in modern warfare and constitutes a very attractive option between the large naval unit and the combat helicopter, enabling high operational performance at a reduced cost.
Keywords: surface combatant, compact, hydrofoil, stealth, velocity, lethal
Procedia PDF Downloads 117
720 The Influence of Nutritional and Immunological Status on the Prognosis of Head and Neck Cancer
Authors: Ching-Yi Yiu, Hui-Chen Hsu
Abstract:
Objectives: Head and neck cancer (HNC) is a major global health problem. Despite advances in diagnosis and treatment, the overall survival of HNC is still low. Growing recognition of the interaction between the host immune system and cancer cells has led to a better understanding of the processes of tumor initiation, progression, and metastasis. Many systemic inflammatory responses have been shown to play a crucial role in cancer progression. The pre- and post-treatment nutritional and immunological status of HNC patients is a reliable prognostic indicator of tumor outcome and survival. Methods: Between July 2020 and June 2022, we enrolled 60 HNC patients, including 59 males and 1 female, at Chi Mei Medical Center, Liouying, Taiwan. The age distribution was from 37 to 81 years old (y/o), with a mean age of 57.6 y/o. We evaluated the pre- and post-treatment nutritional and immunological status of these HNC patients using body weight, body weight loss, body mass index (BMI), complete blood count including hemoglobin (Hb) and lymphocyte, neutrophil, and platelet counts, and biochemistry including prealbumin, albumin, and C-reactive protein (CRP), measured before treatment and at three and six months after treatment. We calculated the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) to assess how these biomarkers influence the outcomes of HNC patients. Results: There were 21 cases (35%) of carcinoma of the hypopharynx, 9 cases of carcinoma of the larynx, 6 cases each of carcinoma of the tonsil and tongue, 5 cases each of carcinoma of the soft palate and tongue base, 2 cases each of carcinoma of the buccal mucosa, retromolar trigone, and mouth floor, and 1 case each of carcinoma of the hard palate and lower lip. There were 15 stage I cases, 13 stage II, 6 stage III, 10 stage IVA, and 16 stage IVB. All patients received surgery, chemoradiation therapy, or combined therapy. There were wound infections in 6 cases, pharyngocutaneous (PC) fistula in 2 cases, flap necrosis in 2 cases, and mortality in 6 cases. In the wound infection group, the average BMI was 20.4 kg/m², the average Hb was 12.9 g/dL, the average albumin was 3.5 g/dL, the average NLR was 6.78, and the average PLR was 243.5. In the PC fistula and flap necrosis group, the average BMI was 21.65 kg/m², the average Hb was 11.7 g/dL, the average albumin was 3.15 g/dL, the average NLR was 13.28, and the average PLR was 418.84. In the mortality group, the average BMI was 22.3 kg/m², the average Hb was 13.58 g/dL, the average albumin was 3.77 g/dL, the average NLR was 6.06, and the average PLR was 275.5. Conclusion: HNC is a big and challenging public health problem worldwide, especially in Taiwan, an area with a high prevalence of betel nut consumption. Besides the definite risk factors of smoking, drinking, and betel nut use, other biomarkers may be significant prognosticators of HNC outcomes. We concluded that when the average BMI is less than 22 kg/m², the average Hb is lower than 12.0 g/dL, the average albumin is lower than 3.3 g/dL, the average NLR is higher than 3, and the average PLR is more than 170, surgical complications and mortality will be increased, and the prognosis of HNC patients is poor.
Keywords: nutritional, immunological, neutrophil-to-lymphocyte ratio, platelet-to-lymphocyte ratio
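For clarity, the ratios used above are simple quotients of routine blood counts. The short Python sketch below shows how they are computed (NLR = neutrophil count / lymphocyte count, PLR = platelet count / lymphocyte count, BMI = weight / height²); the example values are illustrative, not patient data from the study.

```python
# Simple sketch of the prognostic indices discussed above. Example counts are
# illustrative placeholders (cell counts in 10^3 cells/uL), not study data.
def nlr(neutrophils: float, lymphocytes: float) -> float:
    return neutrophils / lymphocytes

def plr(platelets: float, lymphocytes: float) -> float:
    return platelets / lymphocytes

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / (height_m ** 2)

print(round(nlr(6.1, 0.9), 2), round(plr(310, 0.9), 1), round(bmi(62, 1.68), 1))
```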
Procedia PDF Downloads 79
719 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, having high-dimensional state and action spaces. This gives rise to two problems: one is that analytic solutions may not be possible; the other is that, in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. Its premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. Despite these strengths, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used to alleviate this overuse of information. That is, information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, in order to find a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using "knows what it knows" (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes. Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, thereby demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, the distinct failure modes found, and the relative effect of learning after a number of trials.
Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
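To make the adversarial grid-world setup concrete, here is a toy single-fidelity sketch in Python (not the authors' bidirectional multi-fidelity KWIK framework): a tabular Q-learning adversary applies one-cell disturbances to a fixed-policy agent and is rewarded when the agent fails to reach its goal within a time budget. Grid size, rewards, and hyperparameters are assumptions for illustration.

```python
# Toy adaptive-stress-testing-style loop: a Q-learning adversary perturbs a
# fixed-policy agent in a grid world and is rewarded for inducing a failure
# (goal not reached before timeout). All settings are illustrative assumptions.
import random

SIZE, GOAL, MAX_T = 5, (4, 4), 20
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]      # adversarial one-cell pushes

def agent_step(pos):
    # Fixed agent policy: move one cell per turn, closing the larger remaining gap first.
    x, y = pos
    gx, gy = GOAL
    if abs(gx - x) >= abs(gy - y) and x != gx:
        return (x + (1 if gx > x else -1), y)
    if y != gy:
        return (x, y + (1 if gy > y else -1))
    return (x, y)

def clip(pos):
    return (min(max(pos[0], 0), SIZE - 1), min(max(pos[1], 0), SIZE - 1))

Q = {}                                            # (state, action index) -> value
ALPHA, GAMMA, EPS = 0.2, 0.95, 0.1

def choose(state):
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q.get((state, a), 0.0))

for episode in range(2000):
    pos = (0, 0)
    for t in range(MAX_T):
        a = choose(pos)
        moved = agent_step(pos)                   # agent acts, then the disturbance is applied
        nxt = clip((moved[0] + ACTIONS[a][0], moved[1] + ACTIONS[a][1]))
        timeout = t == MAX_T - 1
        done = nxt == GOAL or timeout
        reward = 1.0 if (timeout and nxt != GOAL) else 0.0   # adversary wins on timeout
        best_next = 0.0 if done else max(Q.get((nxt, b), 0.0) for b in range(len(ACTIONS)))
        Q[(pos, a)] = Q.get((pos, a), 0.0) + ALPHA * (reward + GAMMA * best_next - Q.get((pos, a), 0.0))
        pos = nxt
        if done:
            break

print("adversarial value at the start state:",
      round(max(Q.get(((0, 0), a), 0.0) for a in range(len(ACTIONS))), 3))
```

In the multi-fidelity extension described in the abstract, the same adversarial loop would run at several time-step sizes, with KWIK learners passing information between the cheap and expensive simulators.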
Procedia PDF Downloads 157
718 Using Lysosomal Immunogenic Cell Death to Target Breast Cancer via Xanthine Oxidase/Micro-Antibody Fusion Protein
Authors: Iulianna Taritsa, Kuldeep Neote, Eric Fossel
Abstract:
Lysosome-induced immunogenic cell death (LIICD) is a powerful mechanism for targeting cancer cells that kills circulating malignant cells and primes the host's immune cells against future recurrence. Current immunotherapies for cancer are limited in preventing recurrence, a gap that can be bridged by training the immune system to recognize cancer neoantigens. Lysosomal leakage can be induced therapeutically to traffic antigens from dying cells to dendritic cells, which can later present those tumorigenic antigens to T cells. Previous research has shown that oxidative agents administered in the tumor microenvironment can initiate LIICD. We generated a fusion protein between an oxidative agent known as xanthine oxidase (XO) and a mini-antibody specific for EGFR/HER2-sensitive breast tumor cells. The anti-EGFR single-domain antibody fragment is uniquely sourced from llama and is functional without the presence of a light chain. These llama micro-antibodies have been shown to penetrate tissues better and to have improved physicochemical stability compared to traditional monoclonal antibodies. We demonstrate that the fusion protein created is stable and can induce early markers of immunogenic cell death in an in vitro human breast cancer cell line (SkBr3). Specifically, we measured overall cell death, as well as surface-expressed calreticulin, extracellular ATP release, and HMGB1 production. These markers are consensus indicators of ICD. Flow cytometry, luminescence assays, and ELISA were used, respectively, to quantify biomarker levels in treated versus untreated cells. We also included a positive control group of SkBr3 cells dosed with doxorubicin (a known inducer of LIICD) and a negative control dosed with cisplatin (a known inducer of cell death, but not of the immunogenic variety). We examined each marker at various time points after the cancer cells were treated with the XO/antibody fusion protein, doxorubicin, and cisplatin. Upregulated biomarkers after treatment with the fusion protein indicate an immunogenic response. We thus show the potential for this fusion protein to induce an anticancer effect paired with an adaptive immune response against EGFR/HER2+ cells. Our research in human cell lines provides evidence supporting the same therapeutic method for patients and serves as a gateway to developing a new treatment approach against breast cancer.
Keywords: apoptosis, breast cancer, immunogenic cell death, lysosome
Procedia PDF Downloads 199
717 Revealing the Nitrogen Reaction Pathway for the Catalytic Oxidative Denitrification of Fuels
Authors: Michael Huber, Maximilian J. Poller, Jens Tochtermann, Wolfgang Korth, Andreas Jess, Jakob Albert
Abstract:
Aside from desulfurization, the denitrogenation of fuels is of great importance for minimizing the environmental impact of transport emissions. The oxidative reaction pathway of organic nitrogen in catalytic oxidative denitrogenation was successfully elucidated; this is the first time such a pathway has been traced in detail in non-microbial systems. It was found that the organic nitrogen is first oxidized to nitrate, which is subsequently reduced to molecular nitrogen via nitrous oxide. Here, the organic substrate serves as the reducing agent. The discovery of this pathway is an important milestone for the further development of fuel denitrogenation technologies. The United Nations aims to counteract global warming with Net Zero Emissions (NZE) commitments; however, it is not yet foreseeable when crude oil-based fuels will become obsolete. In 2021, more than 50 million barrels per day (mb/d) were consumed for the transport sector alone. Above all, heteroatoms such as sulfur and nitrogen produce SO₂ and NOx during combustion in engines, which are harmful not only to the climate but also to health. Therefore, in refineries, these heteroatoms are removed by hydrotreating to produce clean fuels. However, this catalytic reaction is inhibited by the basic, nitrogenous reactants (e.g., quinoline) as well as by NH₃. The nitrogen atom's lone electron pair binds strongly to the active sites of the hydrotreating catalyst, which diminishes its activity. To maximize desulfurization and denitrogenation effectiveness in comparison to extraction and adsorption alone, selective oxidation is typically combined with either extraction or selective adsorption. The selective oxidation produces more polar compounds that can be removed from the non-polar oil in a separate step. The extraction step can also be carried out in parallel to the oxidation reaction, as a result of in situ separation of the oxidation products (ECODS; extractive catalytic oxidative desulfurization). In this process, H₈PV₅Mo₇O₄₀ (HPA-5) is employed as a homogeneous polyoxometalate (POM) catalyst in an aqueous phase, whereas the sulfur-containing fuel components are oxidized, after diffusion from the organic fuel phase into the aqueous catalyst phase, to form highly polar products such as H₂SO₄ and carboxylic acids, which are thereby extracted from the organic fuel phase and accumulate in the aqueous phase. In contrast to the inhibiting properties of the basic nitrogen compounds in hydrotreating, the oxidative desulfurization in this system improves with simultaneous denitrogenation (ECODN; extractive catalytic oxidative denitrogenation). The reaction pathway of ECODS has already been well studied. In contrast, the oxidation of nitrogen compounds in ECODN is not yet well understood and requires more detailed investigation. Keywords: oxidative reaction pathway, denitrogenation of fuels, molecular catalysis, polyoxometalate
Procedia PDF Downloads 180
716 The Recognition of Exclusive Choice of Court Agreements: United Arab Emirates Perspective and the 2005 Hague Convention on Choice of Court Agreements
Authors: Hasan Alrashid
Abstract:
The 2005 Hague Convention seeks to ensure legal certainty and predictability between parties in international business transactions. It harmonises exclusive choice of court agreements at the international level between parties to commercial transactions and governs the recognition and enforcement of judgments resulting from proceedings based on such agreements, in order to promote international trade and investment. Although choice of court agreements are significant in international business transactions, the United Arab Emirates refuses to recognise them under Article 24 of Federal Law No. 11 of 1992 of the Civil Procedure Code. A review of judicial judgments in the United Arab Emirates up to the present day reveals that several cases regarding the recognition of exclusive choice of court agreements have appeared before the courts in different emirates of the United Arab Emirates. In all of these cases, the courts regarded exclusive choice of court agreements as a direct assault on state authority and sovereignty and categorically refused to recognise them, declining to stay proceedings in favour of the chosen foreign court. This has created uncertainty and unpredictability in international business transactions in the United Arab Emirates. In June 2011, the first Gulf Judicial Seminar on Cross-Frontier Legal Cooperation in Civil and Commercial Matters was held in Doha, Qatar. The Permanent Bureau of the Hague Conference attended the conference and invited the states of the Gulf Cooperation Council (GCC), namely the United Arab Emirates, Bahrain, Saudi Arabia, Oman, Qatar and Kuwait, to adopt some of the Hague Conventions, one of which was the Hague Convention on Choice of Court Agreements. One of the recommendations of the conference was that the GCC states should research ‘the benefits of predictability and legal certainty provided by the 2005 Convention on Choice of Court Agreements and its resulting advantages for cross-border trade and investment’ with a view to possible adoption of the Hague Convention. To date, no further step has been taken by any of the GCC states to adopt the Hague Convention, nor have they conducted research on the benefits of predictability and legal certainty in international business transactions. This paper will argue that the approach to the recognition of choice of court agreements in the United Arab Emirates can be improved in order to help parties in international business transactions avoid parallel litigation and ensure legal certainty and predictability. The focus will be on the uncertainty and gaps regarding choice of court agreements in the United Arab Emirates. The Hague Convention on Choice of Court Agreements and the importance of harmonisation of the rules on choice of court agreements at the international level will also be discussed. Finally, the feasibility and desirability of recognising choice of court agreements in the United Arab Emirates legal system by becoming a party to the Hague Convention will be evaluated. Keywords: choice of court agreements, party autonomy, public authority, sovereignty
Procedia PDF Downloads 246
715 The Correspondence between Self-regulated Learning, Learning Efficiency and Frequency of ICT Use
Authors: Maria David, Tunde A. Tasko, Katalin Hejja-Nagy, Laszlo Dorner
Abstract:
The authors have been concerned with research on learning since 1998. Recently, the focus of our interest has been how the prevalent use of information and communication technology (ICT) influences students' learning abilities, skills of self-regulated learning, and learning efficiency. Nowadays, there are three dominant theories about the psychological effects of ICT use: according to social optimists, modern ICT devices have a positive effect on thinking; according to social pessimists, this effect is rather negative; and, in the view of biological optimists, the change is obvious, but these changes can fit into mankind's evolved neurological system, as writing did long ago. The mentality of 'digital natives' differs from that of older people: they process information coming from the outside world in another way, and different experiences result in different cerebral conformation. In this regard, researchers report both positive and negative effects of ICT use. According to several studies, it has a positive effect on cognitive skills, intelligence, school efficiency, the development of self-regulated learning, and self-esteem regarding learning. It has also been shown that computers improve skills of visual intelligence such as spatial orientation, iconic skills and visual attention. Among the negative effects of frequent ICT use, researchers mention a decrease in critical thinking, as a permanent flow of information does not give scope for deeper cognitive processing. The aims of our present study were to uncover the developmental characteristics of self-regulated learning in different age groups and to study the correlations between learning efficiency, the level of self-regulated learning, and the frequency of computer use. Our subjects (N=1600) were primary and secondary school students and university students. We studied four age groups (ages 10, 14, 18, and 22), with 400 subjects in each. We used the following methods: the research team developed a questionnaire for measuring the level of self-regulated learning and a questionnaire for measuring ICT use, and we used documentary analysis to gain information about grade point average (GPA) and the results of competence measures. Finally, we used computer tasks to measure cognitive abilities. The data are currently under analysis, but according to our preliminary results, frequent use of computers results in shorter response times in every age group. Our results show that an ordinary extent of ICT use tends to increase reading competence and has a positive effect on students' abilities, though it did not show a relationship with school marks (GPA). As time passes, GPA worsens as the learning material becomes more and more difficult. This phenomenon draws attention to the fact that students are unable to switch from guided to independent learning, so it is important to consciously develop skills of self-regulated learning. Keywords: digital natives, ICT, learning efficiency, reading competence, self-regulated learning
Procedia PDF Downloads 361
714 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine
Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland
Abstract:
The force-velocity (F-V) profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty of obtaining force data at high velocities, may bring into question the accuracy of the F-V slope, along with predictions of the maximum force that the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as of V₀, has been shown to be relatively poor in comparison to F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of a novel instrumented leg press machine which enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, was evaluated. Sixteen male strength-trained individuals (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg. During visits two to four, participants carried out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to the dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring force- and velocity-related variables across a range of loads, including velocities closer to V₀, when compared to some of the findings within the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for the F-V slope (SFV) and F₀, respectively, with reliability for V₀ being good using a linear model but poor using a 2nd-order polynomial model. A polynomial regression model may therefore be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, given only a 5% difference between F₀ and the obtained IsoMax values, with a linear model being best suited to predicting V₀. Keywords: force-velocity, leg-press, power-velocity, profiling, reliability
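As a worked illustration of the profiling described above (not the authors' analysis code), the sketch below fits both a linear and a 2nd-order polynomial force-velocity model to hypothetical mean force and velocity data from a ballistic leg press, then derives F₀ (force at zero velocity), V₀ (velocity at zero force), the F-V slope, and the maximal power predicted by the linear model. All numbers are invented for illustration.

```python
import numpy as np

# Hypothetical mean force (N) and mean velocity (m/s) across loads from ~90% down to ~10% 1RM
force    = np.array([2400.0, 2150.0, 1750.0, 1150.0, 250.0])
velocity = np.array([0.35, 0.60, 0.95, 1.40, 2.05])

# Linear model: F(v) = F0 - (F0 / V0) * v
slope, intercept = np.polyfit(velocity, force, 1)
F0_lin = intercept                 # predicted force at zero velocity
V0_lin = -intercept / slope        # predicted velocity at zero force
Pmax_lin = F0_lin * V0_lin / 4.0   # apex of the parabolic power-velocity curve (linear model only)

# 2nd-order polynomial model: F(v) = c2*v^2 + c1*v + c0
c2, c1, c0 = np.polyfit(velocity, force, 2)
F0_poly = c0                                      # force at v = 0
roots = np.roots([c2, c1, c0])                    # velocities where predicted force is zero
V0_poly = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

print(f"linear:     F0 = {F0_lin:6.0f} N, V0 = {V0_lin:.2f} m/s, "
      f"slope = {slope:6.0f} N/(m/s), Pmax = {Pmax_lin:.0f} W")
print(f"polynomial: F0 = {F0_poly:6.0f} N, V0 = {V0_poly:.2f} m/s")
```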
Procedia PDF Downloads 58
713 Investigating the Influences of Long-Term, as Compared to Short-Term, Phonological Memory on the Word Recognition Abilities of Arabic Readers vs. Arabic Native Speakers: A Word-Recognition Study
Authors: Insiya Bhalloo
Abstract:
It is quite common in the Muslim faith for non-Arabic speakers to be able to convert written Arabic, especially Quranic Arabic, into a phonological code without significant semantic or syntactic knowledge. This is due to prior experience learning to read the Quran (a religious text written in Classical Arabic) from a very young age, such as via enrolment in Quranic Arabic classes. Compared to native speakers of Arabic, these Arabic readers do not have comprehensive morpho-syntactic knowledge of the Arabic language, nor can they understand or engage in Arabic conversation. The study seeks to investigate whether mere phonological experience (as indicated by the Arabic readers' experience with Arabic phonology and the sound system) is sufficient to cause phonological interference during word recognition of previously heard words, despite the participants' non-native status. Both native speakers of Arabic and non-native speakers of Arabic, i.e., those individuals who learned to read the Quran from a young age, will be recruited. Each experimental session will include two phases: an exposure phase and a test phase. During the exposure phase, participants will be presented with Arabic words (n=40) on a computer screen. Half of these words will be common words found in the Quran, while the other half will be words commonly found in Modern Standard Arabic (MSA) but either non-existent in, or prevalent at a significantly lower frequency within, the Quran. During the test phase, participants will then be presented with both familiar Arabic words (n = 20; i.e., those words presented during the exposure phase) and novel Arabic words (n = 20; i.e., words not presented during the exposure phase). Half of these presented words will be common Quranic Arabic words, and the other half will be common MSA words that are not Quranic words. Moreover, half of the Quranic Arabic and MSA words presented will be nouns, while half will be verbs, thereby eliminating word-processing issues affected by lexical category. Participants will then determine whether they saw each word during the exposure phase. This study seeks to investigate whether long-term phonological memory, such as via childhood exposure to Quranic Arabic orthography, has a differential effect on the word-recognition capacities of native Arabic speakers and Arabic readers; we seek to compare the effects of long-term phonological memory with those of short-term phonological exposure (as indicated by the presentation of familiar words from the exposure phase). The researcher's hypothesis is that, despite the lack of lexical knowledge, early experience with converting written Quranic Arabic text into a phonological code will help participants recall the familiar Quranic words that appeared during the exposure phase more accurately than those that were not presented during the exposure phase. Moreover, it is anticipated that the non-native Arabic readers will also report more false alarms to the unfamiliar Quranic words, due to early childhood phonological exposure to Quranic Arabic script, thereby causing false phonological facilitatory effects. Keywords: Modern Standard Arabic, phonological facilitation, phonological memory, Quranic Arabic, word recognition
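A small Python sketch of the stimulus design described above (the word pools are placeholders, not the actual Arabic items): 40 exposure words are drawn evenly from the four cells (Quranic/MSA crossed with noun/verb), and a test list of 20 familiar plus 20 novel words is built so that it stays balanced across the same cells.

```python
import random

def build_lists(quranic_nouns, quranic_verbs, msa_nouns, msa_verbs, seed=0):
    """Build exposure (n=40) and test (n=40: 20 familiar + 20 novel) lists,
    balanced across word source (Quranic vs. MSA) and lexical category (noun vs. verb)."""
    rng = random.Random(seed)
    exposure, familiar, novel = [], [], []
    for pool in (quranic_nouns, quranic_verbs, msa_nouns, msa_verbs):
        shown = rng.sample(pool, 10)                   # 10 exposure items per cell -> 40 total
        exposure += shown
        familiar += rng.sample(shown, 5)               # 5 familiar test items per cell -> 20
        unseen = [w for w in pool if w not in shown]
        novel += rng.sample(unseen, 5)                 # 5 novel test items per cell -> 20
    test = familiar + novel
    rng.shuffle(test)
    return exposure, test

# Demo with placeholder tokens standing in for the real Arabic words
placeholder = lambda tag: [f"{tag}{i:02d}" for i in range(15)]
exposure, test = build_lists(placeholder("qn"), placeholder("qv"),
                             placeholder("mn"), placeholder("mv"))
print(len(exposure), len(test))   # 40 40
```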
Procedia PDF Downloads 357
712 Reaching the Goals of Routine HIV Screening Programs: Quantifying and Implementing an Effective HIV Screening System in Northern Nigeria Facilities Based on Optimal Volume Analysis
Authors: Folajinmi Oluwasina, Towolawi Adetayo, Kate Ssamula, Penninah Iutung, Daniel Reijer
Abstract:
Objective: Routine HIV screening has been promoted as an essential component of efforts to reduce incidence, morbidity, and mortality. The objectives of this study were to identify the optimal annual volume needed to realize the public health goals of HIV screening in the AIDS Healthcare Foundation-supported hospitals and to establish an implementation process to realize that optimal annual volume. Methods: Starting in 2011, a program was established to routinize HIV screening within communities and government hospitals. In 2016, five years of HIV screening data were reviewed to identify the optimal annual proportions of age-eligible patients screened to realize the public health goals of reducing new diagnoses and ending late-stage diagnosis (tracked as concurrent HIV/AIDS diagnosis). Analysis demonstrated that rates of new diagnoses level off when 42% of age-eligible patients are screened, providing a baseline for routine screening efforts, and that concurrent HIV/AIDS diagnoses reach statistical zero at screening rates of 70%. Annual facility-based targets were restructured to meet these new target volumes. Restructuring efforts focused on right-sizing HIV screening programs to align and transition programs to integrated HIV screening within standard medical care and treatment. Results: Over one million patients were screened for HIV during the five years; 16,033 new HIV diagnoses were made, access to care and treatment was successfully established for 82% (13,206), and concurrent diagnosis rates went from 32.26% to 25.27%. While screening rates increased by 104.7% over the five years, the volume analysis demonstrated that rates need to increase by a further 62.52% to reach the desired 20% baseline and to more than double to reach the optimal annual screening volume. In 2011, facility targets for HIV screening were increased to reflect the volume analysis, and in that third year, 12 of the 19 facilities reached or exceeded the new baseline targets. Conclusions and Recommendation: Quantifying targets against routine HIV screening goals identified the optimal annual screening volume and allowed facilities to scale their program size and allocate resources accordingly. The program transitioned from utilizing non-evidence-based annual volume increases to establishing annual targets based on optimal volume analysis. This has allowed efforts to be evaluated on their ability to realize quantified goals related to the public health value of HIV screening. Optimal volume analysis helps to determine the size of an HIV screening program; it is a public health tool, not a tool to determine whether an individual patient should receive screening. Keywords: HIV screening, optimal volume, HIV diagnosis, routine
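A small worked example of the volume analysis described above, using a hypothetical facility's current coverage (the 26% figure is invented): it computes the percentage growth in annual screening volume needed to reach the 42% baseline at which new diagnoses levelled off and the 70% level at which concurrent HIV/AIDS diagnoses reached statistical zero, assuming a constant age-eligible population.

```python
def required_increase(current_coverage: float, target_coverage: float) -> float:
    """Percent growth in annual screening volume needed to reach a target coverage
    of age-eligible patients, assuming the age-eligible population stays constant."""
    return (target_coverage / current_coverage - 1.0) * 100.0

current = 0.26  # hypothetical: the facility currently screens 26% of age-eligible patients
targets = [(0.42, "baseline (new diagnoses level off)"),
           (0.70, "optimal volume (concurrent diagnoses reach statistical zero)")]

for coverage, label in targets:
    growth = required_increase(current, coverage)
    print(f"to reach {coverage:.0%} {label}: screening volume must grow by {growth:.1f}%")
```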
Procedia PDF Downloads 263
711 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches
Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang
Abstract:
Unilateral laminotomy and bilateral laminotomies are decompression methods for managing spinal stenosis that numerous studies have reported to be successful. Unilateral laminotomy, however, has been rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies have been associated with the benefit of reducing complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. However, no comparative biomechanical analysis has evaluated spinal stability after treatment with unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the different decompression methods by experimental and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion, and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension in the following conditions: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens in this study were tested in flexion (8 Nm) and extension (6 Nm) under pure moment, and spinal segment kinematic data were captured using a motion tracking system. A 3D finite element lumbar spine model (L1–S1) containing the vertebral bodies, discs, and ligaments was constructed. This model was used to simulate treatment with unilateral and bilateral laminotomies at L3–L4 and L4–L5. The bottom surface of the S1 vertebral body was fully geometrically constrained, and a 10 Nm pure moment was applied on the top surface of the L1 vertebral body to drive the lumbar spine through the different motions, namely flexion and extension. The experimental results showed that in flexion, the ROMs (± standard deviation) of L3–L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomy conditions, respectively. The ROMs of L4–L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively. No statistically significant difference was observed among these three groups (P>0.05). In extension, the ROMs of L3–L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively. In L4–L5, the ROMs were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. The simulation produced results similar to the experiment: no significant differences were found at L4–L5 in either flexion or extension in any group, and only 0.02 and 0.04 degrees of variation were observed during flexion and extension, respectively, between the unilateral and bilateral laminotomy groups. In conclusion, the present experimental and finite element results reveal that no significant differences were observed during flexion and extension between unilateral and bilateral laminotomies in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit stability similar to that of unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulties and prevent perioperative complications; this study supports this benefit through biomechanical analysis. The results may provide some recommendations for surgeons to make the final decision. Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis
Procedia PDF Downloads 399
710 Pharmacovigilance in Hospitals: Retrospective Study at the Pharmacovigilance Service of UHE-Oran, Algeria
Authors: Nadjet Mekaouche, Hanane Zitouni, Fatma Boudia, Habiba Fetati, A. Saleh, A. Lardjam, H. Geniaux, A. Coubret, H. Toumi
Abstract:
Medicines have undeniably played a major role in prolonging life expectancy and improving quality of life. While the efficacy of a drug remains a lever for innovation, its benefit/risk balance is not always assured, and it does not always have the expected effects. Prior to marketing, knowledge about adverse drug reactions is incomplete. Once a drug is on the market, phase IV studies begin: for years, the drug is prescribed with less scrutiny to a large number of very heterogeneous patients, often in combination with other drugs. It is at this point that previously unknown adverse effects may appear, hence the need for the implementation of a pharmacovigilance system. Pharmacovigilance encompasses all methods for detecting, evaluating, informing about and preventing the risks of adverse drug reactions. The most severe adverse events occur frequently in hospital, and a significant proportion of adverse events result in hospitalizations. In addition, the consequences of hospital adverse events in terms of length of stay, mortality and costs are considerable. It therefore appears necessary to develop 'hospital pharmacovigilance' aimed at reducing the incidence of adverse reactions in hospitals. The most widely used monitoring method in pharmacovigilance is spontaneous notification. However, underreporting of adverse drug reactions is common in many countries and is a major obstacle to pharmacovigilance assessment. It is in this context that this study aims to describe the experience of the pharmacovigilance service at the University Hospital of Oran (EHUO). This is a retrospective study extending from 2011 to 2017, carried out on the archived records of declarations collected by the EHUO Pharmacovigilance Department. Reports were collected by two methods: 'spontaneous notification' and 'active pharmacovigilance' targeting certain clinical services. We counted 217 declarations, involving 56% female patients and 46% male patients. Age ranged from 5 to 78 years, with an average of 46 years. The most common adverse reaction was drug-induced toxiderma. The drugs most often in question were, according to the ATC classification, anti-infectives, followed by anticancer drugs. As regards the evolution of declarations by year, a low rate of notification was noted in 2011. That is why we decided to set up an active approach in some services, in which a referent resident attended staff meetings every week. This resulted in an increase in the number of reports, and the declarations came essentially from the services where the active approach was implemented. This highlights the need for ongoing communication between all relevant health actors to stimulate reporting and secure drug treatments. Keywords: adverse drug reactions, hospital, pharmacovigilance, spontaneous notification
Procedia PDF Downloads 175
709 Effect of Fermented Orange Juice Intake on Urinary 6‑Sulfatoxymelatonin in Healthy Volunteers
Authors: I. Cerrillo, A. Carrillo-Vico, M. A. Ortega, B. Escudero-López, N. Álvarez-Sánchez, F. Martín, M. S. Fernández-Pachón
Abstract:
Melatonin is a bioactive compound involved in multiple biological activities such as glucose tolerance, circadian rhythm regulation, antioxidant defense and immune system action. In elderly subjects, the intake of foods and drinks rich in melatonin is very important because its endogenous level decreases with age. Alcoholic fermentation is a process carried out in fruits, vegetables and legumes to obtain new products with an improved bioactive compound profile in relation to the original substrates. The alcoholic fermentation process carried out by Saccharomycetaceae var. Pichia kluyveri induces an important synthesis of melatonin in orange juice. A novel beverage derived from fermented orange juice could therefore be a promising source of this bioactive compound. The aim of the present study was to determine whether the acute intake of fermented orange juice increases the levels of urinary 6-sulfatoxymelatonin in healthy humans. Nine healthy volunteers (7 women and 2 men), aged between 20 and 25 years and with a BMI of 21.1 ± 2.4 kg/m², were recruited. On the study day, participants ingested 500 mL of fermented orange juice. The first urine collection was made before fermented orange juice consumption (basal). The remaining urine collections were made in the following time intervals after fermented orange juice consumption: 0-2, 2-5, 5-10, 10-15 and 15-24 hours. During the experimental period, only the consumption of water was allowed. At lunch time, a meal was provided (60 g of white bread, two slices of ham, a slice of cheese, 125 g of sweetened natural yoghurt and water). The subjects repeated the protocol with orange juice following a 2-week washout period between both types of beverages. The levels of 6-sulfatoxymelatonin (6-SMT) were measured in urine collected at the different time points using the Melatonin-Sulfate Urine ELISA (IBL International GMBH, Hamburg, Germany). Levels of 6-SMT were corrected to those of creatinine for each sample. A significant (p < 0.05) increase in urinary 6-SMT levels was observed between 2-5 hours after fermented orange juice ingestion with respect to basal values (an increase of 67.8%). The consumption of orange juice did not induce any significant change in urinary 6-SMT levels. In addition, urinary 6-SMT levels obtained between 2-5 hours after fermented orange juice ingestion (115.6 ng/mg) were significantly different (p < 0.05) from those of orange juice (42.4 ng/mg). The enhancement of urinary 6-SMT after the ingestion of 500 mL of fermented orange juice in healthy humans, compared to orange juice, could be an important advantage of this novel product as an excellent source of melatonin. Fermented orange juice could be a new functional food, and its consumption could exert a potentially positive effect on health, both in the maintenance of health status and in the prevention of chronic diseases. Keywords: fermented orange juice, functional beverage, healthy human, melatonin
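A short worked example of the creatinine correction used above; the urine values are hypothetical and only of a similar order of magnitude to the reported results: each sample's 6-SMT concentration is divided by its creatinine concentration, and the 2-5 h interval is then expressed as a percentage change over the basal sample.

```python
def corrected_6smt(smt_ng_per_ml: float, creatinine_mg_per_ml: float) -> float:
    """Creatinine-corrected 6-SMT in ng per mg of creatinine."""
    return smt_ng_per_ml / creatinine_mg_per_ml

# Hypothetical raw urine values for one volunteer
basal    = corrected_6smt(smt_ng_per_ml=48.0, creatinine_mg_per_ml=0.70)   # ~68.6 ng/mg
post_2_5 = corrected_6smt(smt_ng_per_ml=92.5, creatinine_mg_per_ml=0.80)   # ~115.6 ng/mg

print(f"basal: {basal:.1f} ng/mg, 2-5 h: {post_2_5:.1f} ng/mg, "
      f"increase: {(post_2_5 / basal - 1.0) * 100.0:.1f}%")
```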
Procedia PDF Downloads 405
708 Industrial Hemp Agronomy and Fibre Value Chain in Pakistan: Current Progress, Challenges, and Prospects
Authors: Saddam Hussain, Ghadeer Mohsen Albadrani
Abstract:
Pakistan is one of the countries most vulnerable to climate change. In a country where 23% of GDP relies on agriculture, this is a serious cause for concern. Introducing industrial hemp in Pakistan can help build climate resilience in the agricultural sector, as hemp has recently emerged as a sustainable, eco-friendly, resource-efficient, and climate-resilient crop globally. Hemp has the potential to absorb huge amounts of CO₂, nourish the soil, and be used to create various biodegradable and eco-friendly products. Hemp is twice as effective as trees at absorbing and locking up carbon, with 1 hectare (2.5 acres) of hemp reckoned to absorb 8 to 22 tonnes of CO₂ a year, more than any woodland. Along with its high carbon-sequestration ability, it produces higher biomass and can be successfully grown as a cover crop. Hemp can grow in almost all soil conditions and does not require pesticides. It has fast-growing qualities and needs only 120 days to be ready for harvest. Compared with cotton, hemp requires 50% less water to grow and can produce three times higher fiber yield with a lower ecological footprint. Recently, the Government of Pakistan has allowed the cultivation of industrial hemp for industrial and medicinal purposes, making it possible for hemp to be reinserted into the country's economy. Pakistan's agro-climatic and edaphic conditions are well suited to the production of industrial hemp, and its cultivation can bring economic benefits to the country. Pakistan can enter global markets as a new exporter of hemp products. The production of hemp in Pakistan can be most exciting for the workforce, especially for farmers participating in hemp markets. The minimal production cost of hemp makes it affordable to smallholder farmers, especially those who need their cropping system to be as sustainable as possible. Dr. Saddam Hussain is leading the first pilot project on industrial hemp in Pakistan. In the past three years, he has secured high-impact research grants on industrial hemp as Principal Investigator. He has already screened non-toxic hemp genotypes, tested the adaptability of exotic material in various agroecological conditions, formulated the production agronomy, and successfully developed the complete value chain. He has developed prototypes (fabric, denim, knitwear) using hemp fibre in collaboration with industrial partners and has optimized indigenous fibre processing techniques. In this lecture, Dr. Hussain will talk about hemp agronomy and its complete fibre value chain. He will discuss current progress and highlight the major challenges and future directions for hemp research. Keywords: industrial hemp, agricultural sustainability, agronomic evaluation, hemp value chain
Procedia PDF Downloads 81
707 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in Cubesat and Satellites
Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan
Abstract:
All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Due to heat absorption, transfer, and emission on one side, the satellite surface presents an asymmetric temperature distribution, which causes a change in momentum that can manifest in spinning and non-spinning satellites in different ways. This problem can cause orbital decay in satellites which, if not corrected, will interfere with their primary objective. The thermal analysis of any satellite requires data from the power budget for each of the components used, because each component has different power requirements and is used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating of the satellite, and the way in which it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun will absorb heat (a net gain), and any facing away will radiate heat (a net loss). We can use a state-of-the-art foldable hybrid insulator/radiator panel: when the panels are opened, that particular side acts as a radiator for dissipating heat. Here the insulator, in our case aerogel, is sandwiched between solar cells and radiator fins (solar cells outside and radiator fins inside). Each insulated side panel can be opened and closed using actuators depending on the telemetry data of the CubeSat. The opening and closing of the panels depend on code designed for this particular application, in which the onboard computer calculates where the Sun is relative to the satellite. According to the data obtained from the sensors, the computer decides which panel to open and by how many degrees; for example, if a panel opens 180 degrees, its solar cells will directly face the Sun, in turn increasing the current generated by that particular panel. One example is when one of the corners of the CubeSat is facing the Sun, or when more than one side has a considerable amount of sunlight incident on it; the code will then determine the optimum opening angle for each panel and adjust accordingly. Another means of cooling is passive cooling. It is the most suitable system for a CubeSat because of its limited power budget, low mass requirements, and less complex design. Beyond this, it also has advantages in terms of reliability and cost. One of the passive means is to make the whole chassis act as a heat sink: the entire chassis can be made of heat pipes, and the heat source can be connected to it with a thermal strap that transfers the heat to the chassis. Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite
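A minimal, hypothetical sketch of the panel-actuation logic described above (not flight code): given a unit sun vector in the CubeSat body frame, e.g. from sun sensors, each face's incidence angle determines whether its insulator/radiator panel is commanded open and by how many degrees. The face normals, the 60-degree threshold, and the simple proportional opening rule are all illustrative assumptions.

```python
import math

FACE_NORMALS = {
    "+X": (1, 0, 0), "-X": (-1, 0, 0),
    "+Y": (0, 1, 0), "-Y": (0, -1, 0),
    "+Z": (0, 0, 1), "-Z": (0, 0, -1),
}

def sun_incidence_deg(normal, sun_unit):
    """Angle between a face normal and the unit sun vector (0 deg = Sun directly overhead)."""
    dot = sum(n * s for n, s in zip(normal, sun_unit))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def panel_commands(sun_unit, open_threshold_deg=60.0, max_open_deg=180.0):
    """Open panels on sunlit faces in proportion to how directly they face the Sun;
    keep panels on shaded faces closed."""
    commands = {}
    for face, normal in FACE_NORMALS.items():
        incidence = sun_incidence_deg(normal, sun_unit)
        if incidence < open_threshold_deg:
            # 0 deg incidence -> fully open (180 deg), so the solar cells face the Sun squarely
            commands[face] = max_open_deg * (1.0 - incidence / open_threshold_deg)
        else:
            commands[face] = 0.0
    return commands

# Example: Sun illuminating two faces at once (a corner-on case); the vector is already unit length
sun_unit = (0.8, 0.6, 0.0)
for face, angle in panel_commands(sun_unit).items():
    print(f"{face}: open {angle:5.1f} deg")
```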
Procedia PDF Downloads 100