Search results for: real cases
7492 Applications of Forensics/DNA Tools in Combating Gender-Based Violence: A Case Study in Nigeria
Authors: Edeaghe Ehikhamenor, Jennifer Nnamdi
Abstract:
Introduction: Gender-based violence (GBV) was a well-known global crisis before the COVID-19 pandemic, and the pandemic burden only intensified it. With prevailing lockdowns, increased poverty due to high unemployment that especially affects females, and mobility restrictions that left many women trapped with their abusers and isolated from social contact and support networks, GBV cases spiraled out of control. The prevalence of economic and cultural disparity, which is strongly manifested in Nigeria, is a major contributory factor to GBV. This is made worse by religious practices in which females are virtually relegated to the background. Our societal approaches to investigation and to sanctions against culprits have not sufficiently applied forensic/DNA tools in combating these major vices. Violence against women, or in rare cases against men, can prevent them from carrying out their duties regardless of the position they hold. Objective: The main objective of this research is to highlight the origin of GBV, its victims, types, and contributing factors, and the applications of forensic/DNA tools and remedies, so as to minimize GBV in our society. Methods: Descriptive information was obtained through searches of daily newspapers, electronic media, and Google Scholar, together with other authors' observations, personal experiences, and anecdotal reports. Results: Findings from our exploratory searches revealed a high incidence of GBV with very limited or no application of forensic/DNA tools as an intervening mechanism to reduce GBV in Nigeria. Conclusion: Nigeria needs to develop clear-cut policies on forensic/DNA tools, in terms of an institutional framework and a curriculum for training all stakeholders, to fast-track justice for victims of GBV and serve as a deterrent to other culprits.
Keywords: gender-based violence, forensics, DNA, justice
Procedia PDF Downloads 84
7491 Comparative Study of Succinylation and Glutarylation of Jute Fiber: Study of Mechanical Properties of Modified Fiber Reinforced Epoxy Composites
Authors: R. Vimal, K. Hari Hara Subramaniyan, C. Aswin, B. Logeshwaran, M. Ramesh
Abstract:
Due to several environmental concerns, natural fibers have largely replaced synthetic fibers as reinforcing material in polymer matrix composites. Among natural fibers, jute fibers are the most abundant plant fibers and are produced mainly in countries like India. In recent years, the modification of plant fibers with a range of chemicals to improve various mechanical and thermal properties has received great attention. Among these treatments, some plant fibers have been modified using succinic anhydride. In the present study, jute fibers were modified chemically by treatment with succinic anhydride and glutaric anhydride at concentrations of 5%, 10%, 20%, 30% and 40%. The fiber modification was done under retting conditions at retention times of 3, 6, 12, 24, 36, and 48 hours. The modification of the fiber structure in both cases was confirmed with infrared spectroscopy. The degree of modification increases with retention time, but in both cases longer retention times damaged the fiber structure. Comparatively, treatment of fibers with glutaric anhydride gave a more efficient output than succinic anhydride. The unmodified fibers, succinylated fibers, and glutarylated fibers at different retention times were reinforced with epoxy matrix at various fiber volume fractions at room temperature. The composite made using unmodified fiber was used as the standard material. The tensile strength and flexural strength of the composites were analyzed in detail. Among these, the composite made with glutarylated fiber showed good mechanical properties compared to those made of succinylated and unmodified fiber.
Keywords: flexural strength, glutarylation, jute fibers, succinylation, tensile strength
Procedia PDF Downloads 508
7490 Sex Difference of the Incidence of Sudden Cardiac Arrest/Death in Athletes: A Systematic Review and Meta-analysis
Authors: Lingxia Li, Frédéric Schnell, Shuzhe Ding, Solène Le Douairon Lahaye
Abstract:
Background: The risk of sudden cardiac arrest/death (SCA/D) in athletes is controversial, and there is a lack of meta-analyses assessing sex differences in the risk of SCA/D in competitive athletes. Purpose: The aim of the present study was to evaluate sex differences in the incidence of SCA/D in competitive athletes using meta-analysis. Methods: The systematic review was registered in the PROSPERO database (registration ID: CRD42023432022) and was conducted according to the PRISMA guidelines. PubMed, Embase, Scopus, SPORTDiscus and the Cochrane Library were searched up to July 2023. To avoid systematic bias in data pooling, only studies with data for both sexes were included. Results: From the 18 included studies, 2028 cases of SCA/D were observed (males 1821 (89.79%), females 207 (10.21%)). Ages ranged from adolescents (<26 years) to the elderly (>45 years). The incidence in male athletes was 1.32/100,000 athlete-years (AY) (95% CI: [0.90, 1.93]) and in females 0.26/100,000 AY (95% CI: [0.16, 0.43]); the incidence rate ratio (IRR) was 6.43 (95% CI: [4.22, 9.79]). Subgroup synthesis showed a higher incidence in males than in females in both the <25 years and ≤35 years age groups, with IRRs of 5.86 (95% CI: [4.69, 7.32]) and 5.79 (95% CI: [4.73, 7.09]), respectively. When considering the events, the IRR was 6.73 (95% CI: [3.06, 14.78]) among studies involving both SCA and SCD events and 7.16 (95% CI: [4.93, 10.40]) among studies including only cases of SCD. The available clinical evidence showed that cardiac events were most frequently seen in long-distance running races (26, 35.1%), marathons (16, 21.6%) and soccer (10, 13.5%). Coronary artery disease (14, 18.9%), hypertrophic cardiomyopathy (8, 10.8%), and arrhythmogenic right ventricular cardiomyopathy (7, 9.5%) were the most common causes of SCA/D in competitive athletes. Conclusion: This meta-analysis provides evidence of sex differences in the incidence of SCA/D in competitive athletes. The incidence of SCA/D in male athletes was 6 to 7 times higher than in females. Identifying the reasons for this difference may have implications for the targeted prevention of fatal events in athletes.
Keywords: incidence, sudden cardiac arrest, sudden cardiac death, sex difference, athletes
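The incidence rate ratio and its confidence interval reported above can be reproduced from raw counts with the standard log-transform approximation. The sketch below uses hypothetical event counts and athlete-years (not the study's pooled data, which require the per-study exposures) to illustrate the calculation.

```python
import math

def incidence_rate_ratio(events_a, time_a, events_b, time_b, z=1.96):
    """Return (IRR, lower, upper) for two Poisson rates using the
    log-transform normal approximation: SE(log IRR) = sqrt(1/a + 1/b)."""
    irr = (events_a / time_a) / (events_b / time_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lower = math.exp(math.log(irr) - z * se_log)
    upper = math.exp(math.log(irr) + z * se_log)
    return irr, lower, upper

# Hypothetical example: 60 male events over 2,000,000 athlete-years
# versus 10 female events over the same exposure.
irr, lo, hi = incidence_rate_ratio(60, 2_000_000, 10, 2_000_000)
print(f"IRR = {irr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# -> IRR = 6.00, 95% CI [3.07, 11.72]
```

The confidence interval is asymmetric around the point estimate because it is computed on the log scale, which is why the pooled IRRs in the abstract have CIs like [4.22, 9.79] around 6.43.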
Procedia PDF Downloads 64
7489 Numerical Simulation of Different Configurations for a Combined Gasification/Carbonization Reactors
Authors: Mahmoud Amer, Ibrahim El-Sharkawy, Shinichi Ookawara, Ahmed Elwardany
Abstract:
Gasification and carbonization are two of the most common routes for biomass utilization. Both processes consume part of the waste in order to proceed: by incomplete combustion in gasification, and as fuel for heating in carbonization. The focus of this paper is to minimize the part of the waste that is used for heating the biomass for gasification and carbonization. This is achieved by combining the gasifier and the carbonization reactor in a single unit, so that the heat in the product biogas is utilized to heat the waste in the carbonization reactor. Three different designs are proposed for the combined gasification/carbonization (CGC) reactor. These include a parallel combination of two gasifiers and a carbonizer; a gasifier, carbonizer and combustion chamber; and a single gasifier, carbonizer and combustion chamber. They are tested numerically using the ANSYS Fluent computational fluid dynamics package to ensure homogeneity of the temperature distribution inside the carbonization part of the CGC reactor. 2D simulations are performed for the three cases after establishing mesh-size and time-step independent solutions. The carbonization part is common among the three cases; the difference among them is how this carbonization reactor is heated. The simulation results showed that the first design could provide only a partially homogeneous temperature distribution, not across the whole reactor. This means that the produced carbonized biomass will be reduced, as it will only fill a specified height of the reactor. To keep the carbonized product yield high, a series combination is proposed. This series configuration resulted in a uniform temperature distribution across the whole reactor, as it has only one heat source and no temperature gradient on any surface of the carbonization section. The simulations provided a satisfactory result: either the parallel combination of gasifier and carbonization reactor can be used with a reduced carbonized amount, or a series configuration can be used to keep the production rate high.
Keywords: numerical simulation, carbonization, gasification, biomass, reactor
Procedia PDF Downloads 102
7488 A Rare Case of Dissection of Cervical Portion of Internal Carotid Artery, Diagnosed Postpartum
Authors: Bidisha Chatterjee, Sonal Grover, Rekha Gurung
Abstract:
Postpartum dissection of the internal carotid artery is a relatively rare condition and is considered an underlying aetiology in 5% to 25% of strokes under the age of 30 to 45 years. However, 86% of these cases recover completely and 14% have mild focal neurological symptoms. Prognosis is generally good with early intervention. The quoted risk of a repeat carotid artery dissection in subsequent pregnancies is less than 2%. A 36-year-old Caucasian primipara presented on postnatal day one after a forceps delivery with tachycardia. In the intrapartum period she had a history of prolonged rupture of membranes, developed intrapartum sepsis, and was treated with antibiotics. Postpartum ECG showed septal and inferior T-wave inversion and a troponin level of 19. An echocardiogram subsequently ruled out postpartum cardiomyopathy. Repeat ECG showed improvement of the previous changes, and in the absence of symptoms no intervention was warranted. On day 4 post-delivery, she developed a droopy right eyelid, pain around the right eye and itching in the right ear. On examination, she had right-sided ptosis and unequal pupils (right miotic pupil). Cranial nerve examination, reflexes, sensory examination and muscle power were normal. Apart from migraine, there was no medical or family history of note. In view of the right-sided Horner's syndrome, she had a CT angiogram and subsequently MRI/MRA and was diagnosed with dissection of the cervical portion of the right internal carotid artery. She was discharged on a course of aspirin 75 mg. By the 6-week postnatal follow-up, the patient had recovered significantly, with occasional episodes of unequal pupils and tingling of the right toes that resolved spontaneously. Cervical artery dissection, including vertebral artery dissection (VAD) and carotid artery dissection, is a rare complication of pregnancy, with an estimated annual incidence of 2.6-3 per 100,000 pregnancy hospitalizations. The aetiology remains unclear, though trauma from straining during labour, underlying arterial disease and preeclampsia have been implicated. The hypercoagulable state during pregnancy and the puerperium could also be an important factor. 60-90% of cases present with severe headache and neck pain, which generally precede neurological symptoms such as ipsilateral Horner's syndrome, retroorbital pain, tinnitus and cranial nerve palsy. Although rare, the consequences of delayed diagnosis and management can be severe and permanent neurological deficits. Patients with a strong index of suspicion should undergo MRI or MRA of the head and neck. Antithrombotic and antiplatelet therapy forms the mainstay of treatment, with selected cases needing endovascular stenting. The long-term prognosis is favourable, with either complete resolution or minimal deficit if treatment is prompt. Patients should be counselled about the recurrence risk and the possibility of stroke in a future pregnancy. Carotid artery dissection is rare and treatable but needs early diagnosis and treatment. Postpartum headache and neck pain with neurological symptoms should prompt urgent imaging followed by antithrombotic and/or antiplatelet therapy. Most cases resolve completely or with minimal sequelae.
Keywords: postpartum, dissection of internal carotid artery, magnetic resonance angiogram, magnetic resonance imaging, antiplatelet, antithrombotic
Procedia PDF Downloads 98
7487 The Influence of Environmental Factors on Honey Bee Activities: A Quantitative Analysis
Authors: Hung-Jen Lin, Chien-Hao Wang, Chien-Peng Huang, Yu-Sheng Tseng, En-Cheng Yang, Joe-Air Jiang
Abstract:
Bees' incoming and outgoing behavior is a decisive index of the health condition of a colony. Traditional methods for monitoring the behavior of honey bees (Apis mellifera) are time-consuming and highly labor-intensive, and the lack of automation and synchronization prevents researchers and beekeepers from obtaining real-time information about beehives. To solve these problems, this study proposes an Internet of Things (IoT)-based system for counting honey bees' incoming and outgoing activities using an infrared interruption technique, while environmental factors are recorded simultaneously. The accuracy of the established system is verified by comparing the counting results with the outcomes of manual counting. Moreover, this highly accurate device is appropriate for providing quantitative information regarding honey bees' incoming and outgoing behavior. Different statistical analysis methods, including one-way ANOVA and two-way ANOVA, are used to investigate the influence of environmental factors, such as temperature, humidity, illumination and ambient pressure, on the bees' incoming and outgoing behavior. With the real-time data, a standard model is established from the relationship between environmental factors and the bees' incoming and outgoing behavior. In the future, smart control systems, such as a temperature control system, can also be combined with the proposed system to create an appropriate colony environment. It is expected that the proposed system will make a considerable contribution to apiculture and research.
Keywords: ANOVA, environmental factors, honey bee, incoming and outgoing behavior
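The one-way ANOVA mentioned above partitions the variance of the bee counts into a between-group and a within-group component. The following sketch computes the F statistic from scratch for counts grouped by a single environmental factor; the numbers are invented for illustration, not measurements from the study.

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for a list of groups
    (each group a list of observations)."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    # Between-group sum of squares and degrees of freedom
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    df_between = len(groups) - 1
    # Within-group sum of squares and degrees of freedom
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_within = len(all_obs) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Invented counts of outgoing bees per minute in two temperature bands
cool = [1, 2, 3]
warm = [3, 4, 5]
print(one_way_anova_f([cool, warm]))  # -> 6.0
```

A large F relative to the F distribution with (df_between, df_within) degrees of freedom indicates that the factor (here, temperature band) explains a significant share of the variation in activity.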
Procedia PDF Downloads 368
7486 Creativity in the Dark: A Qualitative Study of Cult Members' Battle between True and False Self in Heterotopia
Authors: Shirly Bar-Lev, Michal Morag
Abstract:
Cults are usually thought of as suppressive organizations where creativity is systematically stifled. Except for a few scholars, creativity in cults remains an uncharted terrain (Boeri and Pressley, 2010). This paper focuses on how cult members sought real and imaginary spaces to express themselves and even used their bodies as canvases on which to assert their individuality, resistance, devotion, pain, and anguish. We contend that cult members' creativity paves their way out of the cult. This paper is part of a larger study into the experiences of former members of cults and cult-like New Religious Movements (NRMs). The research is based on in-depth interviews conducted with thirty Israeli men and women, aged 24 to 50, who either joined an NRM or were born into one. Their stories reveal that creativity is both emplaced and embedded in power relations. That is why Foucault's idea of heterotopia and Winnicott's idea of the battle between the true and false self can benefit our understanding of how cult members creatively assert autonomy over their bodies and thoughts while in the cult. Cults operate on a complex tension between submission and autonomy. On the one hand, they act as heterotopias by allowing for a 'simultaneous mythic and real contestation of the space in which we live'. As counter-hegemonic sites, they serve as 'the greatest reserve of the imagination', to use Foucault's words. Cults certainly possess elements of mystery, danger, and transgression where an alternative social ordering can emerge. On the other hand, cults are set up to format alternative identities. Often, the individuals who inhabit these spaces look for spiritual growth, self-reflection, and self-actualization. They might willingly relinquish autonomy over vast aspects of their lives in pursuit of self-improvement. In any case, cults claim the totality of their members' identities and demand absolute commitment and compliance with the cult's regimes. It therefore begs the question of how the paradox between autonomy and submission can spur instances of creativity. How can cult members escape processes of performative regulation to assert their creative self? Both Foucault and Winnicott recognize the possibility of an authentic self, one that is spontaneous and creative. Both recognize that only the true self can feel real and must never comply. Both note the disciplinary regimes that push the true self into hiding, as well as the social and psychological mechanisms that individuals develop to protect their true self. But while Foucault spoke of the power of critique as a way of salvaging the true self, Winnicott spoke of recognition and empathy, of feeling known by others. Inviting a dialogue between the two theorists can yield a productive discussion of how cult members assert their 'true self' and cultivate a creative self within the confines of the cult.
Keywords: cults, creativity, heterotopia, true and false self
Procedia PDF Downloads 88
7485 Medical Imaging Fusion: A Teaching-Learning Simulation Environment
Authors: Cristina Maria Ribeiro Martins Pereira Caridade, Ana Rita Ferreira Morais
Abstract:
The use of computational tools has become essential in the context of interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, as it has integrated applications with healthcare facilities and hospitals. The aim of this article is to present a teaching-learning simulation tool, developed in MATLAB with a graphical user interface, for medical image fusion that explores different image fusion methodologies and processes in combination with image pre-processing techniques. The application applies different algorithms and medical fusion techniques in real time, allowing users to view original and fused images, compare processed and original images, adjust parameters, and save images. The proposed tool offers an innovative teaching-learning environment: a dynamic and motivating simulation through which biomedical engineering students acquire knowledge of medical image fusion techniques and the skills necessary for the training of biomedical engineers. In conclusion, the developed simulation tool provides real-time visualization of the original and fused images and the possibility to test, evaluate and advance students' knowledge of medical image fusion. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and/or create new functions, making the simulation environment adaptable to new techniques and methodologies.
Keywords: image fusion, image processing, teaching-learning simulation tool, biomedical engineering education
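One of the simplest fusion rules such a tool can demonstrate is pixel-wise weighted averaging of two co-registered images. The sketch below (in Python rather than the MATLAB used by the authors, and with invented 2x2 images) shows the idea; more elaborate fusion methods build on the same pixel-wise framing.

```python
def weighted_average_fusion(img_a, img_b, alpha=0.5):
    """Fuse two equally sized grayscale images (nested lists of
    intensities) by the pixel-wise rule F = alpha * A + (1 - alpha) * B."""
    return [
        [alpha * a + (1 - alpha) * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

# Two invented 2x2 grayscale images standing in for co-registered scans
scan_a = [[0.0, 0.2], [0.4, 0.6]]
scan_b = [[1.0, 0.2], [0.0, 0.6]]
print(weighted_average_fusion(scan_a, scan_b))
```

With alpha = 0.5 each fused pixel is the plain average of the two inputs; the simulation environment described above lets students vary such parameters and observe the effect on the fused image in real time.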
Procedia PDF Downloads 132
7484 Outcome of Naive SGLT2 Inhibitors Among ICU Admitted Acute Stroke with T2DM Patients: A Prospective Cohort Study in NC Multispecialty Hospital, Biratnagar, Nepal
Authors: Birendra Kumar Bista, Rhitik Bista, Prafulla Koirala, Lokendra Mandal, Nikrsh Raj Shrestha, Vivek Kattel
Abstract:
Introduction: Poorly controlled diabetes is associated with both the occurrence and poor outcome of stroke. High blood sugar reduces cerebral blood flow and increases intracranial pressure, cerebral edema and neuronal death, especially among patients with poorly controlled diabetes [1]. SGLT2 inhibitors are associated with a 50% reduction in hemorrhagic stroke compared with placebo. SGLT2 inhibitors decrease cardiovascular events by reducing glucose, blood pressure, weight, arteriosclerosis, albuminuria and atrial fibrillation [2,3]. No study has been documented in low-income countries on the role of post-stroke SGLT2 inhibitors in diabetic patients at and after ICU admission. Aims: The aim of the study was to measure the 12-month outcome of diabetic patients with acute stroke admitted to an ICU with naïve SGLT2 inhibitor add-on therapy. Method: It was a prospective cohort study carried out in a 250-bed tertiary neurology care hospital in the province capital Biratnagar, Nepal. Diabetic patients with acute stroke admitted to the ICU from 1st January 2022 to 31st December 2022 who were not under SGLT2 inhibitors were included in the study. These patients were managed as per hospital protocol. Empagliflozin was added for alternate enrolled patients and was continued at discharge and during follow-up unless contraindicated. These patients were followed up for 12 months. Outcomes measured were mortality, morbidity requiring readmission or hospital visits other than regular follow-up, SGLT2 inhibitor-related adverse events, neuropsychiatric comorbidity, functional status and biochemical parameters. Ethical permission was taken from the hospital administration and ethical board. Results: Among 147 diabetic cases, 68 were not treated with empagliflozin, whereas 67 were started on the SGLT2 inhibitor. The HbA1c level and one-year mortality were significantly lower in the empagliflozin arm. Over the 12-month period, 427 acute stroke patients were admitted to the ICU. Of these, 44% were female, 61% hypertensive, 34% diabetic, 57% dyslipidemic and 26% smokers, with a median age of 45 years. Among the 427 cases, 4% required neurosurgical interventions and 76% had hemorrhagic CVA. The most common reason for ICU admission was GCS < 8 (51%). The median ICU stay was 5 days. ICU mortality was 21%, whereas 1-year mortality was 41%, with the most common cause being pneumonia. Empagliflozin-related adverse effects were seen in 11%, most commonly lower urinary tract infection (6%). Conclusion: Empagliflozin can safely be started in patients with acute stroke, with better HbA1c control and lower mortality compared to treatment without an SGLT2 inhibitor.
Keywords: diabetes, ICU, mortality, SGLT2 inhibitors, stroke
Procedia PDF Downloads 60
7483 Comparison of Phenotypic Traits of Three Arabian Horse Strains
Authors: Saria Almarzook, Monika Reissmann, Gudrun Brockmann
Abstract:
Due to its history, its occurrence in different ecosystems and its diverse uses, the modern horse (Equus caballus) shows large variability in size, appearance, behavior and habits. At all times, breeders have tried to create groups (breeds, strains) representing high homogeneity but showing clear differences from other groups. There is great interest in analyzing phenotypic and genetic traits to assess the real diversity and genetic uniqueness of Arabian horses in Syria. Ninety Arabian horses from the governmental research center for Arabian horses in Damascus were included. The horses represent three strains (Kahlawi, Saklawi, Hamdani) originating from different geographical zones. They were raised on the same farm, under stable conditions. Twelve phenotypic traits were measured: wither height (WH), croup width (CW), croup height (CH), neck girth (NG), thorax girth (TG), chest girth (ChG), chest depth (ChD), chest width (ChW), back line length (BLL), body length (BL), fore cannon length (FCL) and hind cannon length (HCL). The horses were divided into groups according to age (less than 2 years, 2-4 years, 4-9 years, over 9 years) and sex (male, female). The statistical analyses show that age has a significant influence on WH, while strain has only a very limited effect. On CW, NG, BLL, FCL and HCL, there is only a significant influence of sex. Age has a significant effect on CH and BL. All classification factors have a significant effect on TG, ChG, ChD and ChW. Strain has a significant effect on BL. These results provide first information on the real biodiversity within and between the strains and can be used to develop the breeding work in the Arabian horse breed.
Keywords: Arabian horse, phenotypic traits, strains, Syria
Procedia PDF Downloads 391
7482 Applying Kinect on the Development of a Customized 3D Mannequin
Authors: Shih-Wen Hsiao, Rong-Qi Chen
Abstract:
In the field of fashion design, the 3D mannequin is an assisting tool that can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. Thus, it is critical to develop a 3D mannequin module that corresponds to the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. Ergonomic measurements of objective human features can be attained in real time through the depth camera of the Kinect, and mesh morphing can then be implemented by transforming the locations of the control points on the model according to those ergonomic data, to obtain an exclusive 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are revised for accuracy and smoothed, a complete human feature is reconstructed by the ICP (iterative closest point) algorithm together with image processing methods. The objective human feature can then be recognized, analyzed and measured. Furthermore, the ergonomic measurements can be applied to shape morphing of the divisions of the 3D mannequin reconstructed by feature curves. Because a standardized and customer-oriented 3D mannequin can be generated by subdivision, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. In order to examine the practicality of the research structure, a 3D mannequin system was constructed with a JAVA program in this study. Through experimental revision, a practical research result was obtained.
Keywords: 3D mannequin, Kinect scanner, iterative closest point, shape morphing, subdivision
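The control-point mesh morphing described above can be sketched as follows: each mesh vertex is displaced by an inverse-distance-weighted blend of the control-point displacements. This is a simplified stand-in for the paper's actual morphing scheme; the function name, 2D coordinates, and weighting exponent are assumptions for illustration.

```python
def morph_vertices(vertices, controls, displacements, power=2.0):
    """Displace each (x, y) vertex by an inverse-distance-weighted
    average of the control-point displacements."""
    morphed = []
    for vx, vy in vertices:
        weights, num_x, num_y = 0.0, 0.0, 0.0
        snapped = None
        for (cx, cy), (dx, dy) in zip(controls, displacements):
            d2 = (vx - cx) ** 2 + (vy - cy) ** 2
            if d2 == 0.0:          # vertex coincides with a control point
                snapped = (vx + dx, vy + dy)
                break
            w = 1.0 / d2 ** (power / 2)
            weights += w
            num_x += w * dx
            num_y += w * dy
        morphed.append(snapped if snapped is not None else
                       (vx + num_x / weights, vy + num_y / weights))
    return morphed

controls = [(0.0, 0.0), (10.0, 0.0)]        # two measured landmarks
displacements = [(1.0, 0.0), (0.0, 0.0)]    # landmark 1 moves right by 1
print(morph_vertices([(0.0, 0.0), (5.0, 0.0)], controls, displacements))
# -> [(1.0, 0.0), (5.5, 0.0)]
```

A vertex lying on a control point follows that point exactly, while intermediate vertices are interpolated, which is the behavior needed when ergonomic measurements drive the control points of the mannequin mesh.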
Procedia PDF Downloads 306
7481 Can (E-)Mentoring Be a Tool for the Career of Future Translators?
Authors: Ana Sofia Saldanha
Abstract:
The answer is yes. Globalization is changing the translation world day after day, year after year. The need to know more about new technologies, clients, companies, project management and social networks is becoming more and more demanding and increasingly competitive. The great majority of recently graduated translators do not know where to go, what to do or even whom to contact to start their careers in translation. It is well known that there are innumerable webinars, books, blogs and webpages with the so-called 'tips to become a professional translator', indicating, for example, what to do, what not to do, rates, what your resume should look like, etc. But are these pieces of advice coming from real translators? Translators who work daily with clients, who understand their demands, requests, questions? As far as today's trends go, the answer is no. Most of these pieces of advice are just theoretical and come from 'brilliant minds' who are more interested in spreading their word and winning 'likes' and becoming, in some way, important people in some area. Mentoring is, indeed, a highly important tool to help and guide new translators starting their careers. Effective and well-oriented mentoring is a powerful way to orient these translators on how to create their resumes, where to send them, how to approach clients, how to answer emails and how to negotiate rates in an efficient way. Mentoring is a crucial tool, and even some kind of 'psychological trigger' when properly delivered by professional and experienced translators, to help in the career development they aim for. The advice and orientation sessions, which can be done 100% online, using Skype for example, are almost a 'weapon' to destroy the barriers created by opinions, influences or even universities. This new orientation trend is the future path for new translators and the future of the translation industry and its professionals, and universities must update their way of approaching the real translation world. Therefore, minds and spirits need to be opened and engaged in this new trend of developing skills.
Keywords: mentoring, orientation, professional follow-up, translation
Procedia PDF Downloads 115
7480 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor
Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes
Abstract:
In the literature, data-driven approaches for mosquito abundance prediction rely on supervised machine learning models that are trained with historical in-situ measurements. The counterpart of this approach is that once the model is trained on point-level (specific x, y coordinates) measurements, the predictions of the model again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early-warning and mitigation applications need predictions for an area level, such as a municipality or village. In this study, we apply a data-driven predictive model, which relies on public open satellite Earth observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on random spatial sampling of the area of interest (similar to a Poisson hard-core process), obtaining the Earth observation and geomorphological information for each sample, making the point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level predictions, and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) for two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent. The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent one area has a positive effect on the performance, in contrast to the raw number of sampling points, which is not informative regarding the performance without the size of the area. Additionally, we saw that the distance between the sampling points and the real in-situ measurements used for training did not strongly affect the performance.
Keywords: mosquito abundance, supervised machine learning, Culex pipiens, spatial sampling, West Nile virus, Earth observation data
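The point-to-area aggregation step can be sketched as: draw random sample locations inside the area, run the point-level predictor at each, and average. In the sketch below, the predictor is a dummy function standing in for the trained model, and the area is a simple bounding rectangle rather than a real municipality polygon (and plain uniform sampling replaces the hard-core process).

```python
import random

def area_level_prediction(point_predictor, bbox, n_samples, rng):
    """Estimate the mean abundance over an area by averaging the
    point-level predictions at uniformly sampled locations."""
    xmin, ymin, xmax, ymax = bbox
    total = 0.0
    for _ in range(n_samples):
        x = rng.uniform(xmin, xmax)
        y = rng.uniform(ymin, ymax)
        total += point_predictor(x, y)
    return total / n_samples

# Dummy point-level predictor standing in for the trained ML model;
# its true mean over the unit square is 1.0.
def dummy_predictor(x, y):
    return x + y

rng = random.Random(42)
estimate = area_level_prediction(dummy_predictor, (0.0, 0.0, 1.0, 1.0), 10_000, rng)
print(round(estimate, 2))
```

With a fixed area, increasing the sampling density shrinks the Monte Carlo error of the estimate, which matches the abstract's observation that sample density, rather than the raw point count, drives area-level performance.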
Procedia PDF Downloads 148
7479 Autoimmune Diseases Associated with Primary Biliary Cirrhosis: A Retrospective Study of 51 Patients
Authors: Soumaya Mrabet, Imen Akkari, Amira Atig, Elhem Ben Jazia
Abstract:
Introduction: Primary biliary cirrhosis (PBC) is a cholestatic cholangitis of unknown etiology. It is frequently associated with autoimmune diseases, which explains their systematic screening. The aim of our study was to determine the prevalence and the type of autoimmune disorders associated with PBC and to assess their impact on the prognosis of the disease. Material and methods: It is a retrospective study over a period of 16 years (2000-2015) including all patients followed for PBC. In all these patients we have systematically researched: dysthyroidism (thyroid balance, antithyroid autoantibodies), type 1 diabetes, dry syndrome (ophthalmologic examination, Schirmer test and lip biopsy in case of Presence of suggestive clinical signs), celiac disease(celiac disease serology and duodenal biopsies) and dermatological involvement (clinical examination). Results: Fifty-one patients (50 women and one men) followed for PBC were collected. The Mean age was 54 years (37-77 years). Among these patients, 30 patients(58.8%) had at least one autoimmune disease associated with PBC. The discovery of these autoimmune diseases preceded the diagnosis of PBC in 8 cases (26.6%) and was concomitant, through systematic screening, in the remaining cases. Autoimmune hepatitis was found in 12 patients (40%), defining thus an overlap syndrome. Other diseases were Hashimoto's thyroiditis (n = 10), dry syndrome (n = 7), Gougerot Sjogren syndrome (n=6), celiac disease (n = 3), insulin-dependent diabetes (n = 1), scleroderma (n = 1), rheumatoid arthritis (n = 1), Biermer Anemia (n=1) and Systemic erythematosus lupus (n=1). The two groups of patients with PBC with or without associated autoimmune disorders were comparable for bilirubin levels, Child-Pugh score, and response to treatment. Conclusion: In our series, the prevalence of autoimmune diseases in PBC was 58.8%. These diseases were dominated by autoimmune hepatitis and Hashimoto's thyroiditis. 
Even if their association does not seem to alter the prognosis, screening should be systematic in order to institute early and adequate management. Keywords: autoimmune diseases, autoimmune hepatitis, primary biliary cirrhosis, prognosis
Procedia PDF Downloads 276
7478 Synthetic Data-Driven Prediction Using GANs and LSTMs for Smart Traffic Management
Authors: Srinivas Peri, Siva Abhishek Sirivella, Tejaswini Kallakuri, Uzair Ahmad
Abstract:
Smart cities and intelligent transportation systems rely heavily on effective traffic management and infrastructure planning. This research tackles the data scarcity challenge by generating realistic synthetic traffic data from the PeMS-Bay dataset, enhancing predictive modeling accuracy and reliability. Advanced techniques like TimeGAN and GaussianCopula are utilized to create synthetic data that mimics the statistical and structural characteristics of real-world traffic. The future integration of Spatial-Temporal Generative Adversarial Networks (ST-GAN) is anticipated to capture both spatial and temporal correlations, further improving data quality and realism. Each synthetic data generation model's performance is evaluated against real-world data to identify the most effective models for accurately replicating traffic patterns. Long Short-Term Memory (LSTM) networks are employed to model and predict complex temporal dependencies within traffic patterns. This holistic approach aims to identify areas with low vehicle counts, reveal underlying traffic issues, and guide targeted infrastructure interventions. By combining GAN-based synthetic data generation with LSTM-based traffic modeling, this study facilitates data-driven decision-making that improves urban mobility, safety, and the overall efficiency of city planning initiatives. Keywords: GAN, long short-term memory (LSTM), synthetic data generation, traffic management
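As a hedged illustration of the LSTM side of such a pipeline (not the authors' code, and with invented numbers), the sketch below shows the sliding-window step that turns a univariate traffic-count series into the supervised (input window, next value) pairs an LSTM forecaster typically trains on.

```python
# Illustrative sketch: windowing a traffic-count series into training pairs.
# The counts, window size, and function name are assumptions for the example.
def make_windows(series, window):
    """Return (inputs, targets): each input is `window` consecutive readings,
    and the target is the reading that immediately follows."""
    inputs, targets = [], []
    for i in range(len(series) - window):
        inputs.append(series[i:i + window])
        targets.append(series[i + window])
    return inputs, targets

counts = [120, 135, 150, 160, 155, 140, 130, 125]  # e.g., vehicles per 5 min
X, y = make_windows(counts, window=3)
# X[0] == [120, 135, 150] and y[0] == 160
```

An LSTM would then be fit on these pairs; the same windowing applies unchanged whether the series is real PeMS-Bay data or GAN-generated synthetic data, which is what makes the two interchangeable for training.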
Procedia PDF Downloads 14
7477 Real Estate Trend Prediction with Artificial Intelligence Techniques
Authors: Sophia Liang Zhou
Abstract:
For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and have largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three time-lag situations: no lag, a 6-month lag, and a 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model the real estate price using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods of imputing missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also found that the technique used to impute missing values and the implementation of a time lag can significantly influence model performance and require further investigation. 
The best-performing models varied for each area, but the backfilled 12-month-lag LR models and the interpolated no-lag ANN models showed the most stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies in identifying the optimal time lag and data imputation methods for establishing accurate predictive models. Keywords: linear regression, random forest, artificial neural network, real estate price prediction
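A minimal sketch of the time-lag framing, assuming invented data: the price index at month t is regressed on a macroeconomic feature observed at month t - lag, using plain least squares on a single feature (the study's models use 12 features; this is only the one-variable skeleton).

```python
# Hedged sketch of lagged single-feature OLS; all numbers are illustrative.
def ols(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def lagged_pairs(feature, target, lag):
    """Pair feature[t - lag] with target[t]."""
    return feature[:len(feature) - lag], target[lag:]

income = [50, 51, 52, 53, 54, 55]        # hypothetical monthly feature
index = [100, 102, 104, 106, 108, 110]   # hypothetical price index
x, y = lagged_pairs(income, index, lag=2)
slope, intercept = ols(x, y)
```

The same pairing step, with lag = 0, 6, or 12, is what distinguishes the three time-lag situations the study compares.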
Procedia PDF Downloads 103
7476 Barrier to Implementing Public-Private Mix Approach for Tuberculosis Case Management in Nepal
Authors: R. K. Yadav, S. Baral, H. R. Paudel, R. Basnet
Abstract:
The Public-Private Mix (PPM) approach is a strategic initiative that involves engaging all private and public healthcare providers in the fight against tuberculosis using international healthcare standards. For tuberculosis control in Nepal, the PPM approach could be a milestone. This study aimed to explore the barriers to the public-private mix approach in the management of tuberculosis cases in Nepal. A total of 20 respondents participated in the study. Barriers to PPM were identified under three themes: (1) obstacles related to TB case detection, (2) obstacles related to patients, and (3) obstacles related to the healthcare system. PPM implementation was challenged by the following subthemes: staff turnover, low private-sector participation in workshops, a lack of training, poor recording and reporting, insufficient joint monitoring and supervision, poor financial benefit, lack of coordination and collaboration, and non-supportive TB-related policies and strategies. The study concludes that numerous barriers stand in the way of effective implementation of the PPM approach, including TB case detection barriers such as knowledge of TB diagnosis and treatment, health workers' attitudes, and workload; patient-related barriers such as knowledge of TB, self-medication practice, stigma and discrimination, and financial status; and health-system-related barriers such as staff turnover and poor engagement of the private sector in workshops, training, recording, and re-evaluation. Government stakeholders must work together with private-sector stakeholders to perform joint monitoring and supervision. Private practitioners should receive training and orientation, and presumptive TB patients should be given adequate time and counseling, as well as motivation to visit a government health facility. Keywords: barrier, tuberculosis, case finding, PPM, Nepal
Procedia PDF Downloads 110
7475 Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV
Authors: Maria Pavlova
Abstract:
Nowadays, it is possible to mount a camera on different vehicles such as quadcopters, trains, and airplanes. The camera can also serve as the input sensor in many different systems, which means that object recognition, as an inseparable part of monitoring and control, can be a key part of most intelligent systems. The aim of this paper is to focus on the object recognition process during vehicle movement. During the vehicle's movement, the camera takes pictures of the environment without storing them in a database. In case the camera detects a special object (for example, a human or an animal), the system saves the picture and sends it to the workstation in real time. This functionality is very useful in emergency or security situations where it is necessary to find a specific object. In another application, the camera can be mounted at a crossroad with few pedestrians; if one or more persons approach the road, the traffic light turns green and they can cross. This paper presents a system that solves the aforementioned problems. The architecture of the object recognition system is presented, including the camera, the Raspberry platform, a GPS system, a neural network, software, and a database. The camera in the system takes the pictures. The object recognition is done in real time using the OpenCV library on the Raspberry microcontroller. An additional feature of this system is the ability to record the GPS coordinates of each captured object's position. The results of this processing are sent to a remote station, so the location of the specific object can be known. Using a neural network, the module can learn to solve problems from incoming data and become part of a bigger intelligent system. The present paper focuses on the design and integration of image recognition as a part of smart systems. Keywords: camera, object recognition, OpenCV, Raspberry
Procedia PDF Downloads 218
7474 Estimation of Constant Coefficients of Bourgoyne and Young Drilling Rate Model for Drill Bit Wear Prediction
Authors: Ahmed Z. Mazen, Nejat Rahmanian, Iqbal Mujtaba, Ali Hassanpour
Abstract:
In oil and gas well drilling, the drill bit is an important part of the Bottom Hole Assembly (BHA), which is installed and designed to drill and produce a hole by several mechanisms. The efficiency of the bit depends on many drilling parameters such as weight on bit, rotary speed, and mud properties. When the bit is pulled out of the hole, the evaluation of the bit damage must be recorded very carefully to guide engineers in selecting bits for further planned wells. Drilling a hole with a worn bit may cause severe damage to the bit, leading to cutter or cone losses at the bottom of the hole, where a fishing job will have to take place; all of these increase the operating cost. The main way to reduce the cost of the drilling operation is to maximize the rate of penetration by analyzing real-time data to predict drill bit wear while drilling. There are numerous models in the literature for prediction of the rate of penetration based on drilling parameters, mostly based on empirical approaches. One of the most commonly used approaches is the Bourgoyne and Young model, in which the rate of penetration can be estimated from the drilling parameters as well as a wear index using an empirical correlation, provided all the constants and coefficients are accurately determined. This paper introduces a new methodology to estimate the eight coefficients of the Bourgoyne and Young model using the gPROMS parameter estimation tool GPE (version 4.2.0). Real data collected from similar formations (12 ¼-in. sections) in two different fields in Libya are used to estimate the coefficients. The estimated coefficients are then used in the equations and applied to nearby wells in the same field to predict the bit wear. Keywords: Bourgoyne and Young model, bit wear, gPROMS, rate of penetration
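For readers unfamiliar with the model's shape, the Bourgoyne and Young rate of penetration is conventionally written as an exponential of a linear combination of eight terms, ROP = exp(a1 + Σ a_j x_j for j = 2..8), where the x_j encode formation, depth, differential pressure, weight on bit, rotary speed, tooth wear, and hydraulics effects. The sketch below only evaluates that functional form; the coefficient values are placeholders, not the estimates obtained with gPROMS in this paper.

```python
# Hedged sketch of the Bourgoyne & Young functional form with placeholder
# coefficients (the paper estimates the real a1..a8 from Libyan field data).
import math

def bourgoyne_young_rop(a, x):
    """a: eight coefficients a1..a8; x: seven drilling-effect terms x2..x8."""
    assert len(a) == 8 and len(x) == 7
    return math.exp(a[0] + sum(aj * xj for aj, xj in zip(a[1:], x)))

a = [3.0, 0.1, 0.05, 0.02, 0.4, 0.3, -0.5, 0.2]  # placeholder coefficients
x = [0.0] * 7  # with all effect terms zero, ROP reduces to exp(a1)
rop = bourgoyne_young_rop(a, x)
```

Because the model is log-linear in the coefficients, parameter estimation (as done here with GPE) amounts to fitting ln(ROP) against the x_j terms over the recorded drilling data.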
Procedia PDF Downloads 154
7473 'Explainable Artificial Intelligence' and Reasons for Judicial Decisions: Why Justifications and Not Just Explanations May Be Required
Authors: Jacquelyn Burkell, Jane Bailey
Abstract:
Artificial intelligence (AI) solutions deployed within the justice system face the critical task of providing acceptable explanations for decisions or actions. These explanations must satisfy the joint criteria of public and professional accountability, taking into account the perspectives and requirements of multiple stakeholders, including judges, lawyers, parties, witnesses, and the general public. This research project analyzes and integrates existing literature on explanations in order to propose guidelines for explainable AI in the justice system. Specifically, we review three bodies of literature: (i) explanations of the purpose and function of 'explainable AI'; (ii) the relevant case law, judicial commentary, and legal literature focused on the form and function of reasons for judicial decisions; and (iii) the literature focused on the psychological and sociological functions of these reasons for judicial decisions from the perspective of the public. Our research suggests that while judicial 'reasons' (arguably accurate descriptions of the decision-making process and factors) do serve explanatory functions similar to those identified in the literature on 'explainable AI', they also serve an important 'justification' function (post hoc constructions that justify the decision that was reached). Further, members of the public are also looking for both justification and explanation in reasons for judicial decisions, and the absence of either feature is likely to contribute to diminished public confidence in the legal system. Therefore, artificially automated judicial decision-making systems that simply attempt to document the process of decision-making are unlikely in many cases to be useful to and accepted within the justice system. 
Instead, these systems should focus on the post hoc articulation of principles and precedents that support the decision or action, especially in cases where legal subjects' fundamental rights and liberties are at stake. Keywords: explainable AI, judicial reasons, public accountability, explanation, justification
Procedia PDF Downloads 126
7472 Ventriculo-Gallbladder Shunt: Case Series and Literature Review
Authors: Sandrieli Afornali, Adriano Keijiro Maeda, Renato Fedatto Beraldo, Carlos Alberto Mattozo, Ricardo Nascimento Brito
Abstract:
BACKGROUND: The variety most used in hydrocephalus treatment is the ventriculoperitoneal shunt (VPS). However, it may fail in 20 to 70% of cases, making it necessary to have alternative cavities for the implantation of the distal catheter. Ventriculo-atrial shunting (VAS) is described as the second option. To our knowledge, there were 121 reported cases of VGB shunt in children up to 2020, with a highly variable success rate, from 25 to 100%, and an average of 63% of patients presenting good long-term results. Our goal is to evaluate the epidemiological profile of patients submitted to ventriculo-gallbladder (VGB) shunt and, through a review of the literature, to compare our results with other series. METHODS: A retrospective cross-sectional observational study of a case series of nine patients. The medical records were reviewed of all patients who underwent VGB shunt at the Hospital Pequeno Príncipe in Curitiba, Paraná, Brazil, from January 2014 to October 2022. The inclusion criteria were: patients under 17 years of age with hydrocephalus of any etiology, currently or previously using a VGB shunt. RESULTS: There were 6 (66.7%) males and 3 (33.3%) females, with an average age of 73.6 months (6.1 years) at the time of surgery. They had undergone an average of 5.1 VPS revisions prior to the VGB shunt. Five (55.5%) had complications of the VGB shunt: infection (11.1%), atony (11.1%), hypodrainage due to kinking of the distal catheter (11.1%), and ventriculoenteric fistula (22.2%); all these patients were cured at surgical reapproach, and in 2 of them the VGB shunt was reimplanted. Two patients died (22.2%), five (55.5%) patients maintained the use of the VGB shunt in the follow-up period, and in 4 (44.4%) there was never a need for revision. CONCLUSION: The VGB shunt tends to be underestimated because it is still unconventional and little publicized in the literature. Our article shows a lower risk of death and a similar risk of complications when compared to other alternative shunts. 
We emphasize the VGB shunt as a safe procedure to be the second option when the VPS fails or is contraindicated. Keywords: hydrocephalus, ventriculo-gallbladder shunt, VGB shunt, VPS, ventriculoperitoneal shunt, ventriculoatrial shunt
Procedia PDF Downloads 72
7471 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis
Authors: Abeer A. Aljohani
Abstract:
The spread of the COVID-19 virus has been one of the most extreme pandemics across the globe. It is caused by the coronavirus, a contagious pathogen that continuously mutates into numerous variants; currently, the B.1.1.529 variant, labeled omicron, has been detected in South Africa. The huge spread of COVID-19 has affected many lives and has placed exceptional pressure on healthcare systems worldwide. Everyday life and the global economy have also been at stake. This research aims to predict COVID-19 disease at its initial stage to reduce the death count. Machine learning (ML) is nowadays used in almost every area. Numerous COVID-19 cases have placed a huge burden on hospitals as well as health workers. To reduce this burden, this paper predicts COVID-19 disease based on the symptoms and medical history of the patient. This research presents a unique architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. This paper uses a standard UCI dataset for predicting COVID-19 disease, comprising the symptoms of 5434 patients. This paper also compares several supervised ML techniques against the presented architecture. The architecture utilizes a 10-fold cross-validation process for generalization and the principal component analysis (PCA) technique for feature reduction. Standard parameters are used to evaluate the proposed architecture, including F1-score, precision, accuracy, recall, receiver operating characteristic (ROC), and area under the curve (AUC). The results show that decision tree, random forest, and neural networks outperform all other state-of-the-art ML techniques. This result can help effectively in identifying COVID-19 infection cases. Keywords: supervised machine learning, COVID-19 prediction, healthcare analytics, random forest, neural network
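The evaluation parameters named above all derive from the binary confusion matrix; a minimal sketch, with invented counts (not the UCI dataset's), shows how accuracy, precision, recall, and F1-score are computed.

```python
# Hedged sketch of standard classification metrics from a confusion matrix.
# tp/fp/fn/tn counts below are illustrative, not results from the paper.
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, how many real
    recall = tp / (tp + fn)             # of real positives, how many found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(tp=80, fp=10, fn=20, tn=90)
# acc == 0.85, rec == 0.8 with these counts
```

ROC and AUC extend the same counts by sweeping the classifier's decision threshold and plotting the true-positive rate against the false-positive rate.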
Procedia PDF Downloads 92
7470 Effect of Exit Annular Area on the Flow Field Characteristics of an Unconfined Premixed Annular Swirl Burner
Authors: Vishnu Raj, Chockalingam Prathap
Abstract:
The objective of this study was to explore the impact of variation in the exit annular area on the local flow field features and the flame stability of an annular premixed swirl burner (unconfined) operated with a premixed n-butane-air mixture at equivalence ratio (ϕ) = 1, 1 bar, and 300 K. A swirl burner with an axial swirl generator having a swirl number of 1.5 was used. Three different burner heads were chosen so that the exit area increased in the ratio 100%, 160%, and 220%, with inner and outer diameters and cross-sectional areas of (1) 10 mm and 15 mm, 98 mm²; (2) 17.5 mm and 22.5 mm, 157 mm²; and (3) 25 mm and 30 mm, 216 mm². The bulk velocity and the Reynolds number, based on the hydraulic diameter and unburned gas properties, were kept constant at 12 m/s and 4000. (i) Planar PIV with TiO2 seeding particles and (ii) OH* chemiluminescence were used to measure the velocity fields and reaction zones of the swirl flames at 5 Hz, respectively. Velocity fields and jet spreading rates measured under isothermal and reactive conditions revealed that the presence of a flame significantly altered the flow field in the radial direction due to gas expansion. Important observations from the flame measurements were: the height and maximum width of the recirculation bubbles normalized by the hydraulic diameter, and the jet spreading angles, for the three exit area cases were (a) 4.52, 1.95, 28°; (b) 6.78, 2.37, 34°; and (c) 8.73, 2.32, 37°. The lean blowout was also measured, and the respective equivalence ratios were 0.80, 0.92, and 0.82. The LBO limit was relatively narrow for the 157 mm² case. For this case, particle image velocimetry (PIV) measurements showed that the turbulent kinetic energy and turbulence intensity were relatively high compared to the other two cases, resulting in higher stretch rates and a narrower lean blowout (LBO) margin. Keywords: chemiluminescence, jet spreading rate, lean blowout, swirl flow
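A quick consistency check on the constant Reynolds number: for an annulus the hydraulic diameter is D_h = D_outer - D_inner, which is 5 mm for all three burner heads, so a fixed bulk velocity gives the same Re in every case. The kinematic viscosity below is an assumed value for the unburned mixture, roughly that of air at 300 K, not a figure from the paper.

```python
# Back-of-the-envelope Reynolds number for the annular burner exit.
# nu = 1.5e-5 m^2/s is an assumed kinematic viscosity (approx. air at 300 K).
def reynolds(velocity, d_outer, d_inner, nu):
    d_h = d_outer - d_inner  # annulus hydraulic diameter [m]
    return velocity * d_h / nu

re = reynolds(velocity=12.0, d_outer=0.015, d_inner=0.010, nu=1.5e-5)
# re == 4000 with these numbers, matching the stated Reynolds number
```

Since D_h is identical across the three heads, holding the bulk velocity at 12 m/s is what keeps Re fixed at 4000 while the exit area (and hence flow rate) changes.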
Procedia PDF Downloads 67
7469 Techno-Economic Analysis of 1,3-Butadiene and ε-Caprolactam Production from C6 Sugars
Authors: Iris Vural Gursel, Jonathan Moncada, Ernst Worrell, Andrea Ramirez
Abstract:
In order to achieve the transition from a fossil-based to a bio-based economy, biomass needs to replace fossil resources in meeting the world's energy and chemical needs. This calls for the development of biorefinery systems allowing cost-efficient conversion of biomass to chemicals. In biorefinery systems, feedstock is converted to key intermediates called platforms, which are converted to a wide range of marketable products. The C6 sugars platform stands out due to its unique versatility as a precursor for multiple valuable products. Among the different potential routes from C6 sugars to bio-based chemicals, 1,3-butadiene and ε-caprolactam appear to be of great interest. Butadiene is an important chemical for the production of synthetic rubbers, while caprolactam is used in the production of nylon-6. In this study, the ex-ante techno-economic performance of the 1,3-butadiene and ε-caprolactam routes from C6 sugars was assessed. The aim is to provide insight, from an early stage of development, into the potential of these new technologies and their bottlenecks and key cost drivers. Two cases for each product line were analyzed to take into consideration the effect of possible changes on the overall performance of both butadiene and caprolactam production. A conceptual process design for the processes was developed using Aspen Plus based on currently available data from laboratory experiments. Then, operating and capital costs were estimated, and an economic assessment was carried out using Net Present Value (NPV) as the indicator. Finally, sensitivity analyses on processing capacity and prices were done to take into account possible variations. Results indicate that both processes perform similarly from an energy intensity point of view, ranging between 34-50 MJ per kg of main product. However, in terms of processing yield (kg of product per kg of C6 sugar), caprolactam shows a higher yield by a factor of 1.6-3.6 compared to butadiene. 
For butadiene production, with the economic parameters used in this study, a negative NPV (-642 and -647 M€) was attained for both cases, indicating economic infeasibility. For caprolactam production, one of the cases also showed economic infeasibility (-229 M€), but the case with the higher caprolactam yield resulted in a positive NPV (67 M€). Sensitivity analysis indicated that the economic performance of caprolactam production can be improved by increasing capacity (higher C6 sugars intake), reflecting the benefits of economies of scale. Furthermore, humins valorization for heat and power production was considered and found to have a positive effect. Butadiene production was found to be sensitive to the prices of the C6 sugars feedstock and the butadiene product. However, even at 100% variation of the two parameters, butadiene production remained economically infeasible. Overall, the caprolactam production line shows higher economic potential than that of butadiene. The results are useful in guiding experimental research and providing direction for further development of bio-based chemicals. Keywords: bio-based chemicals, biorefinery, C6 sugars, economic analysis, process modelling
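The NPV indicator used in the assessment can be sketched as discounted cash flows net of the initial investment; the discount rate, investment, and cash flows below are illustrative placeholders, not the study's figures.

```python
# Hedged sketch of the Net Present Value indicator; all numbers are invented.
def npv(rate, initial_investment, cash_flows):
    """NPV = -I0 + sum over years t of CF_t / (1 + r)^t."""
    return -initial_investment + sum(
        cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

value = npv(rate=0.10, initial_investment=100.0, cash_flows=[45.0, 45.0, 45.0])
# value is about 11.91 here; a positive NPV signals economic feasibility,
# a negative one (as for both butadiene cases) signals infeasibility.
```

Scaling the cash flows up with plant capacity while capital cost grows sub-linearly is the economies-of-scale effect the sensitivity analysis captures for caprolactam.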
Procedia PDF Downloads 152
7468 Morphological and Molecular Evaluation of Dengue Virus Serotype 3 Infection in BALB/c Mice Lungs
Authors: Gabriela C. Caldas, Fernanda C. Jacome, Arthur da C. Rasinhas, Ortrud M. Barth, Flavia B. dos Santos, Priscila C. G. Nunes, Yuli R. M. de Souza, Pedro Paulo de A. Manso, Marcelo P. Machado, Debora F. Barreto-Vieira
Abstract:
The establishment of animal models for studies of DENV infection has been challenging, since circulating epidemic viruses do not naturally infect nonhuman species. Such studies are of great relevance to the various areas of dengue research, including immunopathogenesis, drug development, and vaccines. In this scenario, the main objective of this study is to verify possible morphological changes, as well as the presence of antigens and viral RNA, in lung samples from BALB/c mice experimentally infected with an epidemic, non-neuroadapted DENV-3 strain. Male BALB/c mice, 2 months old, were inoculated with DENV-3 by the intravenous route. After 72 hours of infection, the animals were euthanized and the lungs were collected. Part of the samples was processed by standard techniques for analysis by light and transmission electron microscopy, and another part was processed for real-time PCR analysis. Morphological analyses of lungs from uninfected mice showed preserved tissue areas. In mice infected with DENV-3, the analyses revealed interalveolar septum thickening with presence of inflammatory infiltrate, foci of alveolar atelectasis and hyperventilation, bleeding foci in the interalveolar septum and bronchioles, peripheral capillary congestion, accumulation of fluid in the blood capillaries, signs of interstitial cell necrosis, and the presence of platelets and mononuclear inflammatory cells circulating in the capillaries and/or adhered to the endothelium. In addition, activation of endothelial cells, platelets, mononuclear inflammatory cells, and neutrophil-type polymorphonuclear inflammatory cells, evidenced by the emission of cytoplasmic membrane prolongations, was observed. DENV-like particles were seen in the cytoplasm of endothelial cells. The viral genome was recovered from 3 of 12 lung samples. 
These results demonstrate that the BALB/c mouse represents a suitable model for the study of the histopathological changes induced by DENV infection in the lung, with tissue alterations similar to those observed in human cases of dengue. Keywords: BALB/c mice, dengue, histopathology, lung, ultrastructure
Procedia PDF Downloads 253
7467 Good Governance: An Effective Public Participation Approach for Urban Development of City Centers
Authors: Lojaine Okacha
Abstract:
In the past half-century, researchers have paid increasing attention to enhancing the performance of urban spaces. Their idea of performance comprised urban climate performance, space syntax, economic performance, and enhancing the quality of life in a space. However, they all agreed that the key to achieving any of the previously mentioned development projects is good governance. Good governance allows citizens to participate freely in urbanization or development projects within cities; consequently, the city's resources and assets are used as efficiently as possible, and the fulfillment of users' needs and requests is ensured. This paper aims to propose an effective participation framework to help citizens have their voices heard and participate in the decisions that affect their living situation. The framework allows governments to make the best use of their public resources. This study focuses on public participation in third-world countries with unitary decentralized governance systems, such as Egypt. It summarizes the challenges facing participation practices, identifies the keys to a successful participation process, and outlines an effective participation practice resting on the relationships between the levels of participation, the stakeholders participating, the urban development stages, the city systems, and the participation process. These components are integrated to create an effective real-world participation framework. The analysis produced a functional and progressive approach to effective public participation to present to governments. The model itself is combined with additional principles enabling best practice in the process. The framework is finally compared with a real case of urban development. Keywords: public participation, good governance, urban development, city systems
Procedia PDF Downloads 195
7466 Integrating Radar Sensors with an Autonomous Vehicle Simulator for an Enhanced Smart Parking Management System
Authors: Mohamed Gazzeh, Bradley Null, Fethi Tlili, Hichem Besbes
Abstract:
The burgeoning global ownership of personal vehicles has put a significant strain on urban infrastructure, notably parking facilities, leading to traffic congestion and environmental concerns. Effective parking management systems (PMS) are indispensable for optimizing urban traffic flow and reducing emissions. The most commonly deployed systems nowadays rely on computer vision technology. This paper explores the integration of radar sensors and simulation in the context of smart parking management. We concentrate on radar sensors due to their versatility and utility in automotive applications, which extend to PMS. Additionally, radar sensors play a crucial role in driver assistance systems and autonomous vehicle development. However, the resource-intensive nature of radar data collection for algorithm development and testing necessitates innovative solutions. Simulation, particularly the monoDrive simulator, an internal development tool used by NI, the test and measurement division of Emerson, offers a practical means to overcome this challenge. The primary objectives of this study encompass simulating radar sensors to generate a substantial dataset for algorithm development and testing and, critically, assessing the transferability of models between simulated and real radar data. We focus on occupancy detection in parking as a practical use case, categorizing each parking space as vacant or occupied. The simulation approach using monoDrive enables algorithm validation and reliability assessment for virtual radar sensors. We meticulously designed various parking scenarios, involving manual measurements of parking-spot coordinates and orientations and the use of a TI AWR1843 radar. To create a diverse dataset, we generated 4950 scenarios, comprising a total of 455,400 parking spots. 
This extensive dataset encompasses radar configuration details, ground truth occupancy information, radar detections, and associated object attributes such as range, azimuth, elevation, radar cross-section, and velocity data. The paper also addresses the intricacies and challenges of real-world radar data collection, highlighting the advantages of simulation in producing radar data for parking lot applications. We developed classification models based on Support Vector Machines (SVM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), exclusively trained and evaluated on simulated data. Subsequently, we applied these models to real-world data, comparing their performance against the monoDrive dataset. The study demonstrates the feasibility of transferring models from a simulated environment to real-world applications, achieving an impressive accuracy score of 92% using only one radar sensor. This finding underscores the potential of radar sensors and simulation in the development of smart parking management systems, offering significant benefits for improving urban mobility and reducing environmental impact. The integration of radar sensors and simulation represents a promising avenue for enhancing smart parking management systems, addressing the challenges posed by the exponential growth in personal vehicle ownership. This research contributes valuable insights into the practicality of using simulated radar data in real-world applications and underscores the role of radar technology in advancing urban sustainability. Keywords: autonomous vehicle simulator, FMCW radar sensors, occupancy detection, smart parking management, transferability of models
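The occupancy-detection use case can be boiled down to a toy stand-in: a spot is labeled occupied when enough radar detections fall inside its footprint. The real pipeline clusters detections (DBSCAN) or classifies feature vectors (SVM); the axis-aligned point-in-box count below, with invented coordinates, only illustrates the labeling step.

```python
# Toy stand-in for radar-based occupancy labeling; coordinates, the spot
# footprint, and the hit threshold are all assumptions for illustration.
def spot_occupied(detections, spot, min_hits=3):
    """detections: (x, y) points in metres; spot: (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = spot
    hits = sum(1 for x, y in detections
               if xmin <= x <= xmax and ymin <= y <= ymax)
    return hits >= min_hits

points = [(1.2, 0.5), (1.3, 0.6), (1.4, 0.4), (5.0, 5.0)]  # simulated returns
occupied = spot_occupied(points, spot=(1.0, 0.0, 2.0, 1.0))
# three of the four points fall inside the spot, so it is labeled occupied
```

In the actual study, the same ground-truth labels come from the simulator, which is what allows models trained purely on monoDrive output to be scored against real TI AWR1843 recordings.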
Procedia PDF Downloads 81
7465 Problem-Based Learning for Hospitality Students. The Case of Madrid Luxury Hotels and the Recovery after the Covid Pandemic
Authors: Caridad Maylin-Aguilar, Beatriz Duarte-Monedero
Abstract:
Problem-based learning (PBL) is a useful tool for adult and practice-oriented audiences, such as university students. As a consequence of the huge disruption caused by the COVID pandemic in the hospitality industry, hotels of all categories in Spain closed down from March 2020. Until that moment, the luxury segment had been booming, with optimistic prospects for new openings, and hospitality students were expecting a positive situation in terms of employment and career development. By the beginning of the 2020-21 academic year, these expectations were seriously harmed. By October 2020, only 9 of the 32 hotels in the luxury segment were open, with an occupancy rate of 9%. Shortly after, the evidence of a second wave, especially affecting Spain and the homelands of incoming visitors, bitterly smashed all forecasts. In accordance with the situation, a team of four professors and practitioners from four different subject areas developed a real case inspired by one of these hotels, the 5-star Emperatriz by Barceló. Second-year students were provided with real information such as marketing plans, profit and loss and operational accounts, employee profiles, and employment costs. Their challenge was to act as consultants, identifying potential courses of action related to the best, base, and worst cases. To do so, they were organized in teams and supported by fourth-year students. Each professor deployed the problem in their subject; thus, research on customer behavior and feelings was necessary to review, as part of the marketing plan, whether the current offering of the hotel was clear enough to guarantee and communicate a safe environment, as well as the ranking of other basic, supporting, and facilitating services. Continuous monitoring of competitors' activity was also necessary to understand the behavior of the open outlets. 
The actions designed after the diagnosis were ranked according to their impact and their feasibility in terms of time and resources. They also had to be actionable by the hotel's current staff and managers, and an internal-marketing perspective was valued. After a process of refinement, seven teams presented their conclusions to the Emperatriz general manager and the rest of the professors. Four main ideas were chosen, and all the teams, irrespective of authorship, were asked to develop them to the state of a minimum viable product, with estimates of impact and cost. As the process continues, students are now accompanying the hotel and its staff in the prudent reopening of facilities, almost one year after the closure. From the professors' point of view, the key learnings were: (1) when facing a real problem, a holistic view is needed, so the vision of subjects as silos collapses; (2) when educating new professionals, providing them with the resilience and resistance necessary to deal with a problem is always mandatory, but now seems more relevant than ever; and (3) collaborative work and contact with real practitioners in such an uncertain and changing environment is a challenge, but it is worthwhile considering the learning result and its potential. Keywords: problem-based learning, hospitality recovery, collaborative learning, resilience
Procedia PDF Downloads 183
7464 Comparison of Statins Dose Intensity on HbA1c Control in Outpatients with Type 2 Diabetes: A Prospective Cohort Study
Authors: Mohamed A. Hammad, Dzul Azri Mohamed Noor, Syed Azhar Syed Sulaiman, Ahmed A. Khamis, Abeer Kharshid, Nor Azizah Aziz
Abstract:
The effect of statin dose intensity (SDI) on glycemic control in patients with existing diabetes is unclear, and many contradictory findings have been reported in the literature, limiting the ability to draw conclusions. This project was designed to compare the effect of SDI on glycated hemoglobin (HbA1c%) control in outpatients with Type 2 diabetes at the endocrine clinic of Hospital Pulau Pinang, Malaysia, between July 2015 and August 2016. A prospective cohort study was conducted in which the records of 345 patients with Type 2 diabetes (289 in the moderate-SDI group and 56 in the high-SDI group) were reviewed for demographics and laboratory tests. Attainment of the glycemic control target (HbA1c < 7% for patients < 65 years, and < 8% for patients ≥ 65 years) was estimated, and the results are presented as descriptive statistics. Of the 289 patients in the moderate-SDI cohort (mean age 57.3 ± 12.4 years), only 86 (29.8%) had controlled glycemia, while 203 (70.2%) had uncontrolled glycemia (95% confidence interval (CI): 6.2–10.8). In the high-SDI group of 56 patients (mean age 57.7 ± 12.4 years), 11 (19.6%) had controlled glycemia and 45 (80.4%) had uncontrolled glycemia (95% CI: 7.1–11.9). The study demonstrated that the relative risk (RR) of uncontrolled glycemia in patients with Type 2 diabetes who used high-SDI is 1.15, and the excess relative risk (ERR) is 15%. The absolute risk (AR) is 10.2%, and the number needed to harm (NNH) is 10. Outpatients with Type 2 diabetes who use high-SDI statins therefore have a higher risk of uncontrolled glycemia than those treated with moderate-SDI. Keywords: cohort study, diabetes control, dose intensity, HbA1c, Malaysia, statin, type 2 diabetes mellitus, uncontrolled glycemia
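The abstract's risk figures follow directly from the reported group counts. A minimal sketch of the arithmetic (in Python; the code is ours, not the paper's), assuming the group risks are first rounded to three decimals, consistent with the reported 80.4% and 70.2%:

```python
def risk_measures(events_exposed, n_exposed, events_control, n_control):
    """RR, ERR, AR, and NNH from two-group event counts."""
    # group risks, rounded to three decimals as in the reported percentages
    risk_exposed = round(events_exposed / n_exposed, 3)   # high-SDI: 45/56
    risk_control = round(events_control / n_control, 3)   # moderate-SDI: 203/289
    rr = risk_exposed / risk_control    # relative risk
    err = rr - 1.0                      # excess relative risk
    ar = risk_exposed - risk_control    # absolute risk (risk difference)
    nnh = 1.0 / ar                      # number needed to harm
    return rr, err, ar, nnh

# counts from the abstract: uncontrolled glycemia in each cohort
rr, err, ar, nnh = risk_measures(45, 56, 203, 289)
print(f"RR={rr:.2f}, ERR={err:.0%}, AR={ar:.1%}, NNH={round(nnh)}")
# → RR=1.15, ERR=15%, AR=10.2%, NNH=10
```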
Procedia PDF Downloads 306
7463 Fake Accounts Detection in Twitter Based on Minimum Weighted Feature Set
Authors: Ahmed ElAzab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny
Abstract:
Social networking sites such as Twitter and Facebook attract over 500 million users across the world, and for those users social life, and even practical life, has become intertwined with these platforms. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity has led to various problems, including the possibility of exposing users to incorrect information through fake accounts, which results in the spread of malicious content during live events. This situation can cause huge real-world damage to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines a minimized set of the main factors that influence the detection of fake accounts on Twitter; the determined factors are then applied using different classification techniques, the results of these techniques are compared, and the most accurate algorithm is selected. The study has been compared with recent research in the same area, and this comparison confirms the accuracy of the proposed approach. We argue that this study can be applied continuously on Twitter to automatically detect fake accounts; moreover, it can be applied to other social networking sites such as Facebook with minor changes, according to the nature of each social network, as discussed in this paper. Keywords: fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques
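To illustrate the idea of minimizing a feature set for fake-account detection, here is a small sketch of greedy forward selection scored with a simple nearest-centroid classifier. This is not the paper's actual method: the feature names (`followers_ratio`, `tweets_per_day`, `age_days`) and the toy accounts are invented for the example, and the paper compares several classifiers rather than using a centroid rule.

```python
# toy labeled accounts: (feature dict, label) with 1 = fake, 0 = genuine;
# the features and values below are hypothetical, for illustration only
ACCOUNTS = [
    ({"followers_ratio": 0.05, "tweets_per_day": 120, "age_days": 10}, 1),
    ({"followers_ratio": 0.02, "tweets_per_day": 300, "age_days": 5}, 1),
    ({"followers_ratio": 0.90, "tweets_per_day": 4, "age_days": 900}, 0),
    ({"followers_ratio": 1.20, "tweets_per_day": 2, "age_days": 1500}, 0),
]

def centroid(rows, feats):
    # mean feature vector of a class, restricted to the selected features
    return {f: sum(r[f] for r in rows) / len(rows) for f in feats}

def accuracy(feats, data):
    # classify each account by the nearer class centroid (squared distance)
    fake = centroid([x for x, y in data if y == 1], feats)
    real = centroid([x for x, y in data if y == 0], feats)
    def predict(x):
        d_fake = sum((x[f] - fake[f]) ** 2 for f in feats)
        d_real = sum((x[f] - real[f]) ** 2 for f in feats)
        return 1 if d_fake < d_real else 0
    return sum(predict(x) == y for x, y in data) / len(data)

def minimal_feature_set(all_feats, data):
    # greedily add the feature with the largest accuracy gain;
    # stop as soon as no remaining feature improves accuracy
    selected, best = [], 0.0
    while True:
        gains = [(accuracy(selected + [f], data), f)
                 for f in all_feats if f not in selected]
        if not gains:
            return selected, best
        acc, feat = max(gains)
        if acc <= best:
            return selected, best
        selected, best = selected + [feat], acc

features = ["followers_ratio", "tweets_per_day", "age_days"]
selected, best = minimal_feature_set(features, ACCOUNTS)
```

On this toy data a single feature already separates the two classes perfectly, so the greedy pass stops after one feature, which is the point of a minimum weighted feature set: detection accuracy is preserved while the features to collect and compute per account are reduced.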
Procedia PDF Downloads 416