Search results for: visual sensor networks
270 Data Analysis Tool for Predicting Water Scarcity in Industry
Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse
Abstract:
Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from various natural water sources such as the sea, the ocean, rivers, aquifers, etc. Once used, water is discharged into the environment, reprocessed at the plant, or sent to treatment plants. These withdrawals and discharges have a direct impact on natural water resources. These impacts can concern the quantity of water available, the quality of the water used, or effects that are more complex to measure and less direct, such as the health of the population downstream of the watercourse, for example. Based on the analysis of data (meteorological data, river characteristics, physicochemical substances), we wish to predict water stress episodes and anticipate prefectural decrees, which can impact the performance of plants; propose improvement solutions; help industrialists in their choice of location for a new plant; visualize possible interactions between companies to optimize exchanges and encourage the pooling of water treatment solutions; and set up circular economies around the issue of water. The development of a system for the collection, processing, and use of data related to water resources requires the functional constraints specific to the latter to be made explicit. Thus, the system will have to be able to store a large amount of data from sensors (which are the main type of data in plants and their environment). In addition, manufacturers need 'near-real-time' processing of information in order to be able to make the best decisions (to be rapidly notified of an event that would have a significant impact on water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions. In this study, we set up an infrastructure centered on the TICK application stack (for Telegraf, InfluxDB, Chronograf, and Kapacitor), which is a set of loosely coupled but tightly integrated open source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the cross-industry standard process for data mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on the case study of learning the level of a river with a 7-day horizon. The management of water and the activities within the plants - which depend on this resource - should be considerably improved thanks, on the one hand, to the learning that allows the anticipation of periods of water stress, and on the other hand, to the information system that is able to warn decision-makers with alerts created from the formalization of prefectural decrees.
Keywords: data mining, industry, machine learning, shortage, water resources
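The abstract above does not spell out the learning setup beyond the TICK stack and the 7-day forecasting horizon, so the following Python sketch only illustrates how the CRISP-DM data-preparation and modeling phases might look once daily sensor readings have been exported from the time-series store. The file name, column names, and the choice of a random-forest regressor are assumptions, not the authors' implementation.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# Hypothetical daily export from the time-series store (e.g. the TICK stack's InfluxDB):
# columns: river_level (m), rainfall (mm), temperature (degrees C), indexed by date.
df = pd.read_csv("river_daily.csv", parse_dates=["date"], index_col="date")

HORIZON = 7  # predict the river level 7 days ahead
df["target"] = df["river_level"].shift(-HORIZON)

# Simple lag features over the past week for each sensor signal
for col in ["river_level", "rainfall", "temperature"]:
    for lag in range(1, 8):
        df[f"{col}_lag{lag}"] = df[col].shift(lag)

df = df.dropna()
X, y = df.drop(columns="target"), df["target"]

# Time-ordered cross-validation: shuffling the rows would leak future information
model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=TimeSeriesSplit(n_splits=5), scoring="r2")
print("R2 per fold:", scores.round(2))
```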
Procedia PDF Downloads 121
269 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research answers which transit network structure is the most suitable to serve specific demand requirements in an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips; this approach is based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transferring is essential to complete most trips. To answer which of them is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way, considering that only a central area attracts all trips. If this area is small, the mobility pattern is highly concentrated; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, introducing more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. The area of applicability of each network strategy is not constant; it depends on the characteristics of the demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, defined by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology determines which network design approach is best for a city by comparing the theoretical results with the real dispersion degree.
Keywords: analytical network design model, network structure, public transport, urban dispersion
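As a small illustration of the second step described above, the Gini coefficient used to measure trip concentration can be computed directly from zonal trip totals. The sketch below is a generic Python calculation; the zone values are invented for the example and are not taken from the study.

```python
import numpy as np

def gini(values: np.ndarray) -> float:
    """Gini coefficient of a non-negative distribution (0 = uniform, 1 = fully concentrated)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    cum = np.cumsum(v)
    # Standard discrete formula based on the Lorenz curve
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Hypothetical example: trips attracted by each zone of a city
trips_per_zone = np.array([1200, 950, 400, 150, 80, 40, 30, 20, 15, 10])
print(f"Concentration (Gini): {gini(trips_per_zone):.2f}")
```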
Procedia PDF Downloads 230
268 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology makes it possible to run several processes on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, and thus greatly simplifying aggregation/embedding implementations by simply deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
Keywords: aggregation, deployment, embedding, resource allocation
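To make the embedding idea concrete, the sketch below pairs replicas of communicating services onto the same machine (the a1-b1 / a2-b2 configuration from the example above) starting from a small declarative definition. It is a toy Python illustration, not the i2kit tool or the formal method proposed in the paper; the service names and the dictionary format are assumptions.

```python
from itertools import zip_longest

# Hypothetical declarative definition: each service lists its replica count and
# which services it should be embedded with (co-located for localhost traffic).
architecture = {
    "A": {"replicas": 2, "embed_with": ["B"]},
    "B": {"replicas": 2, "embed_with": []},
    "C": {"replicas": 3, "embed_with": []},   # deployed on its own machines
}

def plan_embedding(arch):
    """Pair replicas of embedded services onto the same machine (a1-b1, a2-b2, ...)."""
    machines, used = [], set()
    for svc, spec in arch.items():
        for partner in spec["embed_with"]:
            for ra, rb in zip_longest(range(spec["replicas"]), range(arch[partner]["replicas"])):
                containers = []
                if ra is not None:
                    containers.append(f"{svc.lower()}{ra + 1}")
                if rb is not None:
                    containers.append(f"{partner.lower()}{rb + 1}")
                machines.append({"machine": f"m{len(machines) + 1}", "containers": containers})
            used.update({svc, partner})
    # Remaining services get one machine per replica (no aggregation in this sketch)
    for svc, spec in arch.items():
        if svc not in used:
            for r in range(spec["replicas"]):
                machines.append({"machine": f"m{len(machines) + 1}",
                                 "containers": [f"{svc.lower()}{r + 1}"]})
    return machines

for m in plan_embedding(architecture):
    print(m["machine"], "->", ", ".join(m["containers"]))
```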
Procedia PDF Downloads 203
267 Preparation of Allyl BODIPY for the Click Reaction with Thioglycolic Acid
Authors: Chrislaura Carmo, Luca Deiana, Mafalda Laranjo, Abilio Sobral, Armando Cordova
Abstract:
Photodynamic therapy (PDT) is currently used for the treatment of malignancies and premalignant tumors. It is based on the capture of a photosensitizing molecule (PS) which, when excited by light at a certain wavelength, reacts with oxygen and generates oxidizing species (radicals, singlet oxygen, triplet species) in target tissues, leading to cell death. BODIPY (4,4-difluoro-4-bora-3a,4a-diaza-s-indaceno) derivatives are emerging as important candidates for photosensitizer in photodynamic therapy of cancer cells due to their high triplet quantum yield. Today these dyes are relevant molecules in photovoltaic materials and fluorescent sensors. In this study, it will be demonstrated the possibility that BODIPY can be covalently linked to thioglycolic acid through the click reaction. Thiol−ene click chemistry has become a powerful synthesis method in materials science and surface modification. The design of biobased allyl-terminated precursors with high renewable carbon content for the construction of the thiol-ene polymer networks is essential for sustainable development and green chemistry. The work aims to synthesize the BODIPY (10-(4-(allyloxy) phenyl)-2,8-diethyl-5,5-difluoro-1,3,7,9-tetramethyl-5H-dipyrrolo[1,2-c:2',1'-f] [1,3,2] diazaborinin-4-ium-5-uide) and to click reaction with Thioglycolic acid. BODIPY was synthesized by the condensation reaction between aldehyde and pyrrole in dichloromethane, followed by in situ complexation with BF3·OEt2 in the presence of the base. Then it was functionalized with allyl bromide to achieve the double bond and thus be able to carry out the click reaction. The thiol−ene click was performed using DMPA (2,2-Dimethoxy-2-phenylacetophenone) as a photo-initiator in the presence of UV light (320–500 nm) in DMF at room temperature for 24 hours. Compounds were characterized by standard analytical techniques, including UV-Vis Spectroscopy, 1H, 13C, 19F NMR and mass spectroscopy. The results of this study will be important to link BODIPY to polymers through the thiol group offering a diversity of applications and functionalization. This new molecule can be tested as third-generation photosensitizers, in which the dye is targeted by antibodies or nanocarriers by cells, mainly in cancer cells, PDT and Photodynamic Antimicrobial Chemotherapy (PACT). According to our studies, it was possible to visualize a click reaction between allyl BODIPY and thioglycolic acid. Our team will also test the reaction with other thiol groups for comparison. Further, we will do the click reaction of BODIPY with a natural polymer linked with a thiol group. The results of the above compounds will be tested in PDT assays on various lung cancer cell lines.Keywords: bodipy, click reaction, thioglycolic acid, allyl, thiol-ene click
Procedia PDF Downloads 132
266 Envy and Schadenfreude Domains in a Model of Neurodegeneration
Authors: Hernando Santamaría-García, Sandra Báez, Pablo Reyes, José Santamaría-García, Diana Matallana, Adolfo García, Agustín Ibañez
Abstract:
The study of moral emotions (i.e., Schadenfreude and envy) is critical to understand the ecological complexity of everyday interactions between cognitive, affective, and social cognition processes. Most previous studies in this area have used correlational imaging techniques and framed Schadenfreude and envy as monolithic domains. Here, we profit from a relevant neurodegeneration model to disentangle the brain regions engaged in three dimensions of Schadenfreude and envy: deservingness, morality, and legality. We tested 20 patients with behavioral variant frontotemporal dementia (bvFTD), 24 patients with Alzheimer’s disease (AD), as a contrastive neurodegeneration model, and 20 healthy controls on a novel task highlighting each of these dimensions in scenarios eliciting Schadenfreude and envy. Compared with the AD and control groups, bvFTD patients obtained significantly higher scores on all dimensions for both emotions. Interestingly, the legal dimension for both envy and Schadenfreude elicited higher emotional scores than the deservingness and moral dimensions. Furthermore, correlational analyses in bvFTD showed that higher envy and Schadenfreude scores were associated with greater deficits in social cognition, inhibitory control, and behavior. Brain anatomy findings (restricted to bvFTD and controls) confirmed differences in how these groups process each dimension. Schadenfreude was associated with the ventral striatum in all subjects. Also, in bvFTD patients, increased Schadenfreude across dimensions was negatively correlated with regions supporting social-value rewards, mentalizing, and social cognition (frontal pole, temporal pole, angular gyrus and precuneus). In all subjects, all dimensions of envy positively correlated with the volume of the anterior cingulate cortex, a region involved in processing unfair social comparisons. By contrast, in bvFTD patients, the intensified experience of envy across all dimensions was negatively correlated with a set of areas subserving social cognition, including the prefrontal cortex, the parahippocampus, and the amygdala. Together, the present results provide the first lesion-based evidence for the multidimensional nature of the emotional experiences of envy and Schadenfreude. Moreover, this is the first demonstration of a selective exacerbation of envy and Schadenfreude in bvFTD patients, probably triggered by atrophy to social cognition networks. Our results offer new insights into the mechanisms subserving complex emotions and moral cognition in neurodegeneration, paving the way for groundbreaking research on their interaction with other cognitive, social, and emotional processes.Keywords: social cognition, moral emotions, neuroimaging, frontotemporal dementia
Procedia PDF Downloads 290
265 To Live on the Margins: A Closer Look at the Social and Economic Situation of Illegal Afghan Migrants in Iran
Authors: Abdullah Mohammadi
Abstract:
Years of prolonged war in Afghanistan have led to one of the largest refugee and migrant populations in the contemporary world. During the continuous unrest which began in the 1970s (with a military coup, the Marxist revolution, and the subsequent invasion by the USSR), over one-third of the population migrated to neighboring countries, especially Pakistan and Iran. After the Soviet Army withdrawal in 1989, a new wave of conflicts emerged between rival Afghan groups, and this led to new refugees. The Taliban period, too, created its own refugees. During all these years, the I.R. of Iran has been one of the main destinations of Afghan refugees and migrants. At first, due to the political situation after the Islamic Revolution, the Iranian government did not restrict the entry of Afghan refugees. Those who came first to Iran received ID cards and had access to education and healthcare services. But in the 1990s, due to economic and social concerns, Iran’s policy towards Afghan refugees and migrants changed. The government has tried to identify and register Afghans in Iran and limit their access to some services and jobs. Unfortunately, there are few studies on the situation of Afghan refugees and migrants in Iran, and we have a dim and vague picture of them. Of the few studies done on this group, none focuses on the situation of illegal Afghan migrants in Iran. Here, we studied the social and economic aspects of illegal Afghan migrants’ lives in Iran. In doing so, we interviewed 24 illegal Afghan migrants in Iran. The method applied for analyzing the data is thematic analysis. For the interviews, we chose family heads (17 men and 7 women). According to the findings, the socio-economic situation of illegal Afghan migrants in Iran is very poor. Its main cause is the marginalization of this group, which results from government policies towards Afghan migrants. Most of the illegal Afghan migrants work in unskilled and menial jobs and live in rented houses on the margins of cities and villages. None of them could buy a house or vehicle because of legal restrictions. Based on their income, they form one of the lowest, most underprivileged groups in the society. Socially, they face many problems in their everyday life: social insecurity, harassment and violence, misuse of their situation by police and people, lack of educational opportunities, etc. In general, we may conclude that illegal Afghan migrants are poorly integrated into Iranian society. They face severe limitations compared to legal migrants and refugees and have no opportunity for upward social mobility. However, they have developed some strategies to cope with these difficulties, including seeking financial and emotional help from family and friendship networks, sending one family member to a third country (mostly to European countries), and establishing self-administered schools for children (schools which are illegal and run by educated Afghan youth).
Keywords: illegal Afghan migrants, marginalization, social insecurity, upward social mobility
Procedia PDF Downloads 317
264 Shark Detection and Classification with Deep Learning
Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti
Abstract:
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application spark pulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector’s accuracy, as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to become the dominant method for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
Keywords: classification, data mining, Instagram, remote monitoring, sharks
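The abstract states that the Shark Detector uses transfer learning and CNNs but does not specify the backbone or training code, so the following tf.keras sketch only illustrates the general pattern for the species-classification stage: a frozen ImageNet-pretrained backbone with a small trainable head. The directory layout, the MobileNetV2 backbone, and the training settings are assumptions for illustration, not the authors' pipeline.

```python
import tensorflow as tf

IMG_SIZE, NUM_SPECIES = (224, 224), 45  # the abstract reports 45 classifiable species

# Hypothetical directory of labelled shark crops produced by the object-detection stage
train_ds = tf.keras.utils.image_dataset_from_directory(
    "shark_crops/train", image_size=IMG_SIZE, batch_size=32)

# Transfer learning: frozen ImageNet backbone + a small trainable classification head
backbone = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
backbone.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = backbone(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_SPECIES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```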
Procedia PDF Downloads 121
263 Forming Form, Motivation and Their Biolinguistic Hypothesis: The Case of Consonant Iconicity in Tashelhiyt Amazigh and English
Authors: Noury Bakrim
Abstract:
When dealing with motivation/arbitrariness, forming form (Forma Formans) and morphodynamics are to be grasped as relevant implications of enunciation/enactment, schematization within the specificity of language as sound/meaning articulation. Thus, the fact that a language is a form does not contradict stasis/dynamic enunciation (reflexivity vs double articulation). Moreover, some languages exemplify the role of the forming form, uttering, and schematization (roots in Semitic languages, the Chinese case). Beyond the evolutionary biosemiotic process (form/substance bifurcation, the split between realization/representation), non-isomorphism/asymmetry between linguistic form/norm and linguistic realization (phonetics for instance) opens up a new horizon problematizing the role of Brain – sensorimotor contribution in the continuous forming form. Therefore, we hypothesize biotization as both process/trace co-constructing motivation/forming form. Henceforth, referring to our findings concerning distribution and motivation patterns within Berber written texts (pulse based obstruents and nasal-lateral levels in poetry) and oral storytelling (consonant intensity clustering in quantitative and semantic/prosodic motivation), we understand consonant clustering, motivation and schematization as a complex phenomenon partaking in patterns of oral/written iconic prosody and reflexive metalinguistic representation opening the stable form. We focus our inquiry on both Amazigh and English clusters (/spl/, /spr/) and iconic consonant iteration in [gnunnuy] (to roll/tumble), [smummuy] (to moan sadly or crankily). For instance, the syllabic structures of /splaeʃ/ and /splaet/ imply an anamorphic representation of the state of the world: splash, impact on aquatic surfaces/splat impact on the ground. The pair has stridency and distribution as distinctive features which specify its phonetic realization (and a part of its meaning) /ʃ/ is [+ strident] and /t/ is [+ distributed] on the vocal tract. Schematization is then a process relating both physiology/code as an arthron vocal/bodily, vocal/practical shaping of the motor-articulatory system, leading to syntactic/semantic thematization (agent/patient roles in /spl/, /sm/ and other clusters or the tense uvular /qq/ at the initial position in Berber). Furthermore, the productivity of serial syllable sequencing in Berber points out different expressivity forms. We postulate two Components of motivated formalization: i) the process of memory paradigmatization relating to sequence modeling under sensorimotor/verbal specific categories (production/perception), ii) the process of phonotactic selection - prosodic unconscious/subconscious distribution by virtue of iconicity. Basing on multiple tests including a questionnaire, phonotactic/visual recognition and oral/written reproduction, we aim at patterning/conceptualizing consonant schematization and motivation among EFL and Amazigh (Berber) learners and speakers integrating biolinguistic hypotheses.Keywords: consonant motivation and prosody, language and order of life, anamorphic representation, represented representation, biotization, sensori-motor and brain representation, form, formalization and schematization
Procedia PDF Downloads 143
262 The Positive Effects of Processing Instruction on the Acquisition of French as a Second Language: An Eye-Tracking Study
Authors: Cecile Laval, Harriet Lowe
Abstract:
Processing Instruction is a psycholinguistic pedagogical approach drawing insights from the Input Processing Model which establishes the initial innate strategies used by second language learners to connect form and meaning of linguistic features. With the ever-growing use of technology in Second Language Acquisition research, the present study uses eye-tracking to measure the effectiveness of Processing Instruction in the acquisition of French and its effects on learner’s cognitive strategies. The experiment was designed using a TOBII Pro-TX300 eye-tracker to measure participants’ default strategies when processing French linguistic input and any cognitive changes after receiving Processing Instruction treatment. Participants were drawn from lower intermediate adult learners of French at the University of Greenwich and randomly assigned to two groups. The study used a pre-test/post-test methodology. The pre-tests (one per linguistic item) were administered via the eye-tracker to both groups one week prior to instructional treatment. One group received full Processing Instruction treatment (explicit information on the grammatical item and on the processing strategies, and structured input activities) on the primary target linguistic feature (French past tense imperfective aspect). The second group received Processing Instruction treatment except the explicit information on the processing strategies. Three immediate post-tests on the three grammatical structures under investigation (French past tense imperfective aspect, French Subjunctive used for the expression of doubt, and the French causative construction with Faire) were administered with the eye-tracker. The eye-tracking data showed the positive change in learners’ processing of the French target features after instruction with improvement in the interpretation of the three linguistic features under investigation. 100% of participants in both groups made a statistically significant improvement (p=0.001) in the interpretation of the primary target feature (French past tense imperfective aspect) after treatment. 62.5% of participants made an improvement in the secondary target item (French Subjunctive used for the expression of doubt) and 37.5% of participants made an improvement in the cumulative target feature (French causative construction with Faire). Statistically there was no significant difference between the pre-test and post-test scores in the cumulative target feature; however, the variance approximately tripled between the pre-test and the post-test (3.9 pre-test and 9.6 post-test). This suggests that the treatment does not affect participants homogenously and implies a role for individual differences in the transfer-of-training effect of Processing Instruction. The use of eye-tracking provides an opportunity for the study of unconscious processing decisions made during moment-by-moment comprehension. The visual data from the eye-tracking demonstrates changes in participants’ processing strategies. Gaze plots from pre- and post-tests display participants fixation points changing from focusing on content words to focusing on the verb ending. This change in processing strategies can be clearly seen in the interpretation of sentences in both primary and secondary target features. This paper will present the research methodology, design and results of the experimental study using eye-tracking to investigate the primary effects and transfer-of-training effects of Processing Instruction. 
It will then provide evidence of the cognitive benefits of Processing Instruction in Second Language Acquisition and offer suggestions for the teaching of grammar in a second language.
Keywords: eye-tracking, language teaching, processing instruction, second language acquisition
Procedia PDF Downloads 280
261 Stroke Prevention in Patients with Atrial Fibrillation and Co-Morbid Physical and Mental Health Problems
Authors: Dina Farran, Mark Ashworth, Fiona Gaughran
Abstract:
Atrial fibrillation (AF), the most prevalent cardiac arrhythmia, is associated with an increased risk of stroke, contributing to heart failure and death. In this project, we aim to improve patient safety by screening for stroke risk among people with AF and co-morbid mental illness. To do so, we started by conducting a systematic review and meta-analysis on prevalence, management, and outcomes of AF in people with Serious Mental Illness (SMI) versus the general population. We then evaluated oral anticoagulation (OAC) prescription trends in people with AF and co-morbid SMI in King’s College Hospital. We also evaluated the association between mental illness severity and OAC prescription in eligible patients in South London and Maudsley (SLaM) NHS Foundation Trust. Next, we implemented an electronic clinical decision support system (eCDSS) consisting of a visual prompt on patient electronic Personal Health Records to screen for AF-related stroke risk in three Mental Health of Older Adults wards at SLaM. Finally, we assessed the feasibility and acceptability of the eCDSS by qualitatively investigating clinicians’ perspectives of the potential usefulness of the eCDSS (pre-intervention) and their experiences and their views regarding its impact on clinicians and patients (post-intervention). The systematic review showed that people with SMI had low reported rates of AF. AF patients with SMI were less likely to receive OAC than the general population. When receiving warfarin, people with SMI, particularly bipolar disorder, experienced poor anticoagulation control compared to the general population. Meta-analysis showed that SMI was not significantly associated with an increased risk of stroke or major bleeding when adjusting for underlying risk factors. The main findings of the first observational study were that among AF patients having a high stroke risk, those with co-morbid SMI were less likely than non-SMI to be prescribed any OAC, particularly warfarin. After 2019, there was no significant difference between the two groups. In the second observational study, patients with AF and co-morbid SMI were less likely to be prescribed any OAC compared to those with dementia, substance use disorders, or common mental disorders, adjusting for age, sex, stroke, and bleeding risk scores. Among AF patients with co-morbid SMI, warfarin was less likely to be prescribed to those having alcohol or substance dependency, serious self-injury, hallucinations or delusions, and activities of daily living impairment. In the intervention, clinicians were asked to confirm the presence of AF, clinically assess stroke and bleeding risks, record risk scores in clinical notes, and refer patients at high risk of stroke to OAC clinics. Clinicians reported many potential benefits for the eCDSS, including improving clinical effectiveness, better identification of patients at risk, safer and more comprehensive care, consistency in decision making and saving time. Identified potential risks included rigidity in decision-making, overreliance, reduced critical thinking, false positive recommendations, annoyance, and increased workload. This study presents a unique opportunity to quantify AF patients with mental illness who are at high risk of severe outcomes using electronic health records. This has the potential to improve health outcomes and, therefore patients' quality of life.Keywords: atrial fibrillation, stroke, mental health conditions, electronic clinical decision support systems
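The abstract above mentions that clinicians are prompted to assess stroke risk and record risk scores without naming the instrument; in atrial fibrillation, the CHA2DS2-VASc score is the one most commonly used for this purpose, so the sketch below assumes that choice. The field names, the example patient, and the referral threshold are illustrative only and are not the actual logic of the eCDSS described in the study.

```python
def cha2ds2_vasc(age: int, female: bool, heart_failure: bool, hypertension: bool,
                 diabetes: bool, prior_stroke_tia: bool, vascular_disease: bool) -> int:
    """CHA2DS2-VASc stroke-risk score for atrial fibrillation (range 0-9)."""
    score = 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if female else 0
    score += 1 if heart_failure else 0
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_tia else 0
    score += 1 if vascular_disease else 0
    return score

# Hypothetical patient record pulled from the electronic Personal Health Record
score = cha2ds2_vasc(age=78, female=True, heart_failure=False, hypertension=True,
                     diabetes=False, prior_stroke_tia=False, vascular_disease=True)

# A simple rule a visual prompt could apply: flag for OAC-clinic referral above a threshold
if score >= 2:
    print(f"CHA2DS2-VASc = {score}: high stroke risk, consider referral to the OAC clinic")
```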
Procedia PDF Downloads 49
260 Immigrant Women's Voices and Integrating Feminism into Migration Theory
Authors: Florence Nyemba, Rufaro Chitiyo
Abstract:
This work features the voices of women as they describe their experiences living in the diaspora, either with their families or alone. The contributing authors pursued this project to understand how the women’s personal lives (and those of their families back home) changed, both positively and negatively. The work addressed the following important questions: What is female migration? What are the factors causing women to migrate? What types of migration do women engage in? What is the influence of family relationships on migration? What are the challenges of migration? How do migrant women maintain ties with their home countries? What is the role of social networks in migration? How can feminist theories and methodologies be incorporated in migration studies? Women continue to contribute significantly to mass movements of people across the globe, yet their voices remain silent in the literature on migration. History shows that women have always been on the move trying to make a living, just like their male counterparts. Whether they migrate as spouses, daughters, or alone, women make up a sizeable portion of migration statistics around the world. These women are migrating independently, without the accompaniment of male relatives. This calls for the need to expand research on women as independent migrants without generalizing their experiences, as was the case with early studies on international migration. The goal of this work is to offer a rich and detailed description of the lives of immigrant women across the globe using theoretical frameworks that advance gender and migration research. Methodology: This work invited scholars and researchers from across the globe whose research interests lie in gender and migration. The work incorporated a variety of methodologies for data collection and analysis, including oral narratives, interviews, and systematic literature reviews. Conclusion: There is a considerable amount of interest in various topics on gender, violence, and equality throughout social science disciplines in higher education. Therefore, the three major topics covered in this work - Women’s Immigration: Theories and Methodologies, Women as Migrant Workers, and Women as Refugees, Asylees, and Permanent Migrants - can be of interest across social science disciplines. Feminist theories can expand the curriculum on identity and gendered roles and norms in societies. The findings of this work advance knowledge of population movements across the globe. This work will also appeal to students and scholars wanting to expand their knowledge of women and migration, migration theories, gender violence, and women's empowerment. The topics and issues presented in this work will also assist the international community and lawyers concerned with global migration.
Keywords: gender, feminism, identity formation, international migration
Procedia PDF Downloads 140
259 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems
Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue
Abstract:
Recent technology advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, in which precoding is accomplished entirely by digital precoders, MIMO at mmWave is different because of the limitations of digital precoding. Fully digital precoding requires a large number of radio frequency (RF) chains, together with the corresponding signal mixers and analog-to-digital converters. As RF chains are costly and power-hungry, another alternative is needed. Although the hybrid precoding architecture, which combines a baseband precoder with an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders is still open. Depending on the mapping strategy from RF chains to the different antenna elements, there are two main categories of hybrid precoding architecture. The partially-connected (sub-array) structure reduces hardware complexity by using fewer phase shifters, at the cost of some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a problem of matrix factorization. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces the computational complexity. Furthermore, valuable design insights are provided when the proposed algorithm is used for simulation comparisons between the partially-connected and the fully-connected hybrid precoding structures.
Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure
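The paper's algorithm is not reproduced in the abstract, so the following NumPy sketch only illustrates the underlying idea of treating hybrid precoder design as matrix factorization solved by alternating minimization: a least-squares update of the baseband precoder alternates with a phase-only projection of the block-diagonal RF precoder of the partially-connected structure. The dimensions and the random target precoder are assumptions, and the projection step stands in for, rather than reproduces, the authors' iterative hard thresholding method.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nrf, Ns = 32, 4, 4          # transmit antennas, RF chains, data streams (hypothetical)
F_opt = rng.standard_normal((Nt, Ns)) + 1j * rng.standard_normal((Nt, Ns))  # stand-in optimal precoder

# Partially-connected structure: each RF chain drives a disjoint sub-array,
# so F_RF is block-diagonal with unit-modulus (phase-only) entries.
sub = Nt // Nrf
mask = np.zeros((Nt, Nrf), dtype=bool)
for k in range(Nrf):
    mask[k * sub:(k + 1) * sub, k] = True

F_RF = np.where(mask, np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf))), 0)

for _ in range(50):  # alternating minimization of ||F_opt - F_RF F_BB||_F
    F_BB, *_ = np.linalg.lstsq(F_RF, F_opt, rcond=None)   # baseband precoder: least-squares step
    residual_phase = np.angle(F_opt @ F_BB.conj().T)       # analog precoder: phase-only projection
    F_RF = np.where(mask, np.exp(1j * residual_phase), 0)

error = np.linalg.norm(F_opt - F_RF @ F_BB) / np.linalg.norm(F_opt)
print(f"Relative factorization error: {error:.3f}")
```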
Procedia PDF Downloads 321
258 Experimental Proof of Concept for Piezoelectric Flow Harvesting for In-Pipe Metering Systems
Authors: Sherif Keddis, Rafik Mitry, Norbert Schwesinger
Abstract:
Intelligent networking of devices has rapidly been gaining importance over the past years, and with recent advances in the fields of microcontrollers, integrated circuits, and wireless communication, low-power applications have emerged, enabling this trend even more. Connected devices provide a much larger database, thus enabling highly intelligent and accurate systems. Ensuring safe drinking water is one of the fields that require constant monitoring and can benefit from increased accuracy. Monitoring is mainly achieved either through complex measures, such as collecting samples from the points of use, or through metering systems typically distant from the points of use, which deliver less accurate assessments of the quality of water. Constant metering near the points of use is complicated due to their inaccessibility (e.g., buried water pipes, locked spaces), which makes system maintenance extremely difficult and often unviable. The research presented here attempts to overcome this challenge by providing these systems with enough energy through a flow harvester inside the pipe, thus eliminating the maintenance requirements in terms of battery replacements or containment of leakage resulting from wiring such systems. The proposed flow harvester exploits the piezoelectric properties of polyvinylidene difluoride (PVDF) films to convert turbulence-induced oscillations into electrical energy. It is intended to be used in standard water pipes with diameters between 0.5 and 1 inch. The working principle of the harvester uses a ring-shaped bluff body inside the pipe to induce pressure fluctuations. Additionally, the bluff body houses electronic components such as storage, circuitry, and an RF unit. Placing the piezoelectric films downstream of that bluff body causes them to oscillate, which generates electrical charge. The PVDF film is placed as a multilayered wrap fixed to the pipe wall, leaving the top part to oscillate freely inside the flow. The wrap, which allows for a larger active area, consists of two 30 µm thick and 12 mm wide PVDF layers alternating with two centered 6 µm thick and 8 mm wide aluminum foil electrodes. The length of the layers depends on the number of windings and is part of the investigation. Sealing the harvester against liquid penetration is achieved by wrapping it in a ring-shaped LDPE film and welding the open ends. The fabrication of the PVDF wraps is done by hand. After validating the working principle in a wind tunnel, experiments were conducted in water, placing the harvester inside a 1 inch pipe at a water velocity of 0.74 m/s. To find a suitable placement of the wrap inside the pipe, two forms of fixation were compared regarding their power output. Further investigations regarding the number of windings required for efficient transduction were made. Best results were achieved using a wrap with 3 windings of the active layers, which delivers a constant power output of 0.53 µW at a 2.3 MΩ load and an effective voltage of 1.1 V. Considering the extremely low power requirements of sensor applications, these initial results are promising. For further investigation and optimization, machine designs are currently being developed to automate the fabrication and reduce the tolerances of the prototypes.
Keywords: maintenance-free sensors, measurements at point of use, piezoelectric flow harvesting, universal micro generator, wireless metering systems
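As a quick consistency check of the reported output, and assuming the 1.1 V effective (RMS) voltage is measured across the 2.3 MΩ load, the delivered power follows directly from the resistive-load relation:

```latex
P = \frac{V_{\mathrm{eff}}^{2}}{R_{\mathrm{load}}}
  = \frac{(1.1\,\mathrm{V})^{2}}{2.3\,\mathrm{M\Omega}}
  \approx 0.53\,\mathrm{\mu W}
```

which matches the 0.53 µW figure quoted above.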
Procedia PDF Downloads 193
257 Microfluidic Plasmonic Device for the Sensitive Dual LSPR-Thermal Detection of the Cardiac Troponin Biomarker in Laminar Flow
Authors: Andreea Campu, Ilinica Muresan, Simona Cainap, Simion Astilean, Monica Focsan
Abstract:
Acute myocardial infarction (AMI) is the most severe cardiovascular disease, which has threatened human lives for decades, thus a continuous interest is directed towards the detection of cardiac biomarkers such as cardiac troponin I (cTnI) in order to predict risk and, implicitly, fulfill the early diagnosis requirements in AMI settings. Microfluidics is a major technology involved in the development of efficient sensing devices with real-time fast responses and on-site applicability. Microfluidic devices have gathered a lot of attention recently due to their advantageous features such as high sensitivity and specificity, miniaturization and portability, ease-of-use, low-cost, facile fabrication, and reduced sample manipulation. The integration of gold nanoparticles into the structure of microfluidic sensors has led to the development of highly effective detection systems, considering the unique properties of the metallic nanostructures, specifically the Localized Surface Plasmon Resonance (LSPR), which makes them highly sensitive to their microenvironment. In this scientific context, herein, we propose the implementation of a novel detection device, which successfully combines the efficiency of gold bipyramids (AuBPs) as signal transducers and thermal generators with the sample-driven advantages of the microfluidic channels into a miniaturized, portable, low-cost, specific, and sensitive test for the dual LSPR-thermographic cTnI detection. Specifically, AuBPs with longitudinal LSPR response at 830 nm were chemically synthesized using the seed-mediated growth approach and characterized in terms of optical and morphological properties. Further, the colloidal AuBPs were deposited onto pre-treated silanized glass substrates thus, a uniform nanoparticle coverage of the substrate was obtained and confirmed by extinction measurements showing a 43 nm blue-shift of the LSPR response as a consequence of the refractive index change. The as-obtained plasmonic substrate was then integrated into a microfluidic “Y”-shaped polydimethylsiloxane (PDMS) channel, fabricated using a Laser Cutter system. Both plasmonic and microfluidic elements were plasma treated in order to achieve a permanent bond. The as-developed microfluidic plasmonic chip was further coupled to an automated syringe pump system. The proposed biosensing protocol implicates the successive injection inside the microfluidic channel as follows: p-aminothiophenol and glutaraldehyde, to achieve a covalent bond between the metallic surface and cTnI antibody, anti-cTnI, as a recognition element, and target cTnI biomarker. The successful functionalization and capture of cTnI was monitored by LSPR detection thus, after each step, a red-shift of the optical response was recorded. Furthermore, as an innovative detection technique, thermal determinations were made after each injection by exposing the microfluidic plasmonic chip to 785 nm laser excitation, considering that the AuBPs exhibit high light-to-heat conversion performances. By the analysis of the thermographic images, thermal curves were obtained, showing a decrease in the thermal efficiency after the anti-cTnI-cTnI reaction was realized. Thus, we developed a microfluidic plasmonic chip able to operate as both LSPR and thermal sensor for the detection of the cardiac troponin I biomarker, leading thus to the progress of diagnostic devices.Keywords: gold nanobipyramids, microfluidic device, localized surface plasmon resonance detection, thermographic detection
Procedia PDF Downloads 129
256 Modelling the Antecedents of Supply Chain Enablers in Online Groceries Using Interpretive Structural Modelling and MICMAC Analysis
Authors: Rose Antony, Vivekanand B. Khanapuri, Karuna Jain
Abstract:
Online groceries have transformed the way supply chains are managed. These chains face numerous challenges in terms of product wastage, low margins, a long time to break even, and low market penetration, to mention a few. The e-grocery chains need to overcome these challenges in order to survive the competition. The purpose of this paper is to carry out a structural analysis of the enablers in e-grocery chains by applying Interpretive Structural Modeling (ISM) and MICMAC analysis in the Indian context. The research design is descriptive-explanatory in nature. The enablers have been identified from the literature and through semi-structured interviews conducted among managers with relevant experience in e-grocery supply chains. The experts have been contacted through professional/social networks by adopting a purposive snowball sampling technique. The interviews have been transcribed, and manual coding was carried out using the open and axial coding method. The key enablers are categorized into themes, and the contextual relationship between these and the performance measures is sought from industry veterans. Using ISM, the hierarchical model of the enablers is developed, and MICMAC analysis identifies the driving and dependence powers. Based on the driving and dependence powers, the enablers are categorized into four clusters, namely independent, autonomous, dependent, and linkage. The analysis found that information technology (IT) and manpower training act as key enablers towards reducing the lead time and enhancing the online service quality. Many of the enablers fall under the linkage cluster, viz. frequent software updating, branding, the number of delivery boys, order processing, benchmarking, product freshness, and customized applications for different stakeholders, marking these as critical in online food/grocery supply chains. Considering the perishable nature of the product being handled, the impact of the enablers on product quality is also identified. Hence, the study serves as a tool to identify and prioritize the vital enablers in the e-grocery supply chain. The work is perhaps unique in identifying the complex relationships among the supply chain enablers for fresh food in e-groceries and linking them to the performance measures. It contributes to the knowledge of supply chain management in general and e-retailing in particular. The approach focuses on fresh food supply chains in the Indian context and hence will be applicable in the context of developing economies, where supply chains are evolving.
Keywords: interpretive structural modelling (ISM), India, online grocery, retail operations, supply chain management
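The reachability matrix behind the ISM hierarchy is not given in the abstract, so the Python sketch below only illustrates the MICMAC step itself: driving power is the row sum and dependence power the column sum of a final reachability matrix, and the two are used to place each enabler in one of the four clusters. The example matrix, the enabler names, and the simple midpoint rule for splitting the quadrants are assumptions for illustration.

```python
import numpy as np

# Hypothetical final reachability matrix from the ISM step (1 = enabler i leads to enabler j)
enablers = ["IT", "Manpower training", "Branding", "Order processing", "Product freshness"]
R = np.array([
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 0, 1],
])

driving = R.sum(axis=1)      # driving power: how many enablers each one influences
dependence = R.sum(axis=0)   # dependence power: how many enablers influence it
mid = len(enablers) / 2      # simple midpoint used to split the MICMAC quadrants

for name, drv, dep in zip(enablers, driving, dependence):
    if drv > mid and dep <= mid:
        cluster = "independent (driver)"
    elif drv > mid and dep > mid:
        cluster = "linkage"
    elif drv <= mid and dep > mid:
        cluster = "dependent"
    else:
        cluster = "autonomous"
    print(f"{name:18s} driving={drv} dependence={dep} -> {cluster}")
```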
Procedia PDF Downloads 204
255 Impact of Farm Settlements' Facilities on Farm Patronage in Oyo State
Authors: Simon Ayorinde Okanlawon
Abstract:
The youths’ prevalent negative attitude to farming is partly due to amenities and facilities found in the urban centers at the expense of the rural areas. Hence, there is the need to create a befitting and conducive farm environment to retain farm employees and attract the youth to farming. This can be achieved through the provision of services and amenities that will ensure a comfortable standard of living higher than that obtained by a person of equal status in other forms of employment in urban centers, thereby eliminating the psychological feeling of lowered self-esteem associated with farming. This study assessed farm settlements’ facilities and patronage in Oyo State with a view to using the information to encourage sustainable agriculture in Nigeria. The study becomes necessary because of the dearth of information on the state of facilities in the farm settlements as it affects patronage of farm settlements for sustainable agriculture in the developing countries like Nigeria. The study utilized three purposely selected farm settlements- Ogbomoso, Fasola and Ilora out of the seven existing ones n Oyo State. One hundred percent (100%) of the 262 residential buildings in the three settlements were sampled, from where a household head from each of the buildings was randomly chosen. This translates to 262 household heads served with questionnaire out of which 47.7% of the questionnaires were recovered. Information obtained included respondents’ residency categories, residents’ status, residency years, housing types, types of holding and number of acres/holding. Others include the socio-economic attributes such as age, gender, income, educational status of respondents, assessment of existing facilities in the selected sites, the level of patronage of the farm settlements including perceived pull factors that can enhance farm settlements patronage. The study revealed that the residents were not satisfied with the adequacy and quality of all the facilities available in their settlements. Residents’ satisfaction with infrastructural facilities cannot be statistically linked with location across the study area. Findings suggested that residents of Ogbomoso farm settlements were not enjoying adequate provision of water supply and road as much as those from Ilora and Fasola. Patronage of the farm settlements were largely driven by farming activities and sale of farm produce. The respondents agreed that provision of farm resort centers, standard recreational and tourism facilities, vacation employment opportunities for youths, functional internet and communication networks among others are likely to boost the level of patronage of the farm settlements. The study concluded that improvement of the facilities both in quality and quantity will encourage the youths in going back to farming. It then recommends that maintenance of existing facilities and provision of more facilities such as resort centers be ensured.Keywords: encourage, farm settlements' facilities, Oyo state, patronage
Procedia PDF Downloads 229
254 Green Building Risks: Limits on Environmental and Health Quality Metrics for Contractors
Authors: Erica Cochran Hameen, Bobuchi Ken-Opurum, Mounica Guturu
Abstract:
The United States (U.S.) populace spends the majority of their time indoors, in spaces where building codes and voluntary sustainability standards provide clear Indoor Environmental Quality (IEQ) metrics. The existing sustainable building standards and codes are aimed at improving IEQ and the health of occupants and at reducing the negative impacts of buildings on the environment. While they address the post-occupancy stage of buildings, there are fewer standards for the pre-occupancy stage, thereby placing a large labor population in much less regulated environments. Construction personnel are often exposed to a variety of uncomfortable and unhealthy elements while on construction sites, primarily thermal, visual, acoustic, and air quality related. Construction site power generators, equipment, and machinery generate noise on average 9 decibels (dBA) above U.S. OSHA regulations, creating uncomfortable noise levels. Research has shown that frequent exposure to high noise levels leads to chronic physiological issues and increases noise-induced stress, yet beyond OSHA no other metric focuses directly on the impacts of noise on contractors’ well-being. Research has also associated natural light with higher productivity and attention span, and fewer cases of fatigue in construction workers. However, daylight is not always available, as construction workers often perform tasks in cramped spaces, dark areas, or at nighttime. In these instances, the use of artificial light is necessary, yet lighting standards for use during lengthy tasks and arduous activities are not specified. Additionally, ambient air, contaminants, and material off-gassing expelled at construction sites are among the causes of serious health effects in construction workers. Coupled with extreme hot and cold temperatures in different climate zones, health and productivity can be seriously compromised. This research evaluates the impact of existing green building metrics on construction and risk management by analyzing two codes and nine standards, including LEED, WELL, and BREEAM. These metrics were chosen based on their relevance to the U.S. construction industry. This research determined that less than 20% of the sustainability context within the standards and codes (texts) is related to the pre-occupancy building sector. The research also investigated the impact of construction personnel’s health and well-being on construction management through two surveys capturing project managers’ and on-site contractors’ perceptions of how their work environment affects productivity. To fully understand the risks of limited Environmental and Health Quality (EHQ) metrics for contractors, this research evaluated the effects of EHQ factors, such as inefficient lighting, on construction workers and investigated the correlation between various site coping strategies for comfort and productivity. Outcomes from this research are three-pronged. The first is fostering a discussion about the existing conditions of EHQ elements, i.e., thermal, lighting, ergonomic, acoustic, and air quality, on the construction labor force. The second identifies gaps in sustainability standards and codes during the pre-occupancy stage of building construction, from ground-breaking to substantial completion. The third identifies opportunities for improvements and mitigation strategies to improve EHQ, such as increased monitoring of the effects on contractors’ productivity and health and increased inclusion of the pre-occupancy stage in green building standards.
Keywords: construction contractors, health and well-being, environmental quality, risk management
Procedia PDF Downloads 132
253 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments
Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz
Abstract:
Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, in the north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence lengths have a crucial impact on the model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment’s characteristics. This aligns with the well-known “uniqueness of place” paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited attention in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle; later, limited systematic hyperparameter tuning using grid search suggested reducing it to 270 days. However, despite the significance of this hyperparameter in hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
Keywords: LSTMs, streamflow, hyperparameters, hydrology
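For reference, the two goodness-of-fit metrics quoted above (NSE and KGE) are typically computed as in the short NumPy sketch below; the observed and simulated hydrographs here are placeholders, not data from the Basque catchments.

```python
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 performs no better than the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs: np.ndarray, sim: np.ndarray) -> float:
    """Kling-Gupta efficiency (2009 formulation): correlation, variability, and bias terms."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()       # variability ratio
    beta = sim.mean() / obs.mean()      # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Placeholder hourly hydrographs; in practice these come from the gauging station and the LSTM
observed = np.array([2.0, 2.1, 2.4, 3.8, 7.5, 6.1, 4.2, 3.0, 2.6, 2.3])
simulated = np.array([2.1, 2.0, 2.6, 3.5, 7.0, 6.4, 4.5, 3.1, 2.5, 2.4])
print(f"NSE = {nse(observed, simulated):.2f}, KGE = {kge(observed, simulated):.2f}")
```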
Procedia PDF Downloads 69252 A Conceptual Framework of the Individual and Organizational Antecedents to Knowledge Sharing
Authors: Muhammad Abdul Basit Memon
Abstract:
The importance of organizational knowledge sharing and knowledge management has been documented in numerous research studies in the available literature, since knowledge sharing has been recognized as a founding pillar of superior organizational performance and a source of competitive advantage. Building on this, most successful organizations perceive knowledge management and knowledge sharing as a concern of high strategic importance and spend large amounts on the effective management and sharing of organizational knowledge. However, despite some very serious endeavors, many firms fail to capitalize on the benefits of knowledge sharing because they are unaware of the individual, interpersonal, organizational, and contextual factors that influence knowledge sharing; in short, the antecedents to knowledge sharing. The extant literature offers a range of antecedents to knowledge sharing mentioned across numerous research articles and studies. Some previous studies examined antecedents in the context of inter-organizational knowledge transfer; others focused on inter- and intra-organizational knowledge sharing, and still others investigated organizational factors. Some of the organizational antecedents to knowledge sharing relate to the characteristics of the knowledge being shared, e.g., the specificity and complexity of the knowledge to be transferred; others relate to specific organizational characteristics, e.g., the age and size of the organization, decentralization, and the absorptive capacity of the firm; and still others relate to the social relations and networks of organizations, such as social ties, trusting relationships, and value systems. Similarly, some researchers have highlighted only one aspect, such as organizational commitment, transformational leadership, knowledge-centred culture, learning and performance orientation, or social network-based relationships within organizations. The bulk of the existing research articles on antecedents to knowledge sharing has mainly discussed organizational or environmental factors affecting knowledge sharing. However, the focus later shifted towards the analysis of individual or personal determinants of engagement in knowledge sharing activities, such as personality traits, attitude, and self-efficacy. For example, employees’ goal orientation (i.e., learning orientation or performance orientation) is an important individual antecedent of knowledge sharing behaviour. Consistent with the existing literature, therefore, the antecedents to knowledge sharing can be classified as individual and organizational. This paper is an endeavor to discuss a conceptual framework of the individual and organizational antecedents to knowledge sharing in light of the available literature and empirical evidence. This model can not only aid familiarity with and comprehension of the subject matter by presenting a holistic view of the antecedents to knowledge sharing as discussed in the literature, but can also help business managers, and especially human resource managers, gain insights into the salient features of organizational knowledge sharing.
Moreover, this paper can provide a basis for research students and academicians to conduct both qualitative and quantitative research and to design a survey instrument on the topic of individual and organizational antecedents to knowledge sharing.Keywords: antecedents to knowledge sharing, knowledge management, individual and organizational, organizational knowledge sharing
Procedia PDF Downloads 324251 Knowledge Transfer through Entrepreneurship: From Research at the University to the Consolidation of a Spin-off Company
Authors: Milica Lilic, Marina Rosales Martínez
Abstract:
Academic research cannot be oblivious to social problems and needs, so projects with the capacity for transformation and impact should have the opportunity to go beyond University circles and bring benefit to society. Apart from patents and R&D research contracts, this opportunity can be achieved through entrepreneurship as one of the most direct tools to turn knowledge into a tangible product. Thus, as an example of good practice, this paper analyzes the case of an institutional entrepreneurship program carried out at the University of Seville, aimed at researchers interested in assessing the business opportunity of their research and expanding their knowledge of procedures for the commercialization of technologies developed in academic projects. The program is based on three pillars: training, teamwork sessions and networking. The training includes aspects such as product-client fit, technical-scientific and economic-financial feasibility of a spin-off, institutional organization and decision making, public and private fundraising, and making the spin-off visible in the business world (social networks, key contacts, corporate image and ethical principles). On the other hand, the teamwork sessions are guided by a mentor and aimed at identifying research results with potential, clarifying financial needs and procedures to obtain the necessary resources for the consolidation of the spin-off. This part of the program is considered crucial for participants to convert their academic findings into a business model. Finally, the networking part is oriented towards workshops on the digital transformation of a project, the accurate communication of the product or service a spin-off offers to society, and the development of transferable skills necessary for managing a business. This blended program culminates in a final stage where each team, in an elevator pitch format, presents its research turned into a business model to an experienced jury. The awarded teams receive starting capital for their enterprise and have the opportunity to formally consolidate their spin-off company at the University. The results of the program show that many researchers have little or no knowledge of entrepreneurship skills or of the different ways to turn their research results into a business model with a direct impact on society. Therefore, the described program has been used as an example to highlight the importance of knowledge transfer at the University and the role that this institution should have in providing the tools to promote entrepreneurship within it. Keeping in mind that the University is defined by three main activities (teaching, research, and knowledge transfer), it is safe to conclude that the latter, and entrepreneurship as an expression of it, is crucial for the other two to fulfil their purpose.Keywords: good practice, knowledge transfer, a spin-off company, university
Procedia PDF Downloads 146250 Celebrity Culture and Social Role of Celebrities in Türkiye during the 1990s: The Case of Türkiye, Newspaper, Radio, Televison (TGRT) Channel
Authors: Yelda Yenel, Orkut Acele
Abstract:
In a media-saturated world, celebrities have become ubiquitous figures, encountered both in public spaces and within the privacy of our homes, seamlessly integrating into daily life. From Alexander the Great to contemporary media personalities, the image of celebrity has persisted throughout history, manifesting in various forms and contexts. Over time, as the relationship between society and the market evolved, so too did the roles and behaviors of celebrities. These transformations offer insights into the cultural climate, revealing shifts in habits and worldviews. In Türkiye, the emergence of private television channels brought an influx of celebrities into everyday life, making them a pervasive part of daily routines. To understand modern celebrity culture, it is essential to examine the ideological functions of media within political, economic, and social contexts. Within this framework, celebrities serve as both reflections and creators of cultural values and, at times, act as intermediaries, offering insights into the society of their era. Starting its broadcasting life in 1992 with religious films and religious conversation programs, the Türkiye Newspaper, Radio, Television channel (TGRT) later changed its appearance, slogan, and the celebrities it featured in response to the political atmosphere. Celebrities played a critical role in the transition from the existing slogan 'Peace has come to the screen' to 'Watch and see what will happen'. Celebrities hold significant roles in society, and their images are produced and circulated by various actors, including media organizations and public relations teams. Understanding these dynamics is crucial for analyzing their influence and impact. This study aims to explore Turkish society in the 1990s, focusing on TGRT and its visual and discursive characteristics regarding celebrity figures such as Seda Sayan. The first section examines the historical development of celebrity culture and its transformations, guided by the conceptual framework of celebrity studies. The complex and interconnected image of celebrity, as introduced by post-structuralist approaches, plays a fundamental role in making sense of existing relationships. This section traces the existence and functions of celebrities from antiquity to the present day. The second section explores the economic, social, and cultural contexts of 1990s Türkiye, focusing on the media landscape and visibility that became prominent in the neoliberal era following the 1980s. This section also discusses the political factors underlying TGRT's transformation, such as the 1997 military memorandum. The third section analyzes TGRT as a case study, focusing on its significance as an Islamic television channel and the shifts in its public image, categorized into two distinct periods. The channel’s programming, which aligned with Islamic teachings, and the celebrities who featured prominently during these periods became the public face of both TGRT and the broader society. In particular, the transition to a more 'secular' format during TGRT's second phase is analyzed, focusing on changes in celebrity attire and program formats. This study reveals that celebrities are used as indicators of ideology, benefiting from this instrumentalization by enhancing their own fame and reflecting the prevailing cultural hegemony in society.Keywords: celebrity culture, media, neoliberalism, TGRT
Procedia PDF Downloads 30249 Integrating Radar Sensors with an Autonomous Vehicle Simulator for an Enhanced Smart Parking Management System
Authors: Mohamed Gazzeh, Bradley Null, Fethi Tlili, Hichem Besbes
Abstract:
The burgeoning global ownership of personal vehicles has placed a significant strain on urban infrastructure, notably parking facilities, leading to traffic congestion and environmental concerns. Effective parking management systems (PMS) are indispensable for optimizing urban traffic flow and reducing emissions. The most commonly deployed systems nowadays rely on computer vision technology. This paper explores the integration of radar sensors and simulation in the context of smart parking management. We concentrate on radar sensors due to their versatility and utility in automotive applications, which extend to PMS. Additionally, radar sensors play a crucial role in driver assistance systems and autonomous vehicle development. However, the resource-intensive nature of radar data collection for algorithm development and testing necessitates innovative solutions. Simulation, particularly the monoDrive simulator, an internal development tool used by NI, the Test and Measurement division of Emerson, offers a practical means to overcome this challenge. The primary objectives of this study encompass simulating radar sensors to generate a substantial dataset for algorithm development, testing, and, critically, assessing the transferability of models between simulated and real radar data. We focus on occupancy detection in parking as a practical use case, categorizing each parking space as vacant or occupied. The simulation approach using monoDrive enables algorithm validation and reliability assessment for virtual radar sensors. Various parking scenarios were meticulously designed, involving manual measurement of parking spot coordinates and orientations and the use of a TI AWR1843 radar. To create a diverse dataset, we generated 4950 scenarios, comprising a total of 455,400 parking spots. This extensive dataset encompasses radar configuration details, ground truth occupancy information, radar detections, and associated object attributes such as range, azimuth, elevation, radar cross-section, and velocity data. The paper also addresses the intricacies and challenges of real-world radar data collection, highlighting the advantages of simulation in producing radar data for parking lot applications. We developed classification models based on Support Vector Machines (SVM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), exclusively trained and evaluated on simulated data. Subsequently, we applied these models to real-world data, comparing their performance against the monoDrive dataset. The study demonstrates the feasibility of transferring models from a simulated environment to real-world applications, achieving an impressive accuracy score of 92% using only one radar sensor. This finding underscores the potential of radar sensors and simulation in the development of smart parking management systems, offering significant benefits for improving urban mobility and reducing environmental impact. The integration of radar sensors and simulation represents a promising avenue for enhancing smart parking management systems, addressing the challenges posed by the exponential growth in personal vehicle ownership. This research contributes valuable insights into the practicality of using simulated radar data in real-world applications and underscores the role of radar technology in advancing urban sustainability.Keywords: autonomous vehicle simulator, FMCW radar sensors, occupancy detection, smart parking management, transferability of models
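As a rough illustration of the sim-to-real transfer described above, the sketch below (an assumption, not the monoDrive or NI pipeline) trains an SVM occupancy classifier on simulated per-spot radar features and evaluates it on a separate "real" set; the feature definitions and distributions are invented for demonstration.

```python
# Minimal sketch: train an occupancy classifier on simulated per-spot radar
# features and evaluate it on real measurements. Feature set, distributions,
# and array names are assumptions for illustration only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def fake_spot_features(n, occupied):
    """Per-spot aggregate features: [number of detections, mean RCS,
    mean range, radial velocity spread]; distributions are illustrative."""
    base = np.array([8.0, 5.0, 10.0, 0.3]) if occupied else np.array([1.0, -5.0, 10.0, 0.05])
    return base + rng.normal(0, 1.0, size=(n, 4))

# "Simulated" training set and "real" test set (both synthetic here).
X_sim = np.vstack([fake_spot_features(2000, True), fake_spot_features(2000, False)])
y_sim = np.array([1] * 2000 + [0] * 2000)
X_real = np.vstack([fake_spot_features(200, True), fake_spot_features(200, False)])
y_real = np.array([1] * 200 + [0] * 200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_sim, y_sim)                       # train only on simulated data
print("real-data accuracy:", accuracy_score(y_real, clf.predict(X_real)))
```

In practice, DBSCAN could first cluster raw radar detections into per-spot groups before such features are extracted.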
Procedia PDF Downloads 81248 Emotional State and Cognitive Workload during a Flight Simulation: Heart Rate Study
Authors: Damien Mouratille, Antonio R. Hidalgo-Muñoz, Nadine Matton, Yves Rouillard, Mickael Causse, Radouane El Yagoubi
Abstract:
Background: Monitoring the physiological activity related to mental workload (MW) in pilots will be useful for improving aviation safety by anticipating human performance degradation. The electrocardiogram (ECG) can reveal MW fluctuations due to cognitive workload and/or emotional state, since this measure exhibits autonomic nervous system modulations. Arguably, heart rate (HR) is one of its most intuitive and reliable parameters. It would be particularly interesting to analyze the interaction between cognitive requirements and emotion in ecological settings such as a flight simulator. This study aims to explore by means of HR the relation between cognitive demands and emotional activation. Presumably, the effects of cognitive and emotional overload are not necessarily cumulative. Methodology: Eight healthy volunteers holding a Private Pilot License were recruited (male; 20.8±3.2 years). The ECG signal was recorded throughout the experiment by placing two electrodes on the clavicle and left pectoral of the participants. The HR was computed within 4-minute segments. NASA-TLX and Big Five inventories were used to assess subjective workload and to consider the influence of individual personality differences. The experiment consisted of completing two dual-tasks of approximately 30 minutes each in an AL50 flight simulator. Each dual-task required the simultaneous accomplishment of both a pre-established flight plan and an additional task based on target stimulus discrimination inserted between Air Traffic Control instructions. This secondary task allowed us to vary the cognitive workload from low (LC) to high (HC) levels by combining auditory and visual numerical stimuli that had to be responded to when they met specific criteria. Regarding the emotional condition, the two dual-tasks were designed to ensure analogous difficulty in terms of the cognitive demands solicited. The former was performed by the pilot alone, i.e., the Low Arousal (LA) condition. In contrast, the latter generated High Arousal (HA), since the pilot was supervised by two evaluators, filmed, and involved in a mock competition with the rest of the participants. Results: Performance on the secondary task showed significantly faster reaction times (RT) for the HA compared to the LA condition (p=.003). Moreover, faster RT was found for the LC compared to the HC condition (p < .001). No interaction was found. Concerning the HR measure, despite the lack of main effects, an interaction between emotion and cognition was evidenced (p=.028). Post hoc analysis showed lower HR for the HA compared to the LA condition only for LC (p=.049). Conclusion: The control of an aircraft is a very complex task involving strong cognitive demands and depends on the emotional state of pilots. According to the behavioral data, the experimental setup successfully generated distinct emotional and cognitive levels. As suggested by the interaction found in the HR measure, these two factors do not seem to have a cumulative impact on the sympathetic nervous system. Apparently, low cognitive workload makes pilots more sensitive to emotional variations. These results hint at the independence of data processing and emotional regulation. Further physiological data are necessary to confirm and disentangle this relation. This procedure may be useful for objectively monitoring pilots’ mental workload.Keywords: cognitive demands, emotion, flight simulator, heart rate, mental workload
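For readers unfamiliar with this type of HR computation, the sketch below is a generic illustration (not the study's processing chain) of deriving mean heart rate in 4-minute segments from detected R-peaks; the synthetic signal, sampling rate, and peak-detection parameters are assumptions.

```python
# Minimal sketch: mean heart rate per 4-minute segment from R-peak timestamps.
# The crude synthetic "ECG" and detection settings are illustrative only;
# real ECG requires filtering and a proper R-peak detector.
import numpy as np
from scipy.signal import find_peaks

fs = 250                                   # assumed ECG sampling rate (Hz)
t = np.arange(0, 30 * 60, 1 / fs)          # 30 minutes of signal
ecg = np.sin(2 * np.pi * 1.2 * t) ** 15    # peaky waveform at ~72 bpm

# Detect R-peak candidates.
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
peak_times = peaks / fs                    # seconds

segment = 4 * 60                           # 4-minute segments
for start in np.arange(0, t[-1], segment):
    in_seg = peak_times[(peak_times >= start) & (peak_times < start + segment)]
    if len(in_seg) > 1:
        rr = np.diff(in_seg)               # RR intervals in seconds
        print(f"{start/60:4.0f}-{(start+segment)/60:.0f} min: "
              f"HR = {60 / rr.mean():.1f} bpm")
```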
Procedia PDF Downloads 275247 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs study the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, such as major disasters, changes in policy, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of the computational load among the MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
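The following sketch (assuming mpi4py; it is not the authors' implementation) illustrates the general DMP pattern described above: agents are block-partitioned across MPI ranks and per-agent messages are replaced by a single aggregated exchange per step, analogous to the local sales outlets introduced in each MPI process. The agent dynamics and numbers are hypothetical.

```python
# Minimal sketch: agents partitioned across MPI ranks; each rank aggregates
# local demand at a "local outlet" and exchanges only the aggregate, so one
# collective call replaces millions of point-to-point messages.
# Run, for example, with: mpirun -n 4 python abem_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_agents_total = 1_000_000
n_local = n_agents_total // size           # block partition of agents
rng = np.random.default_rng(rank)

income = rng.gamma(shape=2.0, scale=1000.0, size=n_local)

for step in range(3):
    # Each agent spends a random share of income (local computation only).
    demand = income * rng.uniform(0.5, 0.9, size=n_local)
    local_outlet_demand = demand.sum()     # aggregate at the local "outlet"

    # One collective per step instead of per-agent messages.
    global_demand = comm.allreduce(local_outlet_demand, op=MPI.SUM)

    # Purely illustrative macro feedback: aggregate demand trims next-period income.
    price_level = global_demand / n_agents_total
    income = income * (1.0 + 0.01 * rng.standard_normal(n_local)) / (1.0 + 0.0001 * price_level)

    if rank == 0:
        print(f"step {step}: aggregate demand = {global_demand:,.0f}")
```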
Procedia PDF Downloads 128246 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading
Authors: Jerome Joshi
Abstract:
The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. The Stochastic Pi Calculus model is mainly used for biological applications; however, the features of this model promote its use in financial markets, most prominently in high frequency trading. The trading system can be broadly classified into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where the action of the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), poses a difficulty for modelling. It involves participants seeking the advantage of complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, the trader must be at the top of the order book quite frequently by executing or processing multiple trades simultaneously. This would require highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is not best for the trader either, since it was the reason for the outbreak of the ‘Hot-Potato Effect,’ which in turn demands a better and more efficient model. The model should be flexible and have diverse applications. Therefore, a model with applications in a similar field characterized by such difficulty should be chosen. It should also be flexible in its simulation so that it can be extended and adapted for future research, and equipped with tools suited to the field of finance. In this case, the Stochastic Pi Calculus model seems an ideal fit for financial applications, owing to its track record in the field of biology. It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithm, provided its application is further extended. This model would focus on solving the problem which led to the ‘Flash Crash,’ namely the ‘Hot-Potato Effect.’ The model consists of small sub-systems, which can be integrated to form a large system. It is designed in such a way that the behavior of ‘noise traders’ is considered a random process, or noise, in the system. While modelling, to gain a better understanding of the problem, a broader picture is considered, encompassing the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the ‘Flash Crash,’ the ‘Hot-Potato Effect,’ the evaluation of orders, and time delay in further detail. For the future, there is a need to focus on the calibration of the modules so that they interact properly with one another. This model, with its application extended, would provide a basis for further research in the fields of finance and computing.Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus
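As an illustrative assumption of how such a model can be simulated (rather than an established stochastic pi calculus library), the sketch below treats market makers, fundamental traders, and noise traders as concurrent processes whose interactions on named channels fire at exponential rates, scheduled Gillespie-style; all channel names, rates, and price effects are hypothetical.

```python
# Minimal sketch: a stochastic-pi-calculus-style market as a set of channels
# whose interactions fire at exponential rates; a Gillespie-style scheduler
# picks the next interaction. Everything here is a toy illustration.
import random

random.seed(1)

# channel -> (rate per unit time, effect on the shared order book state)
channels = {
    "mm_quote":    (50.0, lambda s: s.update(spread=max(0.01, s["spread"] * 0.99))),
    "fund_order":  (5.0,  lambda s: s.update(mid=s["mid"] + 0.10)),
    "noise_order": (20.0, lambda s: s.update(mid=s["mid"] + random.choice([-0.05, 0.05]))),
    "hot_potato":  (2.0,  lambda s: s.update(mid=s["mid"] - 0.20, spread=s["spread"] * 1.5)),
}

state = {"mid": 100.0, "spread": 0.10}
t, horizon = 0.0, 1.0                       # simulated time units

while t < horizon:
    total_rate = sum(rate for rate, _ in channels.values())
    t += random.expovariate(total_rate)     # time to the next interaction
    # Pick which channel fires, proportionally to its rate.
    r, acc = random.uniform(0, total_rate), 0.0
    for name, (rate, effect) in channels.items():
        acc += rate
        if r <= acc:
            effect(state)
            break

print(f"final mid price {state['mid']:.2f}, spread {state['spread']:.3f}")
```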
Procedia PDF Downloads 77245 Risks for Cyanobacteria Harmful Algal Blooms in Georgia Piedmont Waterbodies Due to Land Management and Climate Interactions
Authors: Sam Weber, Deepak Mishra, Susan Wilde, Elizabeth Kramer
Abstract:
The frequency and severity of cyanobacteria harmful algal blooms (CyanoHABs) have been increasing over time, with point and non-point source eutrophication and shifting climate paradigms being blamed as the primary culprits. Excessive nutrients, warm temperatures, quiescent water, and heavy and less regular rainfall create more conducive environments for CyanoHABs. CyanoHABs have the potential to produce a spectrum of toxins that cause gastrointestinal stress, organ failure, and even death in humans and animals. To promote enhanced, proactive CyanoHAB management, risk modeling using geospatial tools can act as a predictive mechanism to supplement current CyanoHAB monitoring, management, and mitigation efforts. The risk maps would empower water managers to focus their efforts on high-risk waterbodies in an attempt to prevent CyanoHABs before they occur and/or to observe those waterbodies more diligently. For this research, exploratory spatial data analysis techniques were used to identify the strongest predictors of CyanoHABs based on remote sensing-derived cyanobacteria cell density values for 771 waterbodies in the Georgia Piedmont and the landscape characteristics of their watersheds. In-situ datasets for cyanobacteria cell density, nutrients, temperature, and rainfall patterns are not widely available, so free gridded geospatial datasets were used as proxy variables for assessing CyanoHAB risk. For example, the percent of a watershed that is agriculture was used as a proxy for nutrient loading, and the summer precipitation within a watershed was used as a proxy for water quiescence. Cyanobacteria cell density values were calculated using atmospherically corrected images from the European Space Agency’s Sentinel-2A satellite and its multispectral instrument sensor at a 10-meter ground resolution. Seventeen explanatory variables were calculated for each watershed utilizing the multi-petabyte geospatial catalogs available within the Google Earth Engine cloud computing interface. The seventeen variables were then used in a multiple linear regression model, and the strongest predictors of cyanobacteria cell density were selected for the final regression model. The seventeen explanatory variables included land cover composition, winter and summer temperature and precipitation data, topographic derivatives, vegetation index anomalies, and soil characteristics. Watershed maximum summer temperature, percent agriculture, percent forest, percent impervious surface, and waterbody area emerged as the strongest predictors of cyanobacteria cell density, with an adjusted R-squared value of 0.31 and a p-value of approximately 0. The final regression equation was used to create a normalized cyanobacteria cell density index, and a Jenks natural breaks classification was used to assign waterbodies designations of low, medium, or high risk. Of the 771 waterbodies, 24.38% were low risk, 37.35% were medium risk, and 38.26% were high risk. This study showed that there are significant relationships between cyanobacteria cell density and free geospatial datasets representing summer maximum temperatures, nutrient loading associated with land use and land cover, and waterbody area. This data analytics approach to CyanoHAB risk assessment corroborated the literature-established environmental triggers for CyanoHABs and presents a novel approach for CyanoHAB risk mapping in waterbodies across the greater southeastern United States.Keywords: cyanobacteria, land use/land cover, remote sensing, risk mapping
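A simplified sketch of the risk-index workflow is given below; it is not the study's code, the data are synthetic, and tercile breaks stand in for the Jenks natural breaks classification used in the study.

```python
# Minimal sketch: regress cyanobacteria cell density on watershed predictors,
# normalize predictions into a 0-1 risk index, and bin waterbodies into
# low/medium/high classes. All data are synthetic and illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 771
df = pd.DataFrame({
    "max_summer_temp": rng.normal(32, 2, n),     # degrees C
    "pct_agriculture": rng.uniform(0, 60, n),    # % of watershed
    "pct_forest": rng.uniform(10, 90, n),
    "pct_impervious": rng.uniform(0, 30, n),
    "waterbody_area": rng.lognormal(2, 1, n),    # hectares
})
# Synthetic "observed" cell density loosely tied to the predictors.
df["cell_density"] = (5e3 * df.max_summer_temp + 2e4 * df.pct_agriculture
                      - 5e3 * df.pct_forest + rng.normal(0, 5e5, n))

X = df[["max_summer_temp", "pct_agriculture", "pct_forest",
        "pct_impervious", "waterbody_area"]]
model = LinearRegression().fit(X, df.cell_density)
pred = model.predict(X)

# Normalized risk index in [0, 1], then three classes (tercile stand-in for Jenks).
df["risk_index"] = (pred - pred.min()) / (pred.max() - pred.min())
df["risk_class"] = pd.qcut(df.risk_index, 3, labels=["low", "medium", "high"])
print(df.risk_class.value_counts())
```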
Procedia PDF Downloads 211244 Vertebral Artery Dissection Complicating Pregnancy and Puerperium: Case Report and Review of the Literature
Authors: N. Reza Pour, S. Chuah, T. Vo
Abstract:
Background: Vertebral artery dissection (VAD) is a rare complication of pregnancy. It can occur spontaneously or following a traumatic event. The pathogenesis is unclear. Predisposing factors include chronic hypertension, Marfan’s syndrome, fibromuscular dysplasia, vasculitis and cystic medial necrosis. Physiological changes of pregnancy have also been proposed as potential mechanisms of injury to the vessel wall. The clinical presentation varies; it can present as a headache, neck pain, diplopia, a transient ischaemic attack, or an ischaemic stroke. Isolated cases of VAD in pregnancy and the puerperium have been reported in the literature. In one case, a posterior circulation stroke resulted from bilateral VAD, and labour was induced at 37 weeks gestation for preeclampsia. Another patient at 38 weeks had severe neck pain that persisted after induction for elevated blood pressure, and arteriography showed a right VAD postpartum. A single case of lethal VAD in pregnancy with subsequent massive subarachnoid haemorrhage, confirmed at autopsy, has been reported. Case Presentation: We report two cases of vertebral artery dissection in pregnancy. The first patient was a 32-year-old primigravida who presented at the 38th week of pregnancy with the onset of early labour and a blood pressure (BP) of 130/70 on arrival. After 2 hours, the patient developed a severe headache with blurred vision, and her BP was 238/120. Despite treatment with an intravenous antihypertensive, she had an eclamptic fit. Magnesium sulfate was started, and an emergency Caesarean section was performed under general anaesthesia. On the second day after the operation, she developed left-sided neck pain. Magnetic resonance imaging (MRI) angiography confirmed a short-segment left vertebral artery dissection at the level of C3. The patient was treated with aspirin and remained stable without any neurological deficit. The second patient was a 33-year-old primigravida who was admitted to the hospital at 36 weeks gestation with a BP of 155/105, constant headache, and visual disturbances. She was medicated with an oral antihypertensive agent. On day 4, she complained of right-sided neck pain. An MRI angiogram revealed a short-segment dissection of the right vertebral artery at the C2-3 level. The pregnancy was terminated on the same day by emergency Caesarean section, and anticoagulation was started subsequently. Post-operative recovery was complicated by a rectus sheath haematoma requiring evacuation. She was discharged home on aspirin without any neurological sequelae. Conclusion: Because of collateral circulation, unilateral vertebral artery dissections may go unrecognized and may be more common than suspected. The outcome for most patients is benign, reflecting the adequacy of the collateral circulation in young patients. Spontaneous VAD is usually treated with anticoagulation or antiplatelet therapy for a minimum of 3-6 months to prevent future ischaemic events, allowing the dissection to heal on its own. We had two cases of VAD in the context of hypertensive disorders of pregnancy with acceptable outcomes. A high level of vigilance is required, particularly with preeclamptic patients presenting with head or neck pain, to allow an early diagnosis. This is because, as we hypothesize, early and aggressive management of vertebral artery dissection may potentially prevent further complications.Keywords: eclampsia, preeclampsia, pregnancy, vertebral artery dissection
Procedia PDF Downloads 279243 The International Fight against the Financing of Terrorism: Analysis of the Anti-Money Laundering and Combating Financing of Terrorism Regime
Authors: Loukou Amoin Marie Djedri
Abstract:
Financing is important for all terrorists – from the largest organizations in control of territories to the smallest groups – not only for spreading fear through attacks but also for financing the expansion of terrorist dogmas. These organizations pose serious threats to the international community. The disruption of terrorist financing aims to create a hostile environment for the growth of terrorism and to considerably limit terrorist groups’ capacities. The World Bank (WB), together with the International Monetary Fund (IMF), decided to include in their scope the fight against money laundering and the financing of terrorism, in order to assist Member States in protecting their internal financial systems from terrorist use and abuse and in reinforcing their legal systems. To do so, they have adopted the Anti-Money Laundering/Combating the Financing of Terrorism (AML/CFT) standards set up by the Financial Action Task Force. This set of standards, recognized as the international standard for anti-money laundering and combating the financing of terrorism, has to be implemented by Member States in order to strengthen their judicial systems and relevant national institutions. However, we noted that, to date, some Member States still have significant AML/CFT deficiencies, which can constitute serious threats not only to the country’s economic stability but also to the global financial system. In addition, studies have stressed that countries implement repressive measures more than preventive measures, which could be an important weakness in a state’s security system. Furthermore, we noticed that the AML/CFT standards evolve slowly, while techniques used by terrorist networks keep developing. The goal of the study is to show how to enhance global AML/CFT compliance through the work of the IMF and the WB, to help Member States consolidate their financial systems. To encourage and ensure the effectiveness of these standards, a methodology for assessing compliance with the AML/CFT standards has been created to follow up on their concrete implementation and to provide accurate technical assistance to countries in need. A risk-based approach has also been adopted as a key component of the implementation of the AML/CFT standards, with the aim of strengthening the efficiency of the standards. However, we noted that the assessment is not efficient in enhancing AML/CFT measures because it seems to lack adaptation to each country’s situation. In other words, internal and external factors are not sufficiently taken into account in a country assessment program. The purpose of this paper is to analyze the AML/CFT regime in the fight against the financing of terrorism and to find lasting solutions to achieve global AML/CFT compliance. The work of all the organizations involved in this fight is imperative to protect the financial network and to lead to the disintegration of terrorist groups in the future.Keywords: AML/CFT standards, financing of terrorism, international financial institutions, risk-based approach
Procedia PDF Downloads 275242 Seeking Compatibility between Green Infrastructure and Recentralization: The Case of Greater Toronto Area
Authors: Sara Saboonian, Pierre Filion
Abstract:
There are two distinct planning approaches attempting to transform the North American suburb so as to reduce its adverse environmental impacts. The first, the recentralization approach, proposes intensification, multi-functionality, and greater reliance on public transit and walking. It thus offers an alternative to the prevailing low density, spatial specialization, and automobile dependence of the North American suburb. The second approach concentrates instead on the provision of green infrastructure, which relies on natural systems rather than highly engineered solutions to deal with the infrastructure needs of suburban areas. There are tensions between these two approaches, as recentralization generally overlooks green infrastructure, which can be space-consuming (as in the case of water retention systems) and thus conflicts with the intensification goals of recentralization. The research investigates three Canadian planned suburban centres in the Greater Toronto Area, where recentralization is the current planning practice, despite rising awareness of the benefits of green infrastructure. Methods include a review of the literature on green infrastructure planning, a critical analysis of the Ontario provincial plans for recentralization, a survey of residents’ preferences regarding alternative suburban development models, and interviews with officials who deal with the local planning of the three centres. The case studies expose the difficulties in creating planned suburban centres that accommodate green infrastructure while adhering to recentralization principles. Until now, planners have mostly focused on recentralization at the expense of green infrastructure. In this context, the frequent lack of compatibility between recentralization and the space requirements of green infrastructure explains the limited presence of such infrastructure in planned suburban centres. Finally, while much attention has been given in the planning discourse to the economic and lifestyle benefits of recentralization, much less has been made of the wide range of advantages of green infrastructure, which explains the limited public mobilization over the development of green infrastructure networks. The paper will concentrate on ways of combining recentralization with green infrastructure strategies and identify the aspects of the two approaches that are most compatible with each other. The outcome of such blending will marry high-density, public-transit-oriented developments, which generate walkability and street-level animation, with the presence of green space, naturalized settings, and reliance on renewable energy. The paper will advance a planning framework that will fuse green infrastructure with recentralization, thus ensuring the achievement of higher density and reduced reliance on the car along with the provision of critical ecosystem services throughout cities. This will support and enhance the objectives of both green infrastructure and recentralization.Keywords: environmental-based planning, green infrastructure, multi-functionality, recentralization
Procedia PDF Downloads 131241 Automated Facial Symmetry Assessment for Orthognathic Surgery: Utilizing 3D Contour Mapping and Hyperdimensional Computing-Based Machine Learning
Authors: Wen-Chung Chiang, Lun-Jou Lo, Hsiu-Hsia Lin
Abstract:
This study aimed to improve the evaluation of facial symmetry, which is crucial for planning and assessing outcomes in orthognathic surgery (OGS). Facial symmetry plays a key role in both aesthetic and functional aspects of OGS, making its accurate evaluation essential for optimal surgical results. To address the limitations of traditional methods, a different approach was developed, combining three-dimensional (3D) facial contour mapping with hyperdimensional (HD) computing to enhance precision and efficiency in symmetry assessments. The study was conducted at Chang Gung Memorial Hospital, where data were collected from 2018 to 2023 using 3D cone beam computed tomography (CBCT), a highly detailed imaging technique. A large and comprehensive dataset was compiled, consisting of 150 normal individuals and 2,800 patients, totaling 5,750 preoperative and postoperative facial images. These data were critical for training a machine learning model designed to analyze and quantify facial symmetry. The machine learning model was trained to process 3D contour data from the CBCT images, with HD computing employed to power the facial symmetry quantification system. This combination of technologies allowed for an objective and detailed analysis of facial features, surpassing the accuracy and reliability of traditional symmetry assessments, which often rely on subjective visual evaluations by clinicians. In addition to developing the system, the researchers conducted a retrospective review of 3D CBCT data from 300 patients who had undergone OGS. The patients’ facial images were analyzed both before and after surgery to assess the clinical utility of the proposed system. The results showed that the facial symmetry algorithm achieved an overall accuracy of 82.5%, indicating its robustness in real-world clinical applications. Postoperative analysis revealed a significant improvement in facial symmetry, with an average score increase of 51%. The mean symmetry score rose from 2.53 preoperatively to 3.89 postoperatively, demonstrating the system's effectiveness in quantifying improvements after OGS. These results underscore the system's potential for providing valuable feedback to surgeons and aiding in the refinement of surgical techniques. The study also led to the development of a web-based system that automates facial symmetry assessment. This system integrates HD computing and 3D contour mapping into a user-friendly platform that allows for rapid and accurate evaluations. Clinicians can easily access this system to perform detailed symmetry assessments, making it a practical tool for clinical settings. Additionally, the system facilitates better communication between clinicians and patients by providing objective, easy-to-understand symmetry scores, which can help patients visualize the expected outcomes of their surgery. In conclusion, this study introduced a valuable and highly effective approach to facial symmetry evaluation in OGS, combining 3D contour mapping, HD computing, and machine learning. The resulting system achieved high accuracy and offers a streamlined, automated solution for clinical use. The development of the web-based platform further enhances its practicality, making it a valuable tool for improving surgical outcomes and patient satisfaction in orthognathic surgery.Keywords: facial symmetry, orthognathic surgery, facial contour mapping, hyperdimensional computing
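To make the HD computing component more concrete, the sketch below shows one generic record-based hyperdimensional encoding and prototype-matching scheme (an assumption for illustration, not the authors' system): contour-derived asymmetry features are quantized, bound to feature-identity hypervectors, bundled into class prototypes, and new cases are scored by cosine similarity. All dimensions, features, and labels are synthetic.

```python
# Minimal sketch: record-based HD encoding of contour asymmetry features and
# classification by similarity to class prototypes. Everything is illustrative.
import numpy as np

rng = np.random.default_rng(7)
D, n_features, n_levels = 10_000, 50, 10

id_hv = rng.choice([-1, 1], size=(n_features, D))      # one ID vector per feature
level_hv = rng.choice([-1, 1], size=(n_levels, D))     # one vector per quantized level

def encode(features, lo=0.0, hi=5.0):
    """Bind each feature ID with its quantized level hypervector, bundle all
    bindings, then binarize to a bipolar hypervector."""
    levels = np.clip(((features - lo) / (hi - lo) * (n_levels - 1)).astype(int),
                     0, n_levels - 1)
    bundled = np.sum(id_hv * level_hv[levels], axis=0)
    return np.where(bundled >= 0, 1, -1)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def sample(n, asym):
    # |left - right| contour distance deviations: larger when asymmetric.
    mean = 3.0 if asym else 0.5
    return np.abs(rng.normal(mean, 0.5, size=(n, n_features)))

prototypes = {}
for label, asym in (("symmetric", False), ("asymmetric", True)):
    bundled = np.sum([encode(x) for x in sample(200, asym)], axis=0)
    prototypes[label] = np.where(bundled >= 0, 1, -1)

test = sample(1, True)[0]                   # a new, asymmetric example
scores = {k: cosine(encode(test), v) for k, v in prototypes.items()}
print(max(scores, key=scores.get), scores)
```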
Procedia PDF Downloads 27