Search results for: frequency modulated continuous wave radar
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7404

654 Human Lens Metabolome: A Combined LC-MS and NMR Study

Authors: Vadim V. Yanshole, Lyudmila V. Yanshole, Alexey S. Kiryutin, Timofey D. Verkhovod, Yuri P. Tsentalovich

Abstract:

Cataract, or clouding of the eye lens, is the leading cause of vision impairment in the world. The lens tissue has a very specific structure: it has no vascular system, and the lens proteins – crystallins – do not turn over throughout the lifespan. The protection of lens proteins is provided by metabolites that diffuse into the lens from the aqueous humor or are synthesized in the lens epithelial layer. Therefore, studying changes in the metabolite composition of a cataractous lens as compared to a normal lens may elucidate possible mechanisms of cataract formation. Quantitative metabolomic profiles of normal and cataractous human lenses were obtained with the combined use of high-frequency nuclear magnetic resonance (NMR) and ion-pairing high-performance liquid chromatography with high-resolution mass-spectrometric detection (LC-MS). The quantitative content of more than fifty metabolites has been determined in this work for normal aged and cataractous human lenses. The most abundant metabolites in the normal lens are myo-inositol, lactate, creatine, glutathione, glutamate, and glucose. For the majority of metabolites, the levels in the lens cortex and nucleus are similar, with a few exceptions among antioxidants and UV filters: the concentrations of glutathione, ascorbate and NAD decrease in the lens nucleus as compared to the cortex, while the levels of the secondary UV filters formed from primary UV filters in redox processes increase. This confirms that the lens core is metabolically inert and that metabolic activity in the lens nucleus is mostly restricted to protection from the oxidative stress caused by UV irradiation, spontaneous UV filter decomposition, or other factors. It was found that the metabolomic compositions of normal and age-matched cataractous human lenses differ significantly. 
The content of the most important metabolites – antioxidants, UV filters, and osmolytes – in the cataractous nucleus is at least tenfold lower than in the normal nucleus. One may suppose that the majority of these metabolites are synthesized in the lens epithelial layer, and that age-related cataractogenesis might originate from dysfunction of the lens epithelial cells. Comprehensive quantitative metabolic profiles of the human eye lens have been acquired for the first time. The obtained data can be used to analyze changes in the lens chemical composition occurring with age and with cataract development.

Keywords: cataract, lens, NMR, LC-MS, metabolome

Procedia PDF Downloads 298
653 Analytical Performance of Cobas C 8000 Analyzer Based on Sigma Metrics

Authors: Sairi Satari

Abstract:

Introduction: Six sigma is a metric that quantifies the performance of a process as a rate of defects per million opportunities. Sigma methodology can be applied in the chemical pathology laboratory to evaluate process performance, providing evidence for process improvement in the quality assurance program. In the laboratory, these methods have been used to improve the timeliness of troubleshooting, reduce the cost and frequency of quality control, and minimize pre- and post-analytical errors. Aim: The aim of this study is to evaluate the sigma values of the Cobas 8000 analyzer based on the minimum requirement of the specification. Methodology: Twenty-one analytes were chosen for this study: alanine aminotransferase (ALT), albumin, alkaline phosphatase (ALP), amylase, aspartate transaminase (AST), total bilirubin, calcium, chloride, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, lactate dehydrogenase (LDH), magnesium, potassium, protein, sodium, triglyceride, uric acid and urea. Total allowable error was obtained from the Clinical Laboratory Improvement Amendments (CLIA). Bias was calculated from the end-of-cycle report of the Royal College of Pathologists of Australasia (RCPA) cycle from July to December 2016, and the coefficient of variation (CV) from six months of internal quality control (IQC) data. Sigma was calculated using the formula: Sigma = (Total Error - Bias) / CV. Analytical performance was graded on the sigma value: sigma > 6 is world class, sigma > 5 is excellent, sigma > 4 is good, sigma > 3 is satisfactory, and sigma < 3 is poor performance. Results: Based on the calculation, 76% of analytes (16 of 21) were world class (ALT, albumin, ALP, amylase, AST, total bilirubin, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, LDH, magnesium, potassium, triglyceride and uric acid), 14% (calcium, protein and urea) were excellent, and 10% (chloride and sodium) require more frequent IQC per day. 
Conclusion: Based on this study, IQC should be performed more frequently only for chloride and sodium to ensure accurate and reliable analysis for patient management.
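The sigma calculation and grading described in this abstract can be sketched in a few lines; the TEa, bias, and CV values below are illustrative inputs, not the study's data.

```python
# Sigma-metric calculation: Sigma = (Total Error - Bias) / CV,
# then graded on the usual sigma scale. Inputs are in percent.

def sigma_metric(total_error, bias, cv):
    """Compute the sigma value from allowable total error, bias and CV."""
    return (total_error - bias) / cv

def grade(sigma):
    if sigma >= 6: return "world class"
    if sigma >= 5: return "excellent"
    if sigma >= 4: return "good"
    if sigma >= 3: return "satisfactory"
    return "poor"

# Hypothetical analyte: CLIA TEa 10%, observed bias 1.2%, IQC CV 1.4%
s = sigma_metric(10.0, 1.2, 1.4)
print(round(s, 2), grade(s))  # 6.29 world class
```

A low sigma flags analytes (such as chloride and sodium here) that need tighter, more frequent IQC rules.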

Keywords: sigma metrics, analytical performance, total error, bias

Procedia PDF Downloads 154
652 Elastoplastic Modified Stillinger-Weber Potential Based Discretized Virtual Internal Bond and Its Application to Dynamic Fracture Propagation

Authors: Dina Kon Mushid, Kabutakapua Kakanda, Dibu Dave Mbako

Abstract:

The failure of materials usually involves elastoplastic deformation and fracturing. Continuum mechanics can deal effectively with plastic deformation by using a yield function and a flow rule, but it has limitations in dealing with fracture, since it is a theory based on the continuous-field hypothesis. The lattice model can simulate fracture very well but is inadequate for plastic deformation. Based on the discretized virtual internal bond model (DVIB), this paper proposes a lattice model that can account for plasticity. DVIB is a lattice method that considers the material to comprise bond cells. Each bond cell may have any geometry with a finite number of bonds. The strain energy of a bond cell can be characterized by a two-body or multi-body potential. A two-body potential leads to a fixed Poisson ratio, while a multi-body potential overcomes this limitation. In the present paper, the modified Stillinger-Weber (SW) potential, a multi-body potential, is employed to characterize the bond cell energy. The SW potential is composed of two parts: a two-body part that describes the interatomic interactions between particles, and a three-body part that represents the bond angle interactions between particles. Because the SW interaction represents both bond stretch and bond angle contributions, the SW potential-based DVIB (SW-DVIB) can represent various Poisson ratios. To embed plasticity in the SW-DVIB, plasticity is considered in the two-body part of the SW potential. This is done by reducing the bond stiffness to a lower level once the bond reaches the yielding point. Before the bond reaches the yielding point, the bond is elastic; when the bond deformation exceeds the yielding point, the bond stiffness is softened to a lower value, and upon unloading, irreversible deformation remains. 
When the bond length increases to a critical value, termed the failure bond length, the bond fails. The critical failure bond length is related to the cell size and the macroscopic fracture energy. By this means, the fracture energy is conserved, so that the cell-size sensitivity problem is relieved to a great extent. In addition, plasticity and fracture are unified at the bond level. To enable the DVIB to simulate different Poisson ratios, the three-body part of the SW potential is kept elasto-brittle: the bond angle can bear a moment as long as the bond angle increment remains smaller than a critical value. By this method, the SW-DVIB can simulate the plastic deformation and fracturing process of materials with various Poisson ratios. The elastoplastic SW-DVIB is used to simulate the plastic deformation of a material, the plastic fracturing process, and tunnel plastic deformation. It is shown that the current SW-DVIB method is straightforward in simulating both elastoplastic deformation and plastic fracture.
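The bilinear elastoplastic bond law described above (elastic up to yield, softened stiffness beyond it, failure at a critical stretch) can be sketched as follows; all parameter values are illustrative, not calibrated to the paper's cells.

```python
# Bilinear elastoplastic bond: stiffness k1 up to the yield stretch,
# softened stiffness k2 beyond it, zero force once the failure stretch
# is reached. Hypothetical parameter values.

def bond_force(stretch, k1=100.0, k2=20.0, s_yield=0.01, s_fail=0.05):
    if stretch >= s_fail:
        return 0.0                                      # bond has failed
    if stretch <= s_yield:
        return k1 * stretch                             # elastic branch
    return k1 * s_yield + k2 * (stretch - s_yield)      # softened branch

print(bond_force(0.005))  # elastic: 0.5
print(bond_force(0.02))   # plastic: 1.0 + 20 * 0.01 = 1.2
print(bond_force(0.06))   # failed: 0.0

# Unloading elastically (slope k1) from the plastic branch leaves a
# residual, irreversible stretch:
residual = 0.02 - bond_force(0.02) / 100.0
print(residual)           # 0.008
```

The residual stretch after unloading is the bond-level analogue of the irreversible deformation the abstract describes.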

Keywords: lattice model, discretized virtual internal bond, elastoplastic deformation, fracture, modified stillinger-weber potential

Procedia PDF Downloads 80
651 Enhancing Project Management Performance in Prefabricated Building Construction under Uncertainty: A Comprehensive Approach

Authors: Niyongabo Elyse

Abstract:

Prefabricated building construction is a pioneering approach that combines design, production, and assembly to attain energy efficiency, environmental sustainability, and economic feasibility. Despite continuous development of the industry in China, the low technical maturity of standardized design, factory production, and construction assembly introduces uncertainties affecting prefabricated component production and on-site assembly processes. This research focuses on enhancing project management performance under uncertainty to help enterprises navigate these challenges and optimize project resources. The study introduces a perspective on how uncertain factors influence the implementation of prefabricated building construction projects. It proposes a theoretical model considering project process management ability, adaptability to uncertain environments, and collaboration ability of project participants. The impact of uncertain factors is demonstrated through case studies and quantitative analysis, revealing constraints on implementation time, cost, quality, and safety. To address uncertainties in prefabricated component production scheduling, a fuzzy model is presented, expressing processing times as interval values. The model utilizes a cooperative co-evolutionary algorithm (CCEA) to optimize scheduling, demonstrated through a real case study showcasing reduced project duration and minimized effects of processing time disturbances. Additionally, the research addresses on-site assembly construction scheduling, considering the relationship between task processing times and assigned resources. A multi-objective model with fuzzy activity durations is proposed, employing a hybrid cooperative co-evolutionary algorithm (HCCEA) to optimize project scheduling. Results from real case studies indicate improved project performance in terms of duration, cost, and resilience to processing time delays and resource changes. 
The study also introduces a multi-stage dynamic process control model, utilizing IoT technology for real-time monitoring during component production and construction assembly. This approach dynamically adjusts schedules when constraints arise, leading to enhanced project management performance, as demonstrated in a real prefabricated housing project. Key contributions include a fuzzy prefabricated component production scheduling model, a multi-objective multi-mode resource-constrained construction project scheduling model with fuzzy activity durations, a multi-stage dynamic process control model, and a cooperative co-evolutionary algorithm. The integrated mathematical model addresses the complexity of prefabricated building construction project management, providing a theoretical foundation for practical decision-making in the field.
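The interval-valued (fuzzy) processing times at the heart of the scheduling models above can be illustrated with a minimal sketch. The task data and the simple serial-schedule evaluation are invented for illustration; the paper's CCEA optimizer itself is not reproduced here.

```python
# Each task duration is an [lo, hi] interval in days; a serial schedule's
# makespan is then itself an interval obtained by interval addition,
# giving best-case and worst-case project durations.

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

tasks = [(3, 5), (2, 4), (6, 7)]  # hypothetical prefabrication steps

makespan = (0, 0)
for t in tasks:
    makespan = interval_add(makespan, t)

print(makespan)  # (11, 16)
```

A scheduler that minimizes the upper bound of this interval is, in effect, hedging against the worst-case processing-time disturbances the abstract mentions.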

Keywords: prefabricated construction, project management performance, uncertainty, fuzzy scheduling

Procedia PDF Downloads 36
650 Efficacy and Safety of Eucalyptus for Relief of Cough Symptoms: A Systematic Review and Meta-Analysis

Authors: Ladda Her, Juntip Kanjanasilp, Ratree Sawangjit, Nathorn Chaiyakunapruk

Abstract:

Cough is a common symptom of respiratory tract infections or non-infectious conditions; the duration of cough indicates the classification and severity of disease. Herbal medicines can be used as alternatives to drugs for relief of cough symptoms in acute and chronic disease. Eucalyptus has been used for reducing cough, with evidence suggesting it has an active role in the reduction of airway inflammation. The present study aims to evaluate the efficacy and safety of eucalyptus for relief of cough symptoms in respiratory disease. Method: The Cochrane Library, MEDLINE (PubMed), Scopus, CINAHL, Springer, ScienceDirect, ProQuest, and THAILIS databases were searched from inception until 01/02/2019 for randomized controlled trials assessing the efficacy and safety of eucalyptus for reducing cough. Methodological quality was evaluated using the Cochrane risk of bias tool; two reviewers in our team screened eligibility and extracted data. Result: Six studies were included in the review and five in the meta-analysis, totalling 1,911 persons; one study was in children and five in adults, with participants between 1 and 80 years old. Eucalyptus was used as a single herb (n = 2) or in combination with other herbs (n = 4). All studies compared eucalyptus for efficacy and safety with placebo or standard treatment; dosage forms included capsules, spray, and syrup. A random-effects model was used; heterogeneity was low (I² = 1.2%, χ² = 1.01; P-value = 0.314). Eucalyptus showed a statistically significant reduction in cough symptoms (n = 402, RR: 1.40, 95% CI [1.19, 1.65], P-value < 0.0001) compared with placebo. Adverse events (AEs) were of mild to moderate intensity, mostly gastrointestinal symptoms. The methodological quality of the included trials was overall poor. 
Conclusion: Eucalyptus appears to be beneficial and safe for relieving cough frequency in respiratory diseases. However, the evidence is inconclusive due to the limited quality of the trials, and well-designed trials evaluating the effectiveness of eucalyptus for reducing cough symptoms in humans are needed. Eucalyptus appeared safe both as monotherapy and in combination with other herbs.
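The inverse-variance pooling behind a reported risk ratio such as RR = 1.40, 95% CI [1.19, 1.65] can be sketched as follows; the per-trial event counts below are invented for illustration and are not the review's data.

```python
# Fixed-effect inverse-variance pooling of risk ratios: each trial
# contributes a log-RR weighted by 1/variance of the log-RR.
import math

def log_rr(events_t, n_t, events_c, n_c):
    rr = (events_t / n_t) / (events_c / n_c)
    var = 1/events_t - 1/n_t + 1/events_c - 1/n_c
    return math.log(rr), var

studies = [(45, 100, 30, 100), (60, 110, 48, 105)]  # hypothetical trials

num = den = 0.0
for e_t, n_t, e_c, n_c in studies:
    lrr, var = log_rr(e_t, n_t, e_c, n_c)
    w = 1.0 / var            # inverse-variance weight
    num += w * lrr
    den += w

pooled = math.exp(num / den)
se = math.sqrt(1.0 / den)
ci = (math.exp(num / den - 1.96 * se), math.exp(num / den + 1.96 * se))
print(round(pooled, 2), tuple(round(x, 2) for x in ci))
```

A random-effects model, as used in the review, additionally inflates each variance by a between-study component before weighting.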

Keywords: cough, eucalyptus, cineole, herbal medicine, systematic review, meta-analysis

Procedia PDF Downloads 140
649 Investigating the Neural Heterogeneity of Developmental Dyscalculia

Authors: Fengjuan Wang, Azilawati Jamaludin

Abstract:

Developmental Dyscalculia (DD) is defined as a specific learning difficulty with continuing challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder involving not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obfuscate critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD, scoring at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III), with both subtest scores at 90 or below. The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network in the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions across areas) to reveal the architectural organization of the network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the average data of the 45 TD children. 
Results showed that three of the nine DD children displayed significant deviations from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. Taken together, the present study was able to distill differences in architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current "brain norm" established from the 45 children is tentative, the results provide insights not only for future work on a "developmental brain norm" with reliable brain indicators but also for the viability of single-case methodology, which could be used to detect differential brain indicators of DD children for early detection and intervention.
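The nodal-efficiency indicator that all three deviating cases shared can be sketched as follows. The toy four-node network and the pure-Python BFS implementation are illustrative assumptions, not the study's fNIRS pipeline.

```python
# Nodal efficiency of node i: the mean of 1/d(i, j) over all other nodes j,
# with shortest-path distances d obtained by BFS on an unweighted graph.
from collections import deque

def nodal_efficiency(adj, i):
    dist = {i: 0}
    q = deque([i])
    while q:                         # BFS shortest paths from i
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    others = [n for n in adj if n != i]
    # unreachable nodes contribute 0 (their 1/d is taken as zero)
    return sum(1.0 / dist[j] for j in others if j in dist) / len(others)

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}  # toy brain network
print(round(nodal_efficiency(adj, 0), 3))  # (1 + 1 + 1/2) / 3 = 0.833
```

A DD case with systematically lower values of this measure across nodes would show exactly the "inefficient nodal network properties" reported above.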

Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity

Procedia PDF Downloads 39
648 The Preventive Effect of Metformin on Paclitaxel-Induced Peripheral Neuropathy

Authors: AliAkbar Hafezi, Jamshid Abedi, Jalal Taherian, Behnam Kadkhodaei, Mahsa Elahi

Abstract:

Background: Peripheral neuropathy is a common side effect of neurotoxic chemotherapy agents and a major dose-limiting factor of many commonly used chemotherapy drugs. Currently, there are no Food and Drug Administration (FDA)-approved medications for the prevention or treatment of chemotherapy-induced peripheral neuropathy. Therefore, this study investigated the efficacy and safety of metformin for paclitaxel-induced peripheral neuropathy (PIPN). Methods: In this randomized clinical trial, cancer patients who were candidates for chemotherapy with paclitaxel and were referred to radiation oncology departments in Iran from 2022 to 2023 were studied. Patients were randomly divided into two groups: the case group (n = 30) received metformin 500 mg orally twice a day after meals during chemotherapy with paclitaxel, and the control group (n = 30) received chemotherapy without metformin or any additional medication. Patients were assessed for numbness or other neurological symptoms two weeks before chemotherapy, 1-2 days before and weekly during chemotherapy, and at the end of the study. They also underwent a nerve conduction study (NCS) before the intervention and one week after the end of chemotherapy. The primary outcome was efficacy in reducing PIPN, and the secondary outcome was adverse effects. The outcomes were then compared between the two groups. Results: A total of 60 female cancer patients receiving chemotherapy with paclitaxel were evaluated in two groups. The groups were matched in terms of age, body mass index, fasting blood sugar, smoking, pathologic stage, and creatinine levels. 
The results showed that 18 patients (60.0%) in the case group and 23 patients (76.6%) in the control group had clinical PIPN (P = 0.267), and NCS showed that 11 patients (36.6%) in the case group and 15 patients (50.0%) in the control group suffered from PIPN, with no significant difference between the two groups (P = 0.435). Diarrhea (n = 3; 10.0%) and nausea (n = 3; 10.0%) were the most common side effects of metformin in the case group, and no serious side effects (lactic acidosis or anemia) were found in these patients. Conclusion: This study indicated that metformin did not significantly prevent PIPN in cancer patients receiving chemotherapy, although the frequency of peripheral neuropathy in the case group was lower than in the control group. Metformin had acceptable safety in these patients, and no serious side effects were reported.
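The comparison of PIPN rates on NCS (11/30 vs. 15/30) can be illustrated with a two-proportion z-test. The paper does not state which test produced P = 0.435, so this is purely an illustration of comparing the two rates, not a reproduction of the study's analysis.

```python
# Two-proportion z-test: z = (p1 - p2) / sqrt(p(1-p)(1/n1 + 1/n2)),
# where p is the pooled proportion under the null hypothesis.
import math

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(11, 30, 15, 30)
print(round(z, 2))  # negative z: lower PIPN rate in the metformin group
```

With |z| well below 1.96, the difference is not significant at the 5% level, consistent with the study's non-significant finding.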

Keywords: peripheral neuropathy, chemotherapy, paclitaxel, metformin

Procedia PDF Downloads 24
647 Thermal Stress and Computational Fluid Dynamics Analysis of Coatings for High-Temperature Corrosion

Authors: Ali Kadir, O. Anwar Beg

Abstract:

Thermal barrier coatings are among the most popular methods for providing corrosion protection in high-temperature applications, including aircraft engine systems, external spacecraft structures, rocket chambers, etc. Many different materials are available for such coatings, of which ceramics generally perform best. Motivated by these applications, the current investigation presents detailed finite element simulations of coating stress analysis for a 3-dimensional, 3-layered model of a test sample representing a typical gas turbine component scenario. Structural steel is selected for the main inner layer, titanium (Ti) alloy for the middle layer and silicon carbide (SiC) for the outermost layer. The model dimensions are 20 mm (width), 10 mm (height) and three 1 mm deep layers. ANSYS software is employed to conduct three types of analysis: static structural analysis, thermal stress analysis and computational fluid dynamic erosion/corrosion analysis (via ANSYS FLUENT). The specified geometry, which corresponds exactly to corrosion test samples, is discretized using a body-sizing meshing approach, composed mainly of tetrahedral cells. Refinements were concentrated at the connection points between the layers to shift the focus towards the static effects dissipated between them. A detailed grid independence study was conducted to confirm the accuracy of the selected mesh densities. To recreate gas turbine scenarios, static loading and thermal environment conditions of up to 1000 N and 1000 K were imposed in the stress analysis simulations. The default solver was used to set the controls for the simulation, with one side of the model set as a fixed support while the opposite side was subjected to a tabular force of 500 and 1000 newtons. Equivalent elastic strain, total deformation, equivalent stress and strain energy were computed for all cases. 
Each analysis was duplicated twice, removing one of the layers each time, to allow testing of the static and thermal effects with each of the coatings. An ANSYS FLUENT simulation was conducted to study the effect of corrosion on the model under similar thermal conditions. The momentum and energy equations were solved, and the viscous heating option was applied to better represent the thermal physics of heat transfer between the layers of the structure. A Discrete Phase Model (DPM) in ANSYS FLUENT was employed, which allows for the injection of continuous uniform air particles onto the model, thereby enabling calculation of the corrosion factor caused by hot air injection (particles prescribed a velocity of 5 m/s and a temperature of 1273.15 K). Extensive visualization of results is provided. The simulations reveal interesting features associated with coating response to realistic gas turbine loading conditions, including significantly different stress concentrations with the different coatings.
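As a rough hand-calculation cross-check of the thermal loading regime described above (not of the ANSYS results themselves), the biaxial thermal stress in a fully constrained heated layer can be estimated from sigma = E * alpha * dT / (1 - nu); the material constants below are typical textbook values for structural steel, assumed for illustration.

```python
# First-order thermal stress in a fully constrained layer under a
# uniform temperature rise dT (biaxial constraint).

def biaxial_thermal_stress(E, alpha, dT, nu):
    """E in Pa, alpha in 1/K, dT in K, nu dimensionless; returns Pa."""
    return E * alpha * dT / (1 - nu)

# Structural steel: E = 200 GPa, alpha = 12e-6 /K, nu = 0.3, dT ~ 700 K
sigma = biaxial_thermal_stress(200e9, 12e-6, 700.0, 0.3)
print(f"{sigma / 1e6:.0f} MPa")  # ~2400 MPa
```

An estimate far above the material's yield strength explains why layer compliance and coating selection dominate the stress concentrations seen in the simulations.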

Keywords: thermal coating, corrosion, ANSYS FEA, CFD

Procedia PDF Downloads 125
646 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the many objects surrounding them; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals are too weak to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. 
We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
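The fingerprinting idea underlying the radio-map approach above can be sketched with a minimal k-nearest-neighbour matcher: a measured received-signal-strength (RSS) vector is compared against a survey database and the positions of the closest fingerprints are averaged. The radio map, RSS values, and positions below are invented for illustration; the paper's GAN-based map construction is not reproduced here.

```python
# Baseline kNN fingerprinting localization over a tiny hypothetical
# radio map of 3 access points and 4 surveyed positions.
import math

radio_map = [  # (RSS vector in dBm, (x, y) position in metres)
    ((-40, -70, -60), (0.0, 0.0)),
    ((-55, -50, -65), (5.0, 0.0)),
    ((-70, -45, -50), (5.0, 5.0)),
    ((-60, -65, -40), (0.0, 5.0)),
]

def knn_locate(rss, k=2):
    # rank fingerprints by Euclidean distance in signal space
    ranked = sorted(radio_map, key=lambda fp: math.dist(rss, fp[0]))
    xs = [p for _, (p, _) in ranked[:k]]
    ys = [q for _, (_, q) in ranked[:k]]
    return (sum(xs) / k, sum(ys) / k)

print(knn_locate((-50, -55, -62)))  # (2.5, 0.0)
```

Generative radio-map construction attacks the main cost of this scheme: densely surveying the fingerprint database, which the abstract reports reducing by up to 78.5%.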

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 26
645 Unfolding Architectural Assemblages: Mapping Contemporary Spatial Objects' Affective Capacity

Authors: Panagiotis Roupas, Yota Passia

Abstract:

This paper aims at establishing an index of design mechanisms, immanent in spatial objects, based on the affective capacity of their material formations. While spatial objects (design objects, buildings, urban configurations, etc.) are regarded as systems composed of interacting parts within the premises of assemblage theory, their ability to affect and be affected has not yet been mapped or sufficiently explored. This ability lies in excess, a latent potentiality they contain, not transcendental but immanent in their pre-subjective aesthetic power. As spatial structures are theorized as assemblages, composed of heterogeneous elements that enter into relations with one another, and since all assemblages are parts of larger assemblages, their components' ability to engage is contingent. We thus seek to unfold the mechanisms inherent in spatial objects that allow the constituent parts of design assemblages to perpetually enter into new assemblages. To map an architectural assemblage's affective ability, spatial objects are analyzed along two axes. The first axis focuses on the relations that the assemblage's material and expressive components develop in order to enter assemblages. Material components refer to those material elements that an assemblage requires in order to exist, while expressive components include non-linguistic elements (sense impressions) as well as linguistic ones (beliefs). The second axis records the processes known as a-signifying signs, or a-signs, which are the triggering mechanisms able to territorialize or deterritorialize, stabilize or destabilize the assemblage and thus allow it to assemble anew. As a-signs cannot be isolated from matter, we point to their resulting effects, which, without entering the linguistic level, are expressed in terms of intensity fields: modulations, movements, speeds, rhythms, spasms, etc. They belong to a molecular level where they operate in the pre-subjective world of perceptions, affects, drives, and emotions. 
A-signs have been introduced as intensities that transform the object beyond meaning, beyond fixed or known cognitive procedures. To that end, from an archive of more than 100 spatial objects by contemporary architects and designers, an affective mechanisms index has been created, in which each a-sign is connected with the list of effects it triggers, which thoroughly defines it. And vice versa: the same effect can be triggered by different a-signs, allowing the design object to lie in a perpetual state of becoming. To define spatial objects, a-signs are categorized in terms of their aesthetic power to affect and to be affected on the basis of the general categories of form, structure and surface. Thus, each part's degree of contingency is evaluated and measured. Finally, a-signs are introduced as material information that is immanent in the spatial object while conferring no meaning; they only convey information without semantic content. Through this index, we are able to analyze and direct the final form of the spatial object while at the same time establishing the mechanism to measure its continuous transformation.

Keywords: affective mechanisms index, architectural assemblages, a-signifying signs, cartography, virtual

Procedia PDF Downloads 108
644 Assessing the Competitiveness of Green Charcoal Energy as an Alternative Source of Cooking Fuel in Uganda

Authors: Judith Awacorach, Quentin Gausset

Abstract:

Wood charcoal and firewood are the primary sources of energy for cooking in most Sub-Saharan African countries, including Uganda. This leads to unsustainable forest use and rapid deforestation. Green charcoal (made from agricultural residues that are carbonized, reduced to char powder, and bound into briquettes using a binder such as sugar molasses, cassava flour or clay) is a promising and sustainable alternative to wood charcoal and firewood. It is considered renewable energy because the carbon emissions released by the combustion of green charcoal are captured again in the next agricultural cycle. Practiced on a large scale, this has the potential to replace wood charcoal and stop deforestation. However, the uptake of green charcoal for cooking remains low in Uganda despite the introduction of the technology 15 years ago. The present paper reviews the barriers to the production and commercialization of green charcoal. The paper is based on a study of 13 production sites, recording the raw materials used, the production techniques, the quantity produced, the frequency of production, and the business model. Observations were made at each site, and interviews were conducted with the managers of the facilities and with one or two employees in the larger facilities. We also interviewed project administrators from four funding agencies interested in financing green charcoal production. Our research identifies the main barriers as follows: 1) The price of green charcoal is not competitive (it is more labor- and capital-intensive than wood charcoal). 2) There is a problem with quality control and labeling (one finds a wide variety of green charcoal with very different performances). 3) The carbonization of agricultural crop residues is a major bottleneck in green charcoal production. Most briquettes are produced with wood charcoal dust or powder, which is a by-product of wood charcoal. 
As such, they increase the efficiency of wood charcoal use but do not yet replace it. 4) There is almost no marketing chain for the product (most green charcoal is sold directly from producer to consumer without any middleman). 5) Financing institutions are reluctant to lend money for this kind of activity. 6) Storage can be challenging since briquettes can disintegrate due to moisture. In conclusion, a number of important barriers need to be overcome before green charcoal can become a serious alternative to wood charcoal.

Keywords: briquettes, competitiveness, deforestation, green charcoal, renewable energy

Procedia PDF Downloads 30
643 The Diurnal and Seasonal Relationships of Pedestrian Injuries Secondary to Motor Vehicles in Young People

Authors: Amina Akhtar, Rory O'Connor

Abstract:

Introduction: There remains significant morbidity and mortality among young pedestrians hit by motor vehicles, even in the era of pedestrian crossings and speed limits. The aim of this study was to compare the incidence and injury severity of motor vehicle-related pedestrian trauma according to time of day and season in a young population, based on the supposition that injuries would be more prevalent during dusk and dawn and during autumn and winter. Methods: Data were retrieved for patients between 10 and 25 years old from the National Trauma Audit and Research Network (TARN) database who had been involved as pedestrians in motor vehicle accidents between 2015 and 2020. The incidence of injuries, their severity (using the Injury Severity Score [ISS]), hospital transfer time, and mortality were analysed according to the hours of daylight, darkness, and season. Results: The study identified a seasonal pattern: autumn was the predominant season, accounting for 34.9% of injuries, with a further 25.4% in winter, compared to spring and summer with 21.4% and 18.3% of injuries, respectively. However, visibility alone was not a sufficient factor, as 49.5% of injuries occurred during hours of darkness, while 50.5% occurred during daylight. Importantly, the greatest injury rate (number of injuries/hour) occurred between 1500 and 1630, corresponding to school pick-up times. A further significant relationship between injury severity score (ISS) and daylight was demonstrated (p = 0.0124), with moderate injuries (ISS 9-14) occurring most commonly during the day (72.7%) and more severe injuries (ISS > 15) occurring most commonly during the night (55.8%). Conclusion: We have identified a relationship between time of day and the frequency and severity of pedestrian trauma in young people. In addition, particular time groupings correspond to the greatest injury rate, suggesting that reduced visibility coupled with school pick-up times may play a significant role.
This could be addressed through a targeted public health approach to implementing change. We recommend targeted public health measures to improve road safety that focus on these times and that increase the visibility of children combined with education for drivers.
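The injury-rate-per-hour analysis described above can be sketched in a few lines: bin incident timestamps by hour of day and find the peak bucket. The timestamps below are invented for illustration; they are not TARN data.

```python
from collections import Counter
from datetime import datetime

# Hypothetical incident timestamps (illustrative only, not TARN data)
incidents = [
    "2019-10-03 15:10", "2019-10-03 15:45", "2019-11-12 16:20",
    "2019-01-20 08:05", "2019-06-14 12:30", "2019-10-30 15:55",
]

# Injury rate per hour of day: count incidents falling in each hour bucket
hour_counts = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in incidents)
peak_hour, peak_count = hour_counts.most_common(1)[0]
print(peak_hour, peak_count)  # 15 3 -> peak bucket matches school pick-up time
```

With real data, the same binning would feed the injuries-per-hour comparison across daylight and darkness periods.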

Keywords: major trauma, paediatric trauma, road traffic accidents, diurnal pattern

Procedia PDF Downloads 82
642 An Introspective Look into Hotel Employees' Career Satisfaction

Authors: Anastasios Zopiatis, Antonis L. Theocharous

Abstract:

In the midst of a fierce war for talent, the hospitality industry is seeking new and innovative ways to enrich its image as an employer of choice rather than of necessity. Historically, the industry's professions have been portrayed as 'unattractive' due to their repetitious nature, long and unsocial working schedules, below-average remuneration, and the mental and physical demands of the job. Aligning with the industry, hospitality and tourism scholars embarked on a journey to investigate pertinent topics with the aim of enhancing our conceptual understanding of the elements that influence employees in the hospitality world of work. Topics such as job involvement, commitment, job and career satisfaction, and turnover intentions became focal points in a multitude of relevant empirical and conceptual investigations. Nevertheless, gaps and inconsistencies in existing theories, resulting both from the volatile complexity of the relationships governing human behavior in the hospitality workplace and from the academic community's unopposed acceptance of theoretical frameworks propounded in the United States and United Kingdom years ago, necessitate our continuous vigilance. Thus, in an effort to enhance and enrich the discourse, we set out to investigate the relationship between intrinsic and extrinsic job satisfaction traits and the individual's career satisfaction and subsequent intention to remain in the hospitality industry. Reflecting on the existing literature, a quantitative survey was developed and administered, face-to-face, to 650 individuals working as full-time employees in 4- and 5-star hotel establishments in Cyprus, and a multivariate statistical analysis method, namely structural equation modeling (SEM), was utilized to determine whether relationships existed between constructs, as a means to either accept or reject the hypothesized theory.
Findings, of interest to both industry stakeholders and academic scholars, suggest that an individual's future intention to remain within the industry is primarily associated with extrinsic job traits. Our findings revealed that positive associations exist between extrinsic job traits and both career satisfaction and future intention. In contrast, when investigating the relationship of intrinsic traits, a positive association was revealed only with career satisfaction. Apparently, the local industry's environmental factors of seasonality, excessive turnover, and overdependence on seasonal and part-time migrant workers prevent industry stakeholders from effectively investing the time and resources in the development and professional growth of their employees. Consequently, intrinsic job satisfaction factors such as advancement, growth, and achievement take a back seat to the more materialistic extrinsic factors. Findings from the subsequent mediation analysis support the notion that intrinsic traits can positively influence future intentions indirectly, only through career satisfaction, whereas extrinsic traits can positively impact both career satisfaction and future intention, both directly and indirectly.
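The mediation result described above (intrinsic traits influencing future intention only through career satisfaction) is typically tested by bootstrapping the indirect effect. The sketch below illustrates that generic procedure on synthetic data with assumed path coefficients; it is not the authors' SEM model or their Cyprus dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Synthetic data mimicking a mediation structure (illustrative only):
# job trait -> career satisfaction -> future intention
trait = rng.normal(size=n)
satisfaction = 0.5 * trait + rng.normal(scale=0.8, size=n)
intention = 0.4 * satisfaction + 0.3 * trait + rng.normal(scale=0.8, size=n)

def indirect_effect(x, m, y):
    """a*b estimate: path a (x -> m) times path b (m -> y, controlling for x)."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(trait[idx], satisfaction[idx], intention[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))  # a CI excluding zero suggests mediation
```

Dedicated SEM software additionally fits all paths simultaneously and reports model-fit indices; this sketch only shows the bootstrap logic behind an indirect-effect test.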

Keywords: career satisfaction, Cyprus, hotel employees, structural equation modeling, SEM

Procedia PDF Downloads 267
641 Numerical Analyses of Dynamics of Deployment of PW-Sat2 Deorbit Sail Compared with Results of Experiment under Micro-Gravity and Low Pressure Conditions

Authors: P. Brunne, K. Ciechowska, K. Gajc, K. Gawin, M. Gawin, M. Kania, J. Kindracki, Z. Kusznierewicz, D. Pączkowska, F. Perczyński, K. Pilarski, D. Rafało, E. Ryszawa, M. Sobiecki, I. Uwarowa

Abstract:

The large amount of space debris nowadays constitutes a real threat to operating spacecraft; therefore, the main purpose of the PW-Sat2 team was to create a system that could help cleanse the Earth's orbit after each small satellite's mission. After 4 years of development, a motorless, low-energy-consumption, low-weight system has been created. During a series of tests, the system has shown highly reliable performance. PW-Sat2's deorbit system is a square-shaped sail covering an area of 4 m². The sail surface is made of 6 μm aluminized Mylar film which is stretched across 4 diagonally placed arms, each consisting of two C-shaped flat springs and enveloped in Mylar sleeves. The sail is coiled using a special, custom-designed folding stand that provides automation and repeatability of the sail unwinding tests, and is placed in a container with an inner diameter of 85 mm. In the final configuration, the deorbit system weighs ca. 600 g and occupies 0.6U (in accordance with the CubeSat standard). The sail's release system requires a minimal amount of power: it is based on a thermal knife that burns through the Dyneema wire holding the system before deployment. The sail is pushed out of the container to a safe distance (20 cm) from the satellite. The energy for the deployment is provided entirely by the coiled C-shaped flat springs, which unfold the sail surface during release. To avoid dynamic effects on the satellite's structure, there is a rotational link between the sail and the satellite's main body. To obtain complete knowledge of the complex dynamics of the deployment, a number of experiments have been performed in varied environments. A numerical model of the dynamics of the sail's deployment has been built and is still under continuous development. Currently, the integration of the flight model and the deorbit sail is being performed. The launch is scheduled for February 2018.
At the same time, in cooperation with the United Nations Office for Outer Space Affairs, sail models and the required facilities are being prepared for a sail deployment experiment under micro-gravity and low-pressure conditions at the Bremen Drop Tower, Germany. The results of those tests will provide comprehensive knowledge about deployment in the space environment to which the system will be exposed during its mission. The outcomes of the numerical model and the tests will be compared afterwards and will help the team build a reliable and correct model of the very complex phenomenon of the deployment of four C-shaped flat springs with an attached surface. The verified model could be used, inter alia, to investigate whether the PW-Sat2 sail is scalable and how far it can be enlarged when creating systems for bigger satellites.

Keywords: cubesat, deorbitation, sail, space, debris

Procedia PDF Downloads 275
640 Long-Term Resilience Performance Assessment of Dual and Singular Water Distribution Infrastructures Using a Complex Systems Approach

Authors: Kambiz Rasoulkhani, Jeanne Cole, Sybil Sharvelle, Ali Mostafavi

Abstract:

Dual water distribution systems have been proposed as solutions to enhance the sustainability and resilience of urban water systems by improving performance and decreasing energy consumption. The objective of this study was to evaluate the long-term resilience and robustness of dual water distribution systems versus singular water distribution systems under various stressors such as demand fluctuation, aging infrastructure, and funding constraints. To this end, the long-term dynamics of these infrastructure systems were captured using a simulation model that integrates institutional agency decision-making processes with physical infrastructure degradation to evaluate the long-term transformation of water infrastructure. A set of model parameters that varies between dual and singular distribution infrastructure based on system attributes, such as pipe length and material, energy intensity, water demand, water price, average pressure and flow rate, as well as operational expenditures, was considered and input into the simulation model. Accordingly, the model was used to simulate various scenarios of demand changes, funding levels, water price growth, and renewal strategies. The long-term resilience and robustness of each distribution infrastructure were evaluated based on various performance measures, including network average condition, break frequency, network leakage, and energy use. An ecologically-based resilience approach was used to examine regime shifts and tipping points in the long-term performance of the systems under different stressors. Also, Classification and Regression Tree analysis was adopted to assess the robustness of each system under various scenarios. Using data from the City of Fort Collins, the long-term resilience and robustness of the dual and singular water distribution systems were evaluated over a 100-year analysis horizon for various scenarios.
The results of the analysis enabled: (i) comparison between dual and singular water distribution systems in terms of long-term performance, resilience, and robustness; (ii) identification of renewal strategies and decision factors that enhance the long-term resiliency and robustness of dual and singular water distribution systems under different stressors.
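As a rough illustration of the kind of long-term simulation described, the toy Monte Carlo below ages a pipe network under a fixed annual renewal budget and tracks break frequency and average network condition over a 100-year horizon. All parameters (pipe count, degradation rates, budget, break threshold) are invented, not the study's calibrated values.

```python
import random

random.seed(42)

YEARS = 100
N_PIPES = 500
RENEWAL_BUDGET = 10   # pipes replaced per year (a funding constraint)
BREAK_THRESHOLD = 20  # condition score below which breaks become likely

# Each pipe starts at condition 100 and degrades stochastically each year
condition = [100.0] * N_PIPES
breaks_per_year = []

for year in range(YEARS):
    breaks = 0
    for i in range(N_PIPES):
        condition[i] -= random.uniform(1.0, 3.0)  # stochastic aging
        if condition[i] < BREAK_THRESHOLD and random.random() < 0.1:
            breaks += 1
    # Renewal strategy: replace the worst pipes within the budget
    for i in sorted(range(N_PIPES), key=lambda i: condition[i])[:RENEWAL_BUDGET]:
        condition[i] = 100.0
    breaks_per_year.append(breaks)

avg_condition = sum(condition) / N_PIPES
print(avg_condition, sum(breaks_per_year))  # end-state condition, total breaks
```

Varying the budget or degradation rate in such a model is one simple way to probe for the regime shifts and tipping points the abstract refers to.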

Keywords: complex systems, dual water distribution systems, long-term resilience performance, multi-agent modeling, sustainable and resilient water systems

Procedia PDF Downloads 273
639 The Role of Emotional Intelligence in the Manager's Psychophysiological Activity during a Performance-Review Discussion

Authors: Mikko Salminen, Niklas Ravaja

Abstract:

Emotional intelligence (EI) consists of skills for monitoring one's own emotions and the emotions of others, skills for discriminating between different emotions, and skills for using this information in thinking and action. EI enhances, for example, work outcomes and organizational climate. We suggest that the role and manifestations of EI should also be studied in real leadership situations, especially during emotional social interaction. Leadership is essentially a process of influencing others to reach a certain goal. This influencing happens through managerial processes and computer-mediated communication (e.g., e-mail), but also face-to-face, where facial expressions play a significant role in conveying emotional information. Persons with high EI are typically perceived more positively, and they have better social skills. We hypothesize that, during social interaction, high EI enhances the ability to detect others' emotional states and to control one's own emotional expressions. We suggest that emotionally intelligent leaders experience less stress during social leadership situations, since they have better skills for dealing with the related emotional work. Thus, high-EI leaders would be more able to enjoy these situations, but also more efficient in choosing appropriate expressions for building a constructive dialogue. We suggest that emotionally intelligent leaders show more positive emotional expressions than low-EI leaders. To study these hypotheses, we observed the performance review discussions of 40 leaders (24 female) with 78 (45 female) of their followers. Each leader held a discussion with two followers. Psychophysiological methods were chosen because they provide objective and continuous data over the whole duration of the discussions. We recorded sweating of the hands (electrodermal activity) via electrodes placed on the fingers of the non-dominant hand to assess the stress-related physiological arousal of the leaders.
In addition, facial electromyography was recorded from the cheek (zygomaticus major, activated during e.g. smiling) and periocular (orbicularis oculi, activated during smiling) muscles using electrode pairs placed on the left side of the face. Leaders' trait EI was measured with a 360° questionnaire, filled in by each leader's followers, peers, and managers, and by the leaders themselves. High-EI leaders had less sweating of the hands (p = .007) than low-EI leaders. It is thus suggested that the high-EI leaders experienced less physiological stress during the discussions. Also, high scores on the factor "Using of emotions" were related to more facial muscle activation indicating positive emotional expressions (cheek muscle: p = .048; periocular muscle: p = .076, almost statistically significant). The results imply that emotionally intelligent managers are positively relaxed during social leadership situations such as a performance review discussion. The current study also highlights the importance of EI in face-to-face social interaction, given the central role facial expressions have in interaction situations. The study also offers new insight into the biological basis of trait EI. It is suggested that the identification, formation, and intelligent use of facial expressions are skills that could be trained during leadership development courses.
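The group difference reported above (less electrodermal activity in high-EI leaders) is, at its core, a two-sample comparison. The sketch below illustrates such a test on synthetic skin-conductance data; the group sizes, means, and spreads are assumptions, not the study's recordings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic mean skin-conductance levels (microsiemens) per leader,
# illustrative only: high-EI leaders assumed to show lower arousal
high_ei = rng.normal(loc=4.0, scale=1.0, size=20)
low_ei = rng.normal(loc=5.2, scale=1.0, size=20)

t, p = stats.ttest_ind(high_ei, low_ei)
print(f"t = {t:.2f}, p = {p:.4f}")  # negative t: lower arousal in high-EI group
```

In practice, repeated-measures designs like this one are often analyzed with mixed models rather than a plain t-test; the sketch only conveys the direction of the comparison.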

Keywords: emotional intelligence, leadership, performance review discussion, psychophysiology, social interaction

Procedia PDF Downloads 236
638 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence

Authors: Sogand Barghi

Abstract:

The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. 
This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.
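A minimal illustration of the fraud-detection idea described above is to flag transactions whose amounts deviate strongly from the norm. Production AI systems use far richer models (and features beyond amount); the z-score rule and the amounts below are purely illustrative.

```python
from statistics import mean, stdev

# Hypothetical transaction amounts (illustrative only)
amounts = [120.0, 135.5, 128.0, 119.9, 131.2, 124.7, 980.0, 127.3, 122.8]

mu, sigma = mean(amounts), stdev(amounts)

# Flag transactions more than 2 standard deviations from the mean,
# a simple stand-in for the continuous anomaly monitoring described
flagged = [a for a in amounts if abs(a - mu) / sigma > 2]
print(flagged)  # [980.0]
```

The same screening logic generalizes to streaming data: maintain running estimates of the mean and spread and flag each incoming transaction as it arrives.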

Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting

Procedia PDF Downloads 52
637 Extracting Opinions from Big Data of Indonesian Customer Reviews Using Hadoop MapReduce

Authors: Veronica S. Moertini, Vinsensius Kevin, Gede Karya

Abstract:

Customer reviews have been collected by many kinds of e-commerce websites selling products, services, hotel rooms, tickets and so on. Each website collects its own customer reviews. The reviews can be crawled, collected from those websites and stored as big data. Text analysis techniques can be used to analyze that data to produce summarized information, such as customer opinions. These opinions can then be published by independent service provider websites and used to help customers choose the most suitable products or services. As the opinions are analyzed from big data of reviews originating from many websites, the results are expected to be more trusted and accurate. Indonesian customers write reviews in the Indonesian language, which comes with its own structures and uniqueness. We found that most of the reviews are expressed in "daily language", which is informal, does not follow correct grammar, and contains many abbreviations, slang, and non-formal words. Hadoop is an emerging platform aimed at storing and analyzing big data in distributed systems. A Hadoop cluster consists of master and slave nodes/computers operated in a network. Hadoop comes with a distributed file system (HDFS) and the MapReduce framework for supporting parallel computation. However, MapReduce has a weakness (namely, inefficiency) in iterative computations; specifically, the cost of reading/writing data (I/O cost) is high. Given this fact, we conclude that MapReduce is best suited to "one-pass" computation. In this research, we develop an efficient technique for extracting or mining opinions from big data of Indonesian reviews, based on MapReduce with one-pass computation. In designing the algorithm, we avoid iterative computation and instead adopt a "look-up table" technique.
The stages of the proposed technique are: (1) crawling the review data from websites; (2) cleaning the raw reviews and finding root words; (3) computing the frequency of meaningful opinion words; (4) analyzing customers' sentiments towards defined objects. The experiments for evaluating the performance of the technique were conducted on a Hadoop cluster with 14 slave nodes. The results show that the proposed technique (stages 2 to 4) discovers useful opinions, is capable of processing big data efficiently, and is scalable.
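The one-pass map/shuffle/reduce flow behind stage 3 (counting meaningful opinion words against a look-up table) can be sketched in plain Python. The mini-corpus and the opinion-word table below are invented examples, not the paper's crawled data.

```python
from collections import defaultdict
from itertools import chain

# Tiny illustrative corpus of informal Indonesian reviews (invented)
reviews = [
    "barang bagus pengiriman cepat",
    "pelayanan buruk barang bagus",
    "cepat dan bagus",
]

# "Look-up table" of meaningful opinion words, as the one-pass design suggests
opinion_words = {"bagus", "buruk", "cepat"}

# Map phase: emit (word, 1) pairs for opinion words only
def mapper(line):
    return [(w, 1) for w in line.split() if w in opinion_words]

# Shuffle + reduce phase: sum the counts per word
grouped = defaultdict(int)
for word, count in chain.from_iterable(mapper(r) for r in reviews):
    grouped[word] += count

print(dict(grouped))  # {'bagus': 3, 'cepat': 2, 'buruk': 1}
```

On Hadoop, the mapper and the summation would be the Map and Reduce tasks respectively; the look-up table is broadcast to mappers so the whole job finishes in a single pass over the data.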

Keywords: big data analysis, Hadoop MapReduce, analyzing text data, mining Indonesian reviews

Procedia PDF Downloads 188
636 Learners' Perception of Digitalization of Medical Education in a Low Middle-Income Country – A Case Study of the Lecturio Platform

Authors: Naomi Nathan

Abstract:

Introduction Digitalization of medical education can revolutionize how medical students learn and interact with the medical curriculum across contexts. With the increasing availability of the internet and mobile connectivity in LMICs, online medical education platforms and digital learning tools are becoming more widely available, providing new opportunities for learners to access high-quality medical education and training. However, the adoption and integration of digital technologies in medical education in LMICs is a complex process influenced by various factors, including learners' perceptions and attitudes toward digital learning. In Ethiopia, the adoption of digital platforms for medical education has been slow, with traditional face-to-face teaching methods still being the norm. However, as access to technology improves and more universities adopt digital platforms, it is crucial to understand how medical students perceive this shift. Methodology This study investigated medical students' perception of the digitalization of medical education in relation to their access to the Lecturio Digital Medical Education Platform through a capacity-building project. 740 medical students from over 20 medical universities participated in the study. The students were surveyed using a questionnaire that included their attitudes toward the digitalization of medical education, their frequency of use of the digital platform, and their perceived benefits and challenges. Results The study results showed that most medical students had a positive attitude toward digitalizing medical education. The most commonly cited benefit was the convenience and flexibility of accessing course material/curriculum online. Many students also reported that they found the platform more interactive and engaging, leading to a more meaningful learning experience. The study also identified several challenges medical students faced when using the platform. 
The most commonly reported challenge was the need for more reliable internet access, which made it difficult for students to access content consistently. Overall, the results of this study suggest that medical students in Ethiopia have a positive perception of the digitalization of medical education. Over 97% of students continuously expressed a need for access to the Lecturio platform throughout their studies. Conclusion Significant challenges still need to be addressed to fully realize the Lecturio digital platform's benefits. Universities, relevant ministries, and various stakeholders must work together to address these challenges to ensure that medical students fully participate in and benefit from digitalized medical education - sustainably and effectively.

Keywords: digital medical education, EdTech, LMICs, e-learning

Procedia PDF Downloads 77
635 Advances in Health Risk Assessment of Mycotoxins in Africa

Authors: Wilfred A. Abiaa, Chibundu N. Ezekiel, Benedikt Warth, Michael Sulyok, Paul C. Turner, Rudolf Krska, Paul F. Moundipa

Abstract:

Mycotoxins are a wide range of toxic secondary metabolites of fungi that contaminate various food commodities worldwide, especially in sub-Saharan Africa (SSA). Such contamination seriously compromises food safety and quality, posing a serious problem for human health as well as for trade and the economy. Their concentrations depend on various factors, such as the commodity itself, climatic conditions, storage conditions, seasonal variations, and processing methods. When humans consume foods contaminated by mycotoxins, the toxins exert toxic effects on their health through various modes of action. Rural populations in sub-Saharan Africa are exposed to dietary mycotoxins, but exposure levels and the associated health risks are thought to vary between SSA countries. Dietary exposure and health risk assessment studies have been limited by a lack of equipment for properly assessing the associated health implications for consumer populations when they eat contaminated agricultural products. As such, mycotoxin research is still immature in several SSA nations, and product evaluation for mycotoxin loads below/above legislative limits is inadequate. Few nations have health risk assessment reports, and these are mainly based on direct quantification of the toxins in foods ('external exposure') and on linking food levels with data from food frequency questionnaires. Nonetheless, assessing exposure and health risks from mycotoxins requires more than the traditional approaches. Only a fraction of the mycotoxins in contaminated foods reaches the bloodstream and exerts toxicity ('internal exposure'). Also, internal exposure is usually smaller than external exposure; thus, dependence on external exposure alone may introduce confounders into risk assessment.
Some earlier studies from SSA focused on biomarker analysis, mainly of aflatoxins, while a few recent studies have concentrated on multi-biomarker analysis of exposures in urine, providing probable associations between observed disease occurrences and dietary mycotoxin levels. As a result, new techniques that can assess exposure levels directly in body tissue or fluid, and possibly link them to the disease state of individuals, have become urgently needed.
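External dietary exposure of the kind described above is commonly summarized as a probable daily intake (PDI): contamination level times daily food consumption divided by body weight. The sketch below applies that standard formula to invented numbers; it is not data from the studies discussed.

```python
def probable_daily_intake(concentration_ug_per_kg, consumption_g_per_day, body_weight_kg):
    """Estimated dietary exposure ('external exposure') in micrograms/kg bw/day.

    PDI = contamination level x daily food consumption / body weight.
    The inputs used below are purely illustrative, not measured data.
    """
    consumption_kg = consumption_g_per_day / 1000.0
    return concentration_ug_per_kg * consumption_kg / body_weight_kg

# e.g. maize contaminated at 5 ug/kg aflatoxin, 400 g eaten daily, 60 kg adult
pdi = probable_daily_intake(5.0, 400.0, 60.0)
print(round(pdi, 4))  # 0.0333 ug/kg bw/day
```

The resulting PDI is then compared against a tolerable daily intake for the toxin in question; biomarker-based ('internal exposure') studies aim to replace the consumption estimate in this formula with direct measurements.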

Keywords: mycotoxins, biomarkers, exposure assessment, health risk assessment, sub-Saharan Africa

Procedia PDF Downloads 557
634 Foot Self-Monitoring Knowledge, Attitude, Practice, and Related Factors among Diabetic Patients: A Descriptive and Correlational Study in a Taiwan Teaching Hospital

Authors: Li-Ching Lin, Yu-Tzu Dai

Abstract:

Recurrent foot ulcers or foot amputation have a major impact on patients with diabetes mellitus (DM), medical professionals, and society. A critical procedure for foot care is foot self-monitoring. Medical professionals’ understanding of patients’ foot self-monitoring knowledge, attitude, and practice is beneficial for raising patients’ disease awareness. This study investigated these and related factors among patients with DM through a descriptive study of the correlations. A scale for measuring the foot self-monitoring knowledge, attitude, and practice of patients with DM was used. Purposive sampling was adopted, and 100 samples were collected from the respondents’ self-reports or from interviews. The statistical methods employed were an independent-sample t-test, one-way analysis of variance, Pearson correlation coefficient, and multivariate regression analysis. The findings were as follows: the respondents scored an average of 12.97 on foot self-monitoring knowledge, and the correct answer rate was 68.26%. The respondents performed relatively lower in foot health screenings and recording, and awareness of neuropathy in the foot. The respondents held a positive attitude toward self-monitoring their feet and a negative attitude toward having others check the soles of their feet. The respondents scored an average of 12.64 on foot self-monitoring practice. Their scores were lower in their frequency of self-monitoring their feet, recording their self-monitoring results, checking their pedal pulse, and examining if their soles were red immediately after taking off their shoes. Significant positive correlations were observed among foot self-monitoring knowledge, attitude, and practice. The correlation coefficient between self-monitoring knowledge and self-monitoring practice was 0.20, and that between self-monitoring attitude and self-monitoring practice was 0.44. 
Stepwise regression analysis revealed that the main predictive factors of foot self-monitoring practice in patients with DM were foot self-monitoring attitude, prior experience in foot care, and an educational attainment of college or higher. These factors predicted 33% of the variance. This study concludes that patients with DM lacked foot self-monitoring practice and advises that patients' self-monitoring abilities be evaluated first, including whether they have poor eyesight or difficulty bending forward due to obesity, and whether people are available to assist them in self-monitoring. In addition, patient education should emphasize self-monitoring knowledge and practice, such as perceptions regarding the symptoms of foot neurovascular lesions, pulse monitoring methods, and new foot self-monitoring equipment. By doing so, new or recurring ulcers may be discovered in their early stages.
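The Pearson correlations reported above (e.g., r = 0.44 between attitude and practice) follow the textbook formula. The sketch below computes r for five invented score pairs, not the study's 100 respondents.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: covariance divided by the product of the spreads."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical attitude and practice scores for five patients (illustrative)
attitude = [30, 34, 28, 40, 36]
practice = [10, 13, 9, 12, 16]

r = pearson_r(attitude, practice)
print(round(r, 2))  # positive r: higher attitude tends to go with more practice
```

On real survey data, such an r would be reported alongside a significance test, as in the study's correlation and regression analyses.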

Keywords: diabetic foot, foot self-monitoring attitude, foot self-monitoring knowledge, foot self-monitoring practice

Procedia PDF Downloads 181
633 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous positioning, navigation and timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization.
We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
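The S-DCGAN and t-SNE pipeline itself is not reproduced here, but the underlying fingerprinting step it augments can be sketched. The following minimal NumPy illustration (all names and data are hypothetical, not from the paper) shows how a constructed radio map is queried at localization time with weighted k-nearest-neighbor matching in signal space:

```python
import numpy as np

def wknn_localize(radio_map, coords, rssi_query, k=3):
    """Weighted k-nearest-neighbor lookup in a fingerprint radio map.

    radio_map : (N, M) RSSI vectors at N surveyed points from M access points
    coords    : (N, 2) x/y coordinates of the surveyed points
    rssi_query: (M,) RSSI vector measured by the device to be located
    """
    # distance is computed in signal (RSSI) space, not physical space
    d = np.linalg.norm(radio_map - rssi_query, axis=1)
    nearest = np.argsort(d)[:k]
    # inverse-distance weights favour the closest-matching fingerprints
    w = 1.0 / (d[nearest] + 1e-9)
    return (coords[nearest] * w[:, None]).sum(axis=0) / w.sum()
```

A GAN-based radio map construction method, as proposed in the abstract, would densify `radio_map` with generated fingerprints so that fewer points need to be physically surveyed.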

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 24
632 Test Rig Development for Up-to-Date Experimental Study of Multi-Stage Flash Distillation Process

Authors: Marek Vondra, Petr Bobák

Abstract:

Vacuum evaporation is a reliable and well-proven technology with a wide application range which is frequently used in the food, chemical or pharmaceutical industries. Recently, numerous remarkable studies have been carried out to investigate utilization of this technology in the area of wastewater treatment. One of the most successful applications of the vacuum evaporation principle is connected with seawater desalination. Since the 1950s, multi-stage flash distillation (MSF) has been the leading technology in this field and it is still irreplaceable in many respects, despite a rapid increase in cheaper reverse-osmosis-based installations in recent decades. MSF plants are conveniently operated in countries with fluctuating seawater quality and at locations where a sufficient amount of waste heat is available. Nowadays, most MSF research is connected with the utilization of alternative heat sources and with hybridization, i.e. the merging of different types of desalination technologies. Some studies are concerned with the basic principles of the static flash phenomenon, but only a few scientists have lately focused on the fundamentals of continuous multi-stage evaporation. Limited measurement possibilities at operating plants and insufficiently equipped experimental facilities may be the reasons. The aim of the presented study was to design, construct and test an up-to-date test rig with an advanced measurement system which will provide real-time monitoring of all the important operational parameters under various conditions. The whole system consists of a conventionally designed MSF unit with 8 evaporation chambers, a versatile heating circuit for different kinds of feed water (e.g.
seawater, waste water), sophisticated system for acquisition and real-time visualization of all the related quantities (temperature, pressure, flow rate, weight, conductivity, pH, water level, power input), access to a wide spectrum of operational media (salt, fresh and softened water, steam, natural gas, compressed air, electrical energy) and integrated transparent features which enable a direct visual control of selected physical mechanisms (water evaporation in chambers, water level right before brine and distillate pumps). Thanks to the adjustable process parameters, it is possible to operate the test unit at desired operational conditions. This allows researchers to carry out statistical design and analysis of experiments. Valuable results obtained in this manner could be further employed in simulations and process modeling. First experimental tests confirm correctness of the presented approach and promise interesting outputs in the future. The presented experimental apparatus enables flexible and efficient research of the whole MSF process.
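The stage-by-stage behaviour that such a rig measures can be approximated with a textbook energy balance: the sensible heat released as brine cools to each stage's saturation temperature supplies the latent heat of the vapor flashed. The sketch below is a first-order illustration only (not the rig's actual model); cp and h_fg are typical assumed values for brine and low-pressure steam.

```python
def flash_fraction(t_in, t_stage, cp=4.18, h_fg=2300.0):
    """Fraction of incoming brine flashed to vapor in one MSF stage.

    cp in kJ/(kg K), h_fg in kJ/kg; temperatures in deg C.
    """
    return max(0.0, cp * (t_in - t_stage) / h_fg)

def msf_distillate(t_top, t_bottom, n_stages, feed=1.0):
    """Total distillate from 'feed' kg of brine flashing through n equal stages."""
    dt = (t_top - t_bottom) / n_stages
    brine, distillate, t = feed, 0.0, t_top
    for _ in range(n_stages):
        x = flash_fraction(t, t - dt)
        distillate += brine * x   # vapor condensed as product
        brine -= brine * x        # remaining brine cascades to the next stage
        t -= dt
    return distillate
```

For an 8-chamber unit flashing from 90 to 40 degrees C, this balance predicts that roughly 9% of the feed is recovered as distillate, which is the right order of magnitude for a single-pass MSF cascade.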

Keywords: design of experiment, multi-stage flash distillation, test rig, vacuum evaporation

Procedia PDF Downloads 374
631 The Association Between CYP2C19 Gene Distribution and Medical Cannabis Treatment

Authors: Vichayada Laohapiboolkul

Abstract:

Introduction: As the legal use of cannabis becomes widely accepted throughout the world, medical cannabis has been explored as an alternative treatment for patients. Tetrahydrocannabinol (THC) and cannabidiol (CBD) are natural cannabinoids found in the Cannabis plant that have shown positive treatment effects for various diseases and symptoms, such as chronic pain, neuropathic pain, spasticity resulting from multiple sclerosis, cancer-associated pain, autism spectrum disorders (ASD), dementia, cannabis and opioid dependence, psychoses/schizophrenia, general social anxiety, posttraumatic stress disorder, anorexia nervosa, attention-deficit hyperactivity disorder, and Tourette's disorder. Regardless of all the medical benefits, THC, if not metabolized, can lead to mild to severe adverse drug reactions (ADRs). The enzyme CYP2C19 was found to be one of the metabolizers of THC. However, the variant allele CYP2C19*2 is associated with a poor-metabolizer phenotype, which could lead to higher-than-usual THC levels and, possibly, various ADRs. Objective: The aim of this study was to investigate the distribution of CYP2C19 genes, specifically CYP2C19*2, in Thai patients treated with medical cannabis, along with adverse drug reactions. Materials and Methods: Clinical data and EDTA whole blood for DNA extraction and genotyping were collected from patients for this study. CYP2C19*2 (681G>A, rs4244285) genotyping was conducted using real-time PCR (ABI, Foster City, CA, USA). Results: There were 42 medical cannabis-induced ADR cases and 18 medical cannabis tolerance controls included in this study. A total of 60 patients were observed, of whom 38 (63.3%) were female and 22 (36.7%) were male, with ages ranging from approximately 19 to 87 years. The most apparent ADRs for medical cannabis treatment were dry mouth/dry throat (76.7%), followed by tachycardia (70%), nausea (30%) and a few arrhythmias (10%).
Among the 27 cases genotyped, we found 18 CYP2C19*1/*1 genotypes (normal metabolizers, 66.7%), 8 CYP2C19*1/*2 genotypes (intermediate metabolizers, 29.6%) and 1 CYP2C19*2/*2 genotype (poor metabolizer, 3.7%). Meanwhile, the tolerance control group showed 63.6% CYP2C19*1/*1, 36.3% CYP2C19*1/*2 and 0% CYP2C19*2/*2, respectively. Conclusions: This is the first study to confirm the distribution of the CYP2C19*2 allele and the prevalence of poor-metabolizer genotypes in Thai patients who received medical cannabis for treatment. Thus, the CYP2C19 allele might serve as a pharmacogenetic marker for screening before initiating treatment.
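From genotype counts like those reported above, the frequency of the *2 allele follows by simple allele counting (each person carries two alleles). The helper below is an illustrative sketch, not part of the study's analysis:

```python
def allele2_frequency(n_11, n_12, n_22):
    """Frequency of the *2 allele from genotype counts.

    n_11, n_12, n_22: counts of *1/*1, *1/*2 and *2/*2 genotypes.
    Each heterozygote carries one *2 allele; each *2/*2 homozygote two.
    """
    total_alleles = 2 * (n_11 + n_12 + n_22)
    return (n_12 + 2 * n_22) / total_alleles
```

For the 27 ADR cases (18, 8 and 1 genotypes respectively), this gives a *2 allele frequency of 10/54, about 18.5%.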

Keywords: medical cannabis, adverse drug reactions, CYP2C19, tetrahydrocannabinol, poor metabolizer

Procedia PDF Downloads 85
630 Implementation of Inclusive Education in DepEd-Dasmarinas: Basis for Inclusion Program Framework

Authors: Manuela S. Tolentino, John G. Nepomuceno

Abstract:

The purpose of this investigation was to assess the implementation of inclusive education (IE) in 6 elementary and 5 secondary public schools in the City Schools Division of Dasmarinas. Participants in this study were 11 school heads, 73 teachers, 22 parents and 22 students (regular and with special needs) who were selected using purposive sampling. A 30-item questionnaire was used to gather data on the extent of the implementation of IE in the division, while focus group discussion (FGD) was used to gather insights on what facilitates and hinders the implementation of the IE program. This study assessed the following variables: school culture and environment, inclusive education policy implementation, and curriculum design and practices. Data were analyzed using frequency count, mean and ranking. Results revealed that participants had similar assessments of the extent of the implementation of IE. School heads rated school culture and environment highest in terms of implementation, while teachers and pupils chose curriculum design and practices. On the other hand, parents felt that inclusive education policies were implemented best. School culture and environment were given high ratings. Participants perceived that the IE program in the division makes everyone feel welcome regardless of age, sex, social status, and physical, mental and emotional state; that students with or without disability are equally valued; and that students help each other. However, some aspects of the IE program implementation were given low ratings, namely: partnership between staff, parents and caregivers; the school’s effort to minimize discriminatory practice; and stakeholders sharing the philosophy of inclusion. As regards education policy implementation, the indicators with the highest ranks were the school’s effort to admit students from the locality, especially students with special needs, and the implementation of the child protection policy and anti-bullying policy.
The results of the FGD revealed that both school heads and teachers were welcoming and ready to accommodate students with special needs. This can be linked to the increasing enrolment of SNE learners in the division. However, limitations in teachers’ knowledge of handling these learners, inadequate facilities and weak collaboration among stakeholders hinder the implementation of the IE program. Based on the findings, an inclusion program framework was developed for program enhancement. This will serve as the basis for improving the program’s efficiency, strengthening the relationships between stakeholders, and formulating solutions.

Keywords: inclusion, inclusive education, framework, special education

Procedia PDF Downloads 157
629 University Curriculum Policy Processes in Chile: A Case Study

Authors: Victoria C. Valdebenito

Abstract:

Located within the context of accelerating globalization in the 21st-century knowledge society, this paper focuses on one selected university in Chile at which radical curriculum policy changes have been taking place, diverging from the traditional undergraduate curriculum in Chile, as part of a larger investigation. Using a ‘policy trajectory’ framework and guided by an interpretivist approach to research, interview transcripts and institutional documents were analyzed in relation to the meso (university administration) and micro (academics) levels. Within the case study, participants from the university administration and academic levels were selected via both snowball and purposive sampling; they therefore had different levels of seniority, with some having participated actively in the curriculum reform processes. The documents and interview transcripts were analyzed to reveal the major themes emerging from the data. A further ‘bigger picture’ analysis guided by critical theory was then undertaken, involving interrogation of underlying ideologies and of how political and economic interests influence the cultural production of policy. The case-study university was selected because it represents a traditional, old university setting in the country, undergoing curriculum changes based on international trends such as the competency model and the liberal arts, and because it is representative of a particular socioeconomic sector of the country. Access to the university was gained through email contact. Qualitative research methods were used, namely interviews and analysis of institutional documents. In all, 18 people were interviewed; the number was determined by when the saturation criterion was met. Semi-structured interview schedules were based on the four research questions about influences, policy texts, policy enactment and longer-term outcomes.
Triangulation of information was used for the analysis. While there was no intention to generalize the specific findings of the case study, the results of the research were used as a focus for engagement with broader themes, often evident in global higher education policy developments. The research results were organized around major themes in three of the four contexts of the ‘policy trajectory’. Regarding the context of influences and the context of policy text production, themes relate to hegemony exercised by first world countries’ universities in the higher education field, its associated neoliberal ideology, with accountability and the discourse of continuous improvement, the local responses to those pressures, and the value of interdisciplinarity. Finally, regarding the context of policy practices and effects (enactment), themes emerged around the impacts of the curriculum changes on university staff, students, and resistance amongst academics. The research concluded with a few recommendations that potentially provide ‘food for thought’ beyond the localized settings of this study, as well as possibilities for further research.

Keywords: curriculum, global-local dynamics, higher education, policy, sociology of education

Procedia PDF Downloads 63
628 A Case Study Report on Acoustic Impact Assessment and Mitigation of the Hyprob Research Plant

Authors: D. Bianco, A. Sollazzo, M. Barbarino, G. Elia, A. Smoraldi, N. Favaloro

Abstract:

The activities described in the present paper have been conducted in the framework of the HYPROB-New Program, carried out by the Italian Aerospace Research Centre (CIRA) and promoted and funded by the Italian Ministry of University and Research (MIUR), in order to improve the national background on rocket engine systems for space applications. The Program has the strategic objective of improving national system and technology capabilities in the field of liquid rocket engines (LRE) for future space propulsion systems, with specific regard to LOX/LCH4 technology. The main purpose of the HYPROB program is to design and build a Propulsion Test Facility (HIMP) allowing test activities on liquid thrusters. The development of skills in liquid rocket propulsion can only be achieved through extensive test campaigns. Following its mission, CIRA has planned the development of new testing facilities and infrastructures for space propulsion, characterized by adequate sizes and instrumentation. The IMP test cell is devoted to testing articles representative of small combustion chambers, fed with oxygen and methane, both in liquid and gaseous phase. This article describes the activities that have been carried out for the evaluation of the acoustic impact and its consequent mitigation. The impact of the simulated acoustic disturbance has been evaluated, first, using an approximated method based on experimental data by Baumann and Coney, included in “Noise and Vibration Control Engineering” edited by Vér and Beranek. This methodology, used to evaluate the free-field radiation of a jet in an ideal acoustic medium, analyzes the jet noise in detail and assumes that all sources act at the same time. It considers as the principal radiation source the jet mixing noise, caused by the turbulent mixing of the jet gas and the ambient medium. Empirical models allowing a direct calculation of the Sound Pressure Level are commonly used for rocket noise simulation. The model named after K.
Eldred is probably one of the most widely exploited in this area. In this paper, an improvement of the Eldred standard model has been used for a detailed investigation of the acoustic impact of the Hyprob facility. This new formulation contains an explicit expression for the acoustic pressure of each equivalent noise source, in terms of amplitude and phase, allowing the investigation of source correlation effects and their propagation through wave equations. In order to enhance the evaluation of the facility's acoustic impact, including an assessment of the mitigation strategies to be set in place, a more advanced simulation campaign has been conducted using both an in-house code for noise propagation and scattering, and a commercial code for industrial noise environmental impact, CadnaA. The noise prediction obtained with the revised Eldred-based model has then been used to formulate an empirical/BEM (Boundary Element Method) hybrid approach allowing the evaluation of the barrier mitigation effect at the design stage. This approach has been compared with the analogous empirical/ray-acoustics approach, implemented within CadnaA using a customized definition of sources and directivity factors. The resulting impact evaluation study is reported here, along with the design-level barrier optimization for noise mitigation.
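Eldred-type empirical models estimate rocket noise from the jet's mechanical power. As a rough first-order illustration only (not CIRA's revised formulation), the sketch below applies the textbook recipe: a small assumed acoustic efficiency eta converts mechanical power to radiated sound power, and spherical spreading gives the overall sound pressure level at distance r. All numerical values here are illustrative assumptions.

```python
import math

def jet_oaspl(m_dot, u_e, r, eta=0.005, w_ref=1e-12):
    """First-order overall sound pressure level of a rocket exhaust jet.

    m_dot: exhaust mass flow rate (kg/s); u_e: exhaust velocity (m/s);
    r: distance from the source (m); eta: assumed acoustic efficiency;
    w_ref: reference sound power (W).
    """
    w_acoustic = eta * 0.5 * m_dot * u_e ** 2        # radiated sound power, W
    l_w = 10.0 * math.log10(w_acoustic / w_ref)      # sound power level, dB
    return l_w - 20.0 * math.log10(r) - 11.0         # free-field spherical spreading
```

Under this model, doubling the distance lowers the predicted level by about 6 dB, which is the expected free-field behaviour; barrier and scattering effects, as studied in the paper, require the BEM or ray-acoustic treatment on top of such a source model.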

Keywords: acoustic impact, industrial noise, mitigation, rocket noise

Procedia PDF Downloads 130
627 21st-Century Middlebrow Film: A Critical Examination of the Spectator Experience in Malayalam Film

Authors: Anupama A. P.

Abstract:

The Malayalam film industry, known as Mollywood, has a rich tradition of storytelling and cultural significance within Indian cinema. Middlebrow films emerged as a distinct, influential category, particularly in the 1980s, with directors like K.G. George, who engaged with female subjectivity and drew inspiration from the ‘women’s cinema’ of the 1950s and 1960s. In recent decades, particularly post-2010, the industry has transformed significantly with a new generation of filmmakers diverging from the melodrama and new wave of the past, incorporating advanced technology and modern content. This study examines the evolution and impact of Malayalam middlebrow cinema in the 21st century, focusing on post-2000 films and their influence on contemporary spectator experiences. These films appeal to a wide range of audiences without compromising on their artistic integrity, tackling social issues and personal dramas with thematic and narrative complexity. Historically, middlebrow films in Malayalam cinema have portrayed realism and addressed the socio-political climate of Kerala, blending realism with reflexivity and moving away from traditional sentimentality. This shift is evident in the new generation of Malayalam films, which present a global representation of characters and a modern treatment of individuals. To provide a comprehensive understanding of this evolution, the study analyzes a diverse selection of films such as Kerala Varma Pazhassi Raja (2009), Drishyam (2013), Maheshinte Prathikaaram (2016), Take Off (2017), Thondimuthalum Driksakshiyum (2017) and Virus (2019), illustrating the broad thematic range and innovative narrative techniques characteristic of this genre. These films exemplify how middlebrow cinema continues to evolve, adapting to changing societal contexts and audience expectations.
This research employs a theoretical methodology, drawing on cultural studies and audience reception theory, utilizing frameworks such as Bordwell’s narrative theory, Deleuze’s concept of deterritorialization, and Hall’s encoding/decoding model to analyze the changes in Malayalam middlebrow cinema and interpret the storytelling methods, spectator experience, and audience reception of these films. The findings indicate that Malayalam middlebrow cinema post-2010 offers a spectator experience that is both intellectually stimulating and broadly appealing. This study highlights the critical role of middlebrow cinema in reflecting and shaping societal values, making it a significant cultural artefact within the broader context of Indian and global cinema. By bridging entertainment with thought-provoking narratives, these films engage audiences and contribute to wider cultural discourse, making them pivotal in contemporary cinematic landscapes. To conclude, this study highlights the importance of Malayalam middlebrow cinema in influencing contemporary cinematic tastes. The nuanced and approachable narratives of post-2010 films are posited to assume an increasingly pivotal role in the future of Malayalam cinema. By providing a deeper understanding of Malayalam middlebrow cinema and its societal implications, this study enriches theoretical discourse, promotes regional cinema, and offers valuable insights into contemporary spectator experiences and the future trajectory of Malayalam cinema.

Keywords: Malayalam cinema, middlebrow cinema, spectator experience, audience reception, deterritorialization

Procedia PDF Downloads 13
626 Hydrogen Production from Auto-Thermal Reforming of Ethanol Catalyzed by Tri-Metallic Catalyst

Authors: Patrizia Frontera, Anastasia Macario, Sebastiano Candamano, Fortunato Crea, Pierluigi Antonucci

Abstract:

The increasing world energy demand makes biomass an attractive energy source today, given the goals of minimizing CO2 emissions and reducing global warming. Recently, COP-21, the international meeting on global climate change, defined the roadmap for sustainable worldwide development based on low-carbon fuels. Hydrogen is an energy vector able to substitute conventional fuels from petroleum. Ethanol for hydrogen production represents a valid alternative to fossil sources due to its low toxicity, low production costs, high biodegradability, high H2 content and renewability. Ethanol conversion to generate hydrogen by a combination of partial oxidation and steam reforming reactions is generally called auto-thermal reforming (ATR). The ATR process is advantageous due to its low energy requirements and the reduced formation of carbonaceous deposits. The catalyst plays a pivotal role in the ATR process, especially regarding process selectivity and carbonaceous deposit formation. Bimetallic or trimetallic catalysts, as well as catalysts with doped-promoter supports, may exhibit higher activity, selectivity and deactivation resistance than the corresponding monometallic ones. In this work, NiMoCo/GDC, NiMoCu/GDC and NiMoRe/GDC (where GDC is a Gadolinia-Doped Ceria support and the metal composition is 60:30:10 for all catalysts) have been prepared by the impregnation method. The support, Gadolinia 0.2 Doped Ceria 0.8, was impregnated with metal precursors solubilized in an aqueous ethanol solution (50%) at room temperature for 6 hours. After this, the catalysts were dried at 100°C for 8 hours and subsequently calcined at 600°C in order to obtain the metal oxides. Finally, active catalysts were obtained by a reduction procedure (H2 atmosphere at 500°C for 6 hours). All samples were characterized by different analytical techniques (XRD, SEM-EDX, XPS, CHNS, H2-TPR and Raman spectroscopy).
Catalytic experiments (auto-thermal reforming of ethanol) were carried out in the temperature range 500-800°C under atmospheric pressure, using a continuous fixed-bed microreactor. Effluent gases from the reactor were analyzed by two Varian CP4900 chromatographs with TCD detectors. The analytical investigation focused on the prevention of coke deposition, the metal sintering effect and sulfur poisoning. Hydrogen productivity, ethanol conversion and product distribution were measured and analyzed. At 600°C, all tri-metallic catalysts show their best performance: H2 + CO reaches almost 77 vol.% in the final gases. The NiMoCo/GDC catalyst shows the best selectivity to hydrogen with respect to the other tri-metallic catalysts (41 vol.% at 600°C). On the other hand, NiMoCu/GDC and NiMoRe/GDC demonstrated high sulfur poisoning resistance (up to 200 cc/min) with respect to the NiMoCo/GDC catalyst. The correlation between the catalytic results and the surface properties of the catalysts will be discussed.
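The trade-off that defines ATR can be seen from the idealized overall stoichiometry, C2H5OH + x O2 + (3 - 2x) H2O -> 2 CO2 + (6 - 2x) H2: more oxygen makes the process more exothermic but lowers the ideal hydrogen yield. The sketch below illustrates this balance only; real product distributions include CO, CH4 and coke, as the experiments above investigate.

```python
def atr_h2_yield(x_o2):
    """Ideal H2 moles per mole of ethanol for the overall ATR stoichiometry
    C2H5OH + x O2 + (3 - 2x) H2O -> 2 CO2 + (6 - 2x) H2, with 0 <= x <= 1.5.

    x = 0 corresponds to pure steam reforming (maximum H2 yield of 6),
    x = 1.5 to pure partial oxidation (yield of 3, no steam required).
    """
    if not 0.0 <= x_o2 <= 1.5:
        raise ValueError("oxygen-to-ethanol ratio out of range")
    return 6.0 - 2.0 * x_o2
```

The auto-thermal operating point is chosen between these limits so that the heat released by partial oxidation balances the heat absorbed by steam reforming.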

Keywords: catalysts, ceria, ethanol, gadolinia, hydrogen, Nickel

Procedia PDF Downloads 141
625 Re-Engineering Management Process in IRAN’s Smart Schools

Authors: M. R. Babaei, S. M. Hosseini, S. Rahmani, L. Moradi

Abstract:

Today, the quality and effectiveness of education and training systems are of major concern to stakeholders and decision-makers in every country. In Iran this concern is doubled for numerous reasons; over the past decade, governments have hardly even covered the running costs of education. ICT is claimed to have the power to change the structure of training programs, reduce costs, increase quality, make education systems and their products consistent with the needs of the community, and put education into practice. One of the areas that the introduction of information technology has fundamentally changed is education. The aim of this research is the re-engineering of the management process in schools; field studies were used to collect data in the form of interviews and a questionnaire survey. The statistical population of this research comprised Iran's smart schools within the education system, and sampling was purposive. The data collection tool was a questionnaire composed of two parts, consisting of 36 questions, each addressing one of the factors affecting the management of smart schools. Each question itself consists of two parts. The first part designates the factor's position in the management process, i.e., the management function it belongs to (planning, organizing, leading, controlling), according to Dabryn's classification; the second part examines how the factor affects the management of smart schools, classified using a Likert scale. The validity of the questionnaire was approved by a group of experts and prominent university professors in the fields of information technology, management and re-engineering, and its reliability was evaluated and confirmed using Cronbach's alpha.
To analyse the data, descriptive statistics (frequency tables, mean, median, mode) were used for the Likert-scale ratings of the contributing factors, and inferential statistics, including analysis of variance, nonparametric tests and the Friedman test, were used to evaluate the assumptions. The research conclusions show that the identified factors influence the re-engineering of the management process in smart schools, and thereby school performance.
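The Friedman test mentioned above ranks each respondent's Likert ratings across the factors and checks whether the factors receive systematically different ranks. A minimal sketch of the test statistic (illustrative only, assuming no tied ratings within a respondent's row; the study's actual analysis is not reproduced here):

```python
import numpy as np

def friedman_statistic(data):
    """Friedman chi-square statistic for an n x k table
    (n respondents each rating k factors), assuming no ties within a row.
    """
    n, k = data.shape
    # rank each row: the smallest value gets rank 1
    ranks = data.argsort(axis=1).argsort(axis=1) + 1
    rank_sums = ranks.sum(axis=0)
    return 12.0 / (n * k * (k + 1)) * float((rank_sums ** 2).sum()) - 3.0 * n * (k + 1)
```

The statistic is compared against a chi-square distribution with k - 1 degrees of freedom; large values indicate that respondents rank the factors consistently rather than randomly.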

Keywords: re-engineering, management process, smart school, Iran's school

Procedia PDF Downloads 231