Search results for: continuous dynamic recrystallization
707 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit
Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek
Abstract:
In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and considered a key challenge in the development of latent heat storage systems. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach for calculating the heat transfer rate in latent thermal energy storages with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of cost and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the process of solidification is the so-called temperature-based approach. In order to minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers. However, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived using energy conservation equations, the boundary conditions at the inner tube wall are dynamically updated for each time step and segment. The model allows a prediction of the thermal performance of latent thermal energy storage systems using complex HEX geometries with considerably low computational effort. Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage
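As a rough illustration of a finite-volume treatment of PCM solidification, the following Python sketch advances a generic 1D enthalpy formulation in a discretized segment next to a cooled tube wall. Note that this is a simple enthalpy scheme, not the quasi-stationary temperature-based scheme of the paper, and all material properties, grid sizes and the wall temperature are assumed placeholder values rather than the study's PCM or HEX parameters.

```python
import numpy as np

# Minimal 1D enthalpy-method sketch of PCM solidification next to a cooled tube wall.
# rho, cp, k, latent heat L and melt temperature T_m are assumed placeholders.
rho, cp, k, L, T_m = 800.0, 2000.0, 0.2, 2.0e5, 300.0
n, dx, dt = 50, 1e-3, 0.05          # cells, cell size (m), time step (s)
T = np.full(n, T_m + 5.0)           # PCM initially liquid, slightly above T_m
H = rho * (cp * T + L)              # volumetric enthalpy (J/m^3)
T_wall = T_m - 20.0                 # boundary condition at the inner tube wall

for _ in range(2000):
    T[0] = T_wall                                   # Dirichlet condition, updated every step
    flux = -k * np.diff(T) / dx                     # conductive flux between neighbouring cells
    H[1:-1] -= dt * (flux[1:] - flux[:-1]) / dx     # finite-volume energy balance, interior cells
    h = H / rho                                     # recover temperature from enthalpy:
    T = np.where(h < cp * T_m, h / cp,              # fully solid (sensible heat only)
        np.where(h > cp * T_m + L, (h - L) / cp,    # fully liquid
                 T_m))                              # on the latent-heat plateau

liquid_fraction = np.clip((H / rho - cp * T_m) / L, 0.0, 1.0)
print("liquid fraction per cell:", liquid_fraction.round(2))
```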
Procedia PDF Downloads 268
706 Data Quality on Regular Childhood Immunization Programme at Degehabur District: Somali Region, Ethiopia
Authors: Eyob Seife
Abstract:
Immunization is a life-saving intervention which prevents needless suffering through sickness, disability, and death. Emphasis on data quality and use will become even stronger with the development of the Immunization Agenda 2030 (IA2030). Quality of data is a key factor in generating reliable health information that enables monitoring progress, financial planning, vaccine forecasting capacities, and making decisions for continuous improvement of the national immunization program. However, ensuring data of sufficient quality and promoting an information-use culture at the point of collection remains critical and challenging, especially in hard-to-reach and pastoralist areas. The Degehabur district was selected based on the hypothesis that 'there is no difference in reported and recounted immunization data consistency.' Data quality depends on different factors, among which organizational, behavioral, technical, and contextual factors are the most commonly mentioned. A cross-sectional quantitative study was conducted in September 2022 in the Degehabur district. The study used the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers, and reporting documents were reviewed at 5 health facilities (2 health centers and 3 health posts) of primary health care units for one fiscal year (12 months) to determine the accuracy ratio. The data were collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers, and the district health office. A quality index (QI) was assessed, and accuracy ratios were computed for the first and third doses of the pentavalent vaccine, fully immunized (FI) children, and the first dose of the measles-containing vaccine (MCV). In this study, facility-level results showed that both over-reporting and under-reporting were observed at health posts when computing the accuracy ratio of tally sheets to the health post reports found at health centers for almost all verified antigens, where pentavalent 1 was 88.3%, 60.4%, and 125.6% for health posts A, B, and C, respectively. For the first dose of measles-containing vaccine (MCV), similarly, the accuracy ratio was found to be 126.6%, 42.6%, and 140.9% for health posts A, B, and C, respectively. The accuracy ratio for fully immunized children also showed 0% for health posts A and B and 100% for health post C. A relatively better accuracy ratio was seen at health centers, where the first pentavalent dose was 97.4% and 103.3% for health centers A and B, while the first dose of measles-containing vaccine (MCV) was 89.2% and 100.9% for health centers A and B, respectively. The quality index (QI) of all facilities also showed results between a maximum of 33.33% and a minimum of 0%. Most of the verified immunization data accuracy ratios were found to be relatively better at the health center level. However, the quality of the monitoring system is poor at all levels, in addition to poor data accuracy at all health posts. Attention should therefore be given to improving the capacity of staff and the quality of monitoring system components, namely recording, reporting, archiving, data analysis, and using information for decision-making at all levels, especially in pastoralist areas, in addition to improving data quality at the root (health post) level. Keywords: accuracy ratio, Degehabur District, regular childhood immunization program, quality of monitoring system, Somali Region-Ethiopia
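A minimal sketch of the accuracy-ratio calculation underlying the DQS verification is shown below, assuming the usual convention of recounted (tally sheet) doses divided by reported doses; the counts are invented placeholders, not the study's figures.

```python
# Accuracy ratio = doses recounted on tally sheets / doses reported upward, in percent.
# Values above 100% indicate under-reporting, values below 100% over-reporting.
recounted = {"penta1": 265, "penta3": 240, "mcv1": 190}   # placeholder tally-sheet recounts
reported  = {"penta1": 300, "penta3": 200, "mcv1": 150}   # placeholder figures in the next-level report

for antigen in recounted:
    ratio = 100.0 * recounted[antigen] / reported[antigen]
    label = "under-reporting" if ratio > 100 else "over-reporting" if ratio < 100 else "consistent"
    print(f"{antigen}: accuracy ratio = {ratio:.1f}% ({label})")
```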
Procedia PDF Downloads 107
705 Rheometer Enabled Study of Tissue/biomaterial Frequency-Dependent Properties
Authors: Polina Prokopovich
Abstract:
Despite the well-established dependence of cartilage mechanical properties on the frequency of the applied load, most research in the field is carried out in either load-free or constant load conditions because of the complexity of the equipment required for the determination of time-dependent properties. These simpler analyses provide a limited representation of cartilage properties, thus greatly reducing the impact of the information gathered and hindering the understanding of the mechanisms involved in the replacement, development and pathology of this tissue. More complex techniques could represent better investigative methods, but their uptake in cartilage research is limited by the highly specialised training required and the cost of the equipment. There is, therefore, a clear need for alternative experimental approaches to cartilage testing to be deployed in research and clinical settings using more user-friendly and financially accessible devices. Frequency-dependent material properties can be determined through rheometry, which is easy to use and requires only a relatively inexpensive device; we present how a commercial rheometer can be adapted to determine the viscoelastic properties of articular cartilage. Frequency-sweep tests were run at various applied normal loads on immature, mature and trypsinised (as a model of osteoarthritis) cartilage samples to determine the dynamic shear moduli (G*, G′, G″) of the tissues. Moduli increased with increasing frequency and applied load; mature cartilage had generally the highest moduli and GAG-depleted samples the lowest. Hydraulic permeability (KH) was estimated from the rheological data and decreased with applied load; GAG-depleted cartilage exhibited higher hydraulic permeability than either immature or mature tissues. The rheometer-based methodology developed was validated by the close comparison of the rheometer-obtained cartilage characteristics (G*, G′, G″, KH) with results obtained with more complex testing techniques available in the literature. Rheometry is relatively simple, does not require highly capital-intensive machinery, and staff training is more accessible; thus, the use of a rheometer would represent a cost-effective approach for the determination of frequency-dependent properties of cartilage, yielding more comprehensive and impactful results for both healthcare professionals and R&D. Keywords: tissue, rheometer, biomaterial, cartilage
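As a brief illustration of the quantities reported, the sketch below recovers |G*|, G′ and G″ from the stress and strain amplitudes and the phase lag of an oscillatory frequency-sweep test; the numbers are assumed placeholders rather than measured cartilage data.

```python
import numpy as np

# Dynamic shear moduli from an oscillatory test: |G*| = tau_0 / gamma_0,
# G' = |G*| cos(delta), G'' = |G*| sin(delta). All inputs are placeholder values.
gamma_0 = 0.01             # strain amplitude (-)
tau_0 = 1.5e4              # stress amplitude (Pa), assumed
delta = np.radians(15.0)   # phase lag between stress and strain (rad), assumed

G_star = tau_0 / gamma_0
G_storage = G_star * np.cos(delta)
G_loss = G_star * np.sin(delta)
print(f"|G*| = {G_star/1e6:.2f} MPa, G' = {G_storage/1e6:.2f} MPa, G'' = {G_loss/1e6:.2f} MPa")
```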
Procedia PDF Downloads 81
704 Impact of Pharmacist-Led Care on Glycaemic Control in Patients with Type 2 Diabetes: A Randomised-Controlled Trial
Authors: Emmanuel A. David, Rebecca O. Soremekun, Roseline I. Aderemi-Williams
Abstract:
Background: The complexities involved in the management of diabetes mellitus require multi-dimensional, multi-professional collaborative and continuous care by health care providers and substantial self-care by the patients in order to achieve desired treatment outcomes. The effect of pharmacists' care in the management of diabetes in resource-endowed nations is well documented in the literature, but randomised-controlled assessment of the impact of pharmacist-led care among patients with diabetes in resource-limited settings like Nigeria and other sub-Saharan African countries is scarce. Objective: To evaluate the impact of pharmacist-led care on glycaemic control in patients with uncontrolled type 2 diabetes, using a randomised-controlled study design. Methods: This study employed a prospective randomised controlled design to assess the impact of pharmacist-led care on the glycaemic control of 108 poorly controlled type 2 diabetic patients. A total of 200 clinically diagnosed type 2 diabetes patients were purposively selected using fasting blood glucose ≥ 7 mmol/L and tested for long-term glucose control using the glycated haemoglobin measure. One hundred and eight (108) patients with glycated haemoglobin ≥ 7% were recruited for the study and assigned unique identification numbers. They were further randomly allocated to intervention and usual care groups using computer-generated random numbers, with each group containing 54 subjects. Patients in the intervention group received a pharmacist-structured intervention, including education, periodic phone calls, adherence counselling, referral and 6 months of follow-up, while patients in the usual care group only kept clinic appointments with their physicians. Data collected at baseline and six months included socio-demographic characteristics, fasting blood glucose, glycated haemoglobin, blood pressure, and lipid profile. With an intention-to-treat analysis, the Mann-Whitney U test was used to compare the median change from baseline in the primary outcome (glycated haemoglobin) and secondary outcome measures, the effect size was computed, and the proportion of patients that reached the target laboratory parameters was compared in both arms. Results: All enrolled participants (108) completed the study, 54 in each arm. Mean age was 51±11.75 and the majority were female (68.5%). Intervention patients had a significant reduction in glycated haemoglobin (-0.75%; P<0.001; η2 = 0.144), with a greater proportion attaining the target laboratory parameter after 6 months of care compared to the usual care group (glycated haemoglobin: 42.6% vs 20.8%; P=0.02). Furthermore, patients who received pharmacist-led care were about 3 times more likely to have better glucose control (AOR 2.718, 95% CI: 1.143-6.461) compared to the usual care group. Conclusion: Pharmacist-led care significantly improved glucose control in patients with uncontrolled type 2 diabetes mellitus and should be integrated into the routine management of diabetes patients, especially in resource-limited settings. Keywords: glycaemic control, pharmacist-led care, randomised-controlled trial, type 2 diabetes mellitus
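The primary-outcome comparison can be reproduced in outline with the sketch below, which contrasts the change in glycated haemoglobin between arms using the Mann-Whitney U test; the simulated values (including the -0.75% mean change) are placeholders standing in for the trial data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Simulated HbA1c change from baseline (percentage points) for the two arms.
rng = np.random.default_rng(0)
change_intervention = rng.normal(-0.75, 0.6, 54)   # pharmacist-led care arm (assumed)
change_usual_care = rng.normal(-0.10, 0.6, 54)     # usual care arm (assumed)

u_stat, p_value = mannwhitneyu(change_intervention, change_usual_care, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.4g}")
```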
Procedia PDF Downloads 120
703 Drug Design Modelling and Molecular Virtual Simulation of an Optimized BSA-Based Nanoparticle Formulation Loaded with Di-Berberine Sulfate Acid Salt
Authors: Eman M. Sarhan, Doaa A. Ghareeb, Gabriella Ortore, Amr A. Amara, Mohamed M. El-Sayed
Abstract:
Drug salting and nanoparticle-based drug delivery formulations are considered to be an effective means for rendering the hydrophobic drugs’ nano-scale dispersion in aqueous media, and thus circumventing the pitfalls of their poor solubility as well as enhancing their membrane permeability. The current study aims to increase the bioavailability of quaternary ammonium berberine through acid salting and biodegradable bovine serum albumin (BSA)-based nanoparticulate drug formulation. Berberine hydroxide (BBR-OH) that was chemically synthesized by alkalization of the commercially available berberine hydrochloride (BBR-HCl) was then acidified to get Di-berberine sulfate (BBR)₂SO₄. The purified crystals were spectrally characterized. The desolvation technique was optimized for the preparation of size-controlled BSA-BBR-HCl, BSA-BBR-OH, and BSA-(BBR)₂SO₄ nanoparticles. Particle size, zeta potential, drug release, encapsulation efficiency, Fourier transform infrared spectroscopy (FTIR), tandem MS-MS spectroscopy, energy-dispersive X-ray spectroscopy (EDX), scanning and transmitting electron microscopic examination (SEM, TEM), in vitro bioactivity, and in silico drug-polymer interaction were determined. BSA (PDB ID; 4OR0) protonation state at different pH values was predicted using Amber12 molecular dynamic simulation. Then blind docking was performed using Lamarkian genetic algorithm (LGA) through AutoDock4.2 software. Results proved the purity and the size-controlled synthesis of berberine-BSA-nanoparticles. The possible binding poses, hydrophobic and hydrophilic interactions of berberine on BSA at different pH values were predicted. Antioxidant, anti-hemolytic, and cell differentiated ability of tested drugs and their nano-formulations were evaluated. Thus, drug salting and the potentially effective albumin berberine nanoparticle formulations can be successfully developed using a well-optimized desolvation technique and exhibiting better in vitro cellular bioavailability.Keywords: berberine, BSA, BBR-OH, BBR-HCl, BSA-BBR-HCl, BSA-BBR-OH, (BBR)₂SO₄, BSA-(BBR)₂SO₄, FTIR, AutoDock4.2 Software, Lamarkian genetic algorithm, SEM, TEM, EDX
Procedia PDF Downloads 174
702 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories, on the basis of a training set of free-text data that contains documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second one used unigrams (single words), the third one used bigrams (two-word sequences) and the last experiment used a combination of unigrams and bigrams to extract features for classification. To test the effectiveness of the extracted feature set for the four experiments, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset together with 5-fold cross-validation were used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram-based feature extraction. These results were confirmed by using the remaining 80% of the dataset, where SW performed the best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by the bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%). Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
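For readers unfamiliar with the algorithm, the following Python sketch implements the core Smith-Waterman dynamic-programming recurrence over word tokens, which is the kind of local-alignment score the feature extraction relies on; the scoring parameters and the toy sentences are assumptions made for illustration.

```python
import numpy as np

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between two token sequences (Smith-Waterman)."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0.0,                  # local alignment never goes negative
                          H[i - 1, j - 1] + s,  # align a[i-1] with b[j-1]
                          H[i - 1, j] + gap,    # gap in b
                          H[i, j - 1] + gap)    # gap in a
    return H.max()

document = "the patient is obese with type two diabetes".split()
feature = "obese with type two".split()
print(smith_waterman_score(document, feature))  # high score -> feature locally present in the document
```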
Procedia PDF Downloads 297
701 Investigating the Neural Heterogeneity of Developmental Dyscalculia
Authors: Fengjuan Wang, Azilawati Jamaludin
Abstract:
Developmental Dyscalculia (DD) is defined as a particular learning difficulty with continuous challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, instead of monolithic, learning disorder with not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obfuscate critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria of DD and scored at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III) (both subtest scores were 90 and below). The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network under the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions across areas) to reveal the architectural organization of the nervous network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the 45 TD children's average data. Results showed that three out of the nine DD children displayed significant deviation from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. Rising above this, the present study was able to distill differences in the architectural organizations and brain activation of DD vis-à-vis TD children using fNIRS and single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is noted that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current "brain norm" established for the 45 children is tentative, the results from this study provide insights not only for future work on a "developmental brain norm" with reliable brain indicators but also into the viability of the single-case methodology, which could be used to detect differential brain indicators of DD children for early detection and interventions. Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity
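A rough sketch of the resting-state graph-theory step is given below: an fNIRS channel-by-channel correlation matrix is thresholded into an adjacency matrix and a nodal (local) efficiency is computed per channel with NetworkX. The channel count, threshold, random signals and the specific efficiency definition are assumptions made for illustration, not the study's montage or pipeline.

```python
import numpy as np
import networkx as nx

# Placeholder fNIRS time series: 8 channels x 500 samples.
rng = np.random.default_rng(1)
signals = rng.standard_normal((8, 500))

corr = np.corrcoef(signals)                    # channel-wise functional connectivity
adj = (np.abs(corr) > 0.3).astype(int)         # assumed binarisation threshold
np.fill_diagonal(adj, 0)                       # no self-connections

G = nx.from_numpy_array(adj)
# Nodal (local) efficiency: global efficiency of the subgraph formed by each node's neighbours.
nodal_efficiency = {node: nx.global_efficiency(G.subgraph(list(G[node]))) for node in G.nodes}
print(nodal_efficiency)
```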
Procedia PDF Downloads 53
700 Investigation of Turbulent Flow in a Bubble Column Photobioreactor and Consequent Effects on Microalgae Cultivation Using Computational Fluid Dynamic Simulation
Authors: Geetanjali Yadav, Arpit Mishra, Parthsarathi Ghosh, Ramkrishna Sen
Abstract:
The world is facing problems of increasing global CO2 emissions, climate change and fuel crisis. Therefore, several renewable and sustainable energy alternatives should be investigated to replace non-renewable fuels in the future. Algae present a versatile feedstock for the production of a variety of fuels (biodiesel, bioethanol, bio-hydrogen etc.) and high-value compounds for food, fodder, cosmetics and pharmaceuticals. Microalgae are simple microorganisms that require water, light, CO2 and nutrients for growth by the process of photosynthesis and can grow in extreme environments, utilizing waste gas (flue gas) and wastewaters. Mixing, however, is a crucial parameter within the culture system for the uniform distribution of light, nutrients and gaseous exchange, in addition to preventing settling/sedimentation, the creation of dark zones etc. The overarching goal of the present study is to improve photobioreactor (PBR) design for enhancing dissolution of CO2 from ambient air (0.039%, v/v), pure CO2 and coal-fired flue gas (10 ± 2%) into microalgal PBRs. Computational fluid dynamics (CFD), a state-of-the-art technique, has been used to solve partial differential equations with turbulence closure which represent the dynamics of fluid in a photobioreactor. In this paper, the hydrodynamic performance of the PBR has been characterized and compared with that of the conventional bubble column PBR using CFD. Parameters such as flow rate (Q), mean velocity (u), and mean turbulent kinetic energy (TKE) were characterized for each experiment that was tested across different aeration schemes. The results showed that the modified PBR design had superior liquid circulation properties and gas-liquid transfer that resulted in the creation of a uniform environment inside the PBR as compared to the conventional bubble column PBR. The CFD technique has been shown to be promising for successful design and paves the path for future research in order to develop PBRs which can be commercially available for scaled-up microalgal production. Keywords: computational fluid dynamics, microalgae, bubble column photobioreactor, flue gas, simulation
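To indicate how the reported hydrodynamic indicators are obtained, the sketch below estimates the mean velocity magnitude and the turbulent kinetic energy k = ½(u′² + v′² + w′²) from velocity samples at a monitoring point; the synthetic velocity series are placeholders, not CFD output from the photobioreactor model.

```python
import numpy as np

# Placeholder velocity samples (m/s) at one monitoring point in the column.
rng = np.random.default_rng(7)
u = rng.normal(0.20, 0.05, 2000)
v = rng.normal(0.02, 0.05, 2000)
w = rng.normal(0.01, 0.05, 2000)

mean_speed = np.sqrt(u.mean()**2 + v.mean()**2 + w.mean()**2)
tke = 0.5 * (u.var() + v.var() + w.var())   # k = 0.5 * sum of fluctuation variances
print(f"mean velocity = {mean_speed:.3f} m/s, TKE = {tke:.5f} m^2/s^2")
```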
Procedia PDF Downloads 231
699 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder
Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada
Abstract:
From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of the speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without intention or consciousness of doing it, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together in the stream (e.g., 'tokibu', 'tipolu'), with no pauses between each other (e.g., 'tokibutipolugopilatokibu') and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate between triplets previously presented ('tokibu') and new sequences never presented together during exposure ('kipopi'), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC task to test SL, it has come under increasing criticism as it is an offline post-learning task that only assesses the result of the learning that had occurred during the previous exposure phase and that might be affected by other factors beyond the computation of regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP). One solution to overcome these limitations is to assess SL as exposure to the stream unfolds using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and chronological-age typical language development (TLD) controls, who were exposed to an auditory stream containing eight three-syllable nonsense words, four of which presented high TPs and the other four low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words' predictability. Moreover, to ascertain whether previous knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task, firstly, under implicit, and, subsequently, under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children from the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, firstly for the low-TP and, subsequently, for the high-TP 'words'. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties. Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation
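The transitional-probability statistic at the heart of the task can be illustrated with the short sketch below, which builds a syllable stream from four nonsense words and computes TP(x→y) = count(xy)/count(x); the fourth word, the stream length and the randomisation are assumptions made for the example.

```python
import random
from collections import Counter

random.seed(0)
# 'tokibu', 'tipolu' and 'gopila' appear in the abstract; 'bidaku' is a placeholder fourth word.
words = ["tokibu", "tipolu", "gopila", "bidaku"]
stream_words = [random.choice(words) for _ in range(400)]            # randomised exposure stream
syllables = [w[i:i + 2] for w in stream_words for i in range(0, 6, 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
syllable_counts = Counter(syllables[:-1])
tp = {pair: n / syllable_counts[pair[0]] for pair, n in pair_counts.items()}

print(tp[("to", "ki")])   # within-word transition: TP = 1.0
print(tp[("bu", "ti")])   # across-word transition: TP is roughly 0.25
```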
Procedia PDF Downloads 188
698 Urban Meetings: Graphic Analysis of the Public Space in a Cultural Building from São Paulo
Authors: Thalita Carvalho Martins de Castro, Núbia Bernardi
Abstract:
Currently, studies evidence that our cities are portraits of social relations. In the midst of so many segregations, cultural buildings emerge as a place to assemble collective activities and expressions. Through theater, exhibitions, educational workshops, libraries, the architecture approaches human relations and seeks to propose meeting places. The purpose of this research is to deepen the discussions about the contributions of cultural buildings in the use of the spaces of the contemporary city, based on the data and measure collected in the master's research in progress. The graphic analysis of the insertion of contemporary cultural buildings seeks to highlight the social use of space. The urban insertions of contemporary cultural buildings in the city of São Paulo (Brazil) will be analyzed to understand the relations between the architectural form and its audience. The collected data describe a dynamic of flows and the permanence in the use of these spaces, indicating the contribution of the cultural buildings, associated with artistic production, in the dynamics of urban spaces and the social modifications of their milieu. Among the case studies, the research in development is based on the registration and graphic analysis of the Praça das Artes (2012) building located in the historical central region of the city, which after a long period of great degradation undergoes a current redevelopment. The choice of this building was based on four parameters, both on the architectural scale and on the urban scale: urban insertion, local impact, cultural production and a mix of uses. For the analysis will be applied two methodologies of graphic analysis, one with diagrams accompanied by texts and another with the active analysis for open space projects using complementary graphic methodologies, with maps, plants, info-graphics, perspectives, time-lapse videos and analytical tables. This research aims to reinforce the debates between the methodologies of form-use spaces and visual synthesis applied in cultural buildings, in order that new projects can structure public spaces as catalysts for social use, generating improvements in the daily life of its users and in the cities where they are inserted.Keywords: cultural buildings, design methodologies, graphic analysis, public spaces
Procedia PDF Downloads 306
697 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model
Authors: Tanu Khanuja, Harikrishnan N. Unni
Abstract:
The head is the most vulnerable part of the human body, and injuries to it may be severe and life-threatening. As the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be very helpful for understanding the injury mechanism. The majority of the physical damage to living tissues is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head with major segments like the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid), and skull from patient-specific MRI (magnetic resonance imaging). The semi-automatic segmentation of the head is performed using AMIRA software to extract the finer grooves of the brain. Maintaining accuracy requires a high number of mesh elements, which in turn leads to high computational time; therefore, mesh optimization has also been performed using tetrahedral elements. In addition, model validation against the experimental literature is performed as well. Hard tissue like the skull is modeled as elastic, whereas soft tissue like the brain is modeled with a viscoelastic Prony series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back, and temporal sites of the head. Yield stress (based on the von Mises stress criterion for tissues) and the intracranial pressure distribution due to impact on different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissues is discussed in detail. This paper finds that a back impact is more injurious to the head overall than impacts at the other sites. The present work would be helpful to understand the injury mechanism of traumatic brain injury more effectively. Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress
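A small sketch of the viscoelastic material description is given below: the Prony-series shear relaxation modulus G(t) = G_inf + sum_i G_i exp(-t/tau_i) evaluated at a few time points. The coefficients are assumed placeholder values rather than the brain-tissue parameters used in the model.

```python
import numpy as np

# Prony-series shear relaxation modulus: G(t) = G_inf + sum_i G_i * exp(-t / tau_i).
G_inf = 1.2e3                      # long-term shear modulus (Pa), assumed
G_i = np.array([5.0e3, 2.0e3])     # Prony moduli (Pa), assumed
tau_i = np.array([0.01, 0.1])      # relaxation times (s), assumed

t = np.linspace(0.0, 0.5, 6)       # evaluation times (s)
G_t = G_inf + np.sum(G_i[:, None] * np.exp(-t[None, :] / tau_i[:, None]), axis=0)
for ti, gi in zip(t, G_t):
    print(f"t = {ti:.2f} s -> G(t) = {gi:.1f} Pa")
```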
Procedia PDF Downloads 161
696 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many objects surrounding them; (2) reflection within the building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot be reached in an indoor environment, and GNSS doesn't have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we presented a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employed a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
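The positioning metrics quoted (mean error, share of errors under a threshold) can be computed as in the sketch below from predicted versus ground-truth coordinates; the coordinates here are randomly generated placeholders, not output of the proposed GAN-based system.

```python
import numpy as np

# Placeholder ground-truth and predicted 2D positions (metres).
rng = np.random.default_rng(42)
true_xy = rng.uniform(0.0, 50.0, size=(1000, 2))
pred_xy = true_xy + rng.normal(0.0, 0.3, size=(1000, 2))   # assumed ~0.3 m prediction noise

errors = np.linalg.norm(pred_xy - true_xy, axis=1)
print(f"mean positioning error = {errors.mean():.2f} m")
print(f"90th-percentile error  = {np.percentile(errors, 90):.2f} m")
print(f"share of errors < 0.82 m = {100 * np.mean(errors < 0.82):.1f}%")
```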
Procedia PDF Downloads 42
695 Influence of Mandrel’s Surface on the Properties of Joints Produced by Magnetic Pulse Welding
Authors: Ines Oliveira, Ana Reis
Abstract:
Magnetic Pulse Welding (MPW) is a cold solid-state welding process, accomplished by the electromagnetically driven, high-speed and low-angle impact between two metallic surfaces. It has the same working principle of Explosive Welding (EXW), i.e. is based on the collision of two parts at high impact speed, in this case, propelled by electromagnetic force. Under proper conditions, i.e., flyer velocity and collision point angle, a permanent metallurgical bond can be achieved between widely dissimilar metals. MPW has been considered a promising alternative to the conventional welding processes and advantageous when compared to other impact processes. Nevertheless, MPW current applications are mostly academic. Despite the existing knowledge, the lack of consensus regarding several aspects of the process calls for further investigation. As a result, the mechanical resistance, morphology and structure of the weld interface in MPW of Al/Cu dissimilar pair were investigated. The effect of process parameters, namely gap, standoff distance and energy, were studied. It was shown that welding only takes place if the process parameters are within an optimal range. Additionally, the formation of intermetallic phases cannot be completely avoided in the weld of Al/Cu dissimilar pair by MPW. Depending on the process parameters, the intermetallic compounds can appear as continuous layer or small pockets. The thickness and the composition of the intermetallic layer depend on the processing parameters. Different intermetallic phases can be identified, meaning that different temperature-time regimes can occur during the process. It is also found that lower pulse energies are preferred. The relationship between energy increase and melting is possibly related to multiple sources of heating. Higher values of pulse energy are associated with higher induced currents in the part, meaning that more Joule heating will be generated. In addition, more energy means higher flyer velocity, the air existing in the gap between the parts to be welded is expelled, and this aerodynamic drag (fluid friction) is proportional to the square of the velocity, further contributing to the generation of heat. As the kinetic energy also increases with the square of velocity, the dissipation of this energy through plastic work and jet generation will also contribute to an increase in temperature. To reduce intermetallic phases, porosity, and melt pockets, pulse energy should be minimized. The bond formation is affected not only by the gap, standoff distance, and energy but also by the mandrel’s surface conditions. No correlation was clearly identified between surface roughness/scratch orientation and joint strength. Nevertheless, the aspect of the interface (thickness of the intermetallic layer, porosity, presence of macro/microcracks) is clearly affected by the surface topology. Welding was not established on oil contaminated surfaces, meaning that the jet action is not enough to completely clean the surface.Keywords: bonding mechanisms, impact welding, intermetallic compounds, magnetic pulse welding, wave formation
Procedia PDF Downloads 211
694 Unfolding Architectural Assemblages: Mapping Contemporary Spatial Objects' Affective Capacity
Authors: Panagiotis Roupas, Yota Passia
Abstract:
This paper aims at establishing an index of design mechanisms - immanent in spatial objects - based on the affective capacity of their material formations. While spatial objects (design objects, buildings, urban configurations, etc.) are regarded as systems composed of interacting parts, within the premises of assemblage theory, their ability to affect and to be affected has not yet been mapped or sufficiently explored. This ability lies in excess, a latent potentiality they contain, not transcendental but immanent in their pre-subjective aesthetic power. As spatial structures are theorized as assemblages - composed of heterogeneous elements that enter into relations with one another - and since all assemblages are parts of larger assemblages, their components' ability to engage is contingent. We thus seek to unfold the mechanisms inherent in spatial objects that allow the constituent parts of design assemblages to perpetually enter into new assemblages. To map architectural assemblages' affective ability, spatial objects are analyzed along two axes. The first axis focuses on the relations that the assemblage's material and expressive components develop in order to enter the assemblages. Material components refer to those material elements that an assemblage requires in order to exist, while expressive components include non-linguistic elements (sense impressions) as well as linguistic ones (beliefs). The second axis records the processes known as a-signifying signs, or a-signs, which are the triggering mechanisms able to territorialize or deterritorialize, stabilize or destabilize the assemblage and thus allow it to assemble anew. As a-signs cannot be isolated from matter, we point to their resulting effects, which, without entering the linguistic level, are expressed in terms of intensity fields: modulations, movements, speeds, rhythms, spasms, etc. They belong to a molecular level where they operate in the pre-subjective world of perceptions, effects, drives, and emotions. A-signs have been introduced as intensities that transform the object beyond meaning, beyond fixed or known cognitive procedures. To that end, from an archive of more than 100 spatial objects by contemporary architects and designers, we have created an affective mechanisms index, where each a-sign is connected with the list of effects it triggers, which thoroughly defines it. And vice versa, the same effect can be triggered by different a-signs, allowing the design object to lie in a perpetual state of becoming. To define spatial objects, a-signs are categorized in terms of their aesthetic power to affect and to be affected on the basis of the general categories of form, structure and surface. Thus, the different parts' degrees of contingency are evaluated and measured and, finally, a-signs are introduced as material information that is immanent in the spatial object while at the same time conferring no meaning; they only convey some information without semantic content. Through this index, we are able to analyze and direct the final form of the spatial object while at the same time establishing the mechanism to measure its continuous transformation. Keywords: affective mechanisms index, architectural assemblages, a-signifying signs, cartography, virtual
Procedia PDF Downloads 127
693 Mitigation of Risk Management Activities towards Accountability into Microfinance Environment: Malaysian Case Study
Authors: Nor Azlina A. Rahman, Jamaliah Said, Salwana Hassan
Abstract:
Prompt changes in global business environment, such as passionate competition, managerial/operational, changing governmental regulation and innovation in technology have significant impacts on the organizations. At present, global business environment demands for more proactive institutions on microfinance to provide an opportunity for the business success. Microfinance providers in Malaysia still accelerate its activities of funding by cash and cheque. These institutions are at high risk as the paper-based system is deemed to be slow and prone to human error, as well as requiring a major annual reconciliation process. The global transformation of financial services, growing involvement of technology, innovation and new business activities had progressively made risk management profile to be more subjective and diversified. The persistent, complex and dynamic nature of risk management activities in the institutions arise due to highly automated advancements of technology. This may thus manifest in a variety of ways throughout the financial services sector. This study seeks out to examine current operational risks management being experienced by microfinance providers in Malaysia; investigate the process of current practices on facilitator control factor mechanisms, and explore how the adoption of technology, innovation and use of management accounting practices would affect the risk management process of operation system in microfinance providers in Malaysia. A case study method was employed in this study. The case study also need to find that the vital past role of management accounting will be used for mitigation of risk management activities towards accountability as an information or guideline to microfinance provider. An empirical element obtainable with qualitative method is needed in this study, where multipart and in-depth information are essential to understand the issues of these institution phenomena. This study is expected to propose a theoretical model for implementation of technology, innovation and management accounting practices into the system of operation to improve internal control and subsequently lead to mitigation of risk management activities among microfinance providers to be more successful.Keywords: microfinance, accountability, operational risks, management accounting practices
Procedia PDF Downloads 439
692 Numerical Investigation of Effect of Throat Design on the Performance of a Rectangular Ramjet Intake
Authors: Subrat Partha Sarathi Pattnaik, Rajan N.K.S.
Abstract:
Integrated rocket ramjet engines are highly suitable for long-range missile applications. Designing fixed-geometry intakes for such missiles that can operate efficiently over a range of operating conditions is a highly challenging task. Hence, the present study aims to evaluate the effect of throat design on the performance of a rectangular mixed compression intake for operation in the Mach number range of 1.8 – 2.5. The analysis has been carried out at four different Mach numbers of 1.8, 2, 2.2, and 2.5 and two angles of attack of +5 and +10 degrees. For the throat design, three different throat heights have been considered: one corresponding to a 3-external-shock design and two corresponding to a 2-external-shock design, leading to different internal contraction ratios. The on-design Mach number for the study is M 2.2. To obtain the viscous flow field in the intake, the theoretical designs have been considered for computational fluid dynamic analysis, for which the Favre-averaged Navier-Stokes (FANS) equations with the two-equation SST k-w model have been solved. The analysis shows that for zero angle of attack at on-design and high off-design Mach number operation, the three-ramp design leads to a higher total pressure recovery (TPR) compared to the two-ramp design at both contraction ratios while maintaining the same mass flow ratio (MFR). But at low off-design Mach numbers, the total pressure shows the opposite trend, that is, it is maximum for the two-ramp low-contraction-ratio design due to lower shock loss across the external shocks; similarly, the MFR is higher for the low-contraction-ratio design as the external ramp shocks move closer to the cowl. At both angle-of-attack conditions and over the complete range of Mach numbers, the total pressure recovery and mass flow ratios are highest for the two-ramp low-contraction design due to the lower stagnation pressure loss across the detached bow shock formed at the ramp and the lower mass spillage. Hence, the low-contraction design is found to be suitable for higher off-design performance. Keywords: internal contraction ratio, mass flow ratio, mixed compression intake, performance, supersonic flows
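The role of external compression in total pressure recovery can be illustrated with the short sketch below, which evaluates the classical stagnation-pressure ratio across a normal shock at the freestream Mach number and at an assumed post-ramp Mach number; the post-ramp value of 1.35 is an illustrative assumption, not a result of the study, and oblique-shock losses are neglected.

```python
import numpy as np

def normal_shock_pt_ratio(M1, g=1.4):
    """Stagnation-pressure ratio p02/p01 across a normal shock with upstream Mach M1 > 1."""
    a = ((g + 1.0) * M1**2 / ((g - 1.0) * M1**2 + 2.0)) ** (g / (g - 1.0))
    b = ((g + 1.0) / (2.0 * g * M1**2 - (g - 1.0))) ** (1.0 / (g - 1.0))
    return a * b

# A terminal normal shock after external compression (assumed post-ramp Mach 1.35)
# is far weaker than a single normal shock at the on-design Mach number of 2.2.
print(f"normal shock at M = 2.20: p02/p01 = {normal_shock_pt_ratio(2.2):.3f}")
print(f"normal shock at M = 1.35: p02/p01 = {normal_shock_pt_ratio(1.35):.3f}")
```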
Procedia PDF Downloads 108
691 An Introspective Look into Hotel Employees' Career Satisfaction
Authors: Anastasios Zopiatis, Antonis L. Theocharous
Abstract:
In the midst of a fierce war for talent, the hospitality industry is seeking new and innovative ways to enrich its image as an employer of choice and not a necessity. Historically, the industry’s professions are portrayed as ‘unattractive’ due to their repetitious nature, long and unsocial working schedules, below average remunerations, and the mental and physical demands of the job. Aligning with the industry, hospitality and tourism scholars embarked on a journey to investigate pertinent topics with the aim of enhancing our conceptual understanding of the elements that influence employees at the hospitality world of work. Topics such as job involvement, commitment, job and career satisfaction, and turnover intentions became the focal points in a multitude of relevant empirical and conceptual investigations. Nevertheless, gaps or inconsistencies in existing theories, as a result of both the volatile complexity of the relationships governing human behavior in the hospitality workplace, and the academic community’s unopposed acceptance of theoretical frameworks mainly propounded in the United States and United Kingdom years ago, necessitate our continuous vigilance. Thus, in an effort to enhance and enrich the discourse, we set out to investigate the relationship between intrinsic and extrinsic job satisfaction traits and the individual’s career satisfaction, and subsequent intention to remain in the hospitality industry. Reflecting on existing literature, a quantitative survey was developed and administered, face-to-face, to 650 individuals working as full-time employees in 4- and 5- star hotel establishments in Cyprus, whereas a multivariate statistical analysis method, namely Structural Equation Modeling (SEM), was utilized to determine whether relationships existed between constructs as a means to either accept or reject the hypothesized theory. Findings, of interest to both industry stakeholders and academic scholars, suggest that the individual’s future intention to remain within the industry is primarily associated with extrinsic job traits. Our findings revealed that positive associations exist between extrinsic job traits, and both career satisfaction and future intention. In contrast, when investigating the relationship of intrinsic traits, a positive association was revealed only with career satisfaction. Apparently, the local industry’s environmental factors of seasonality, excessive turnover, overdependence on seasonal, and part-time migrant workers, prohibit industry stakeholders in effectively investing the time and resources in the development and professional growth of their employees. Consequently intrinsic job satisfaction factors such as advancement, growth, and achievement, take backstage to the more materialistic extrinsic factors. Findings from the subsequent mediation analysis support the notion that intrinsic traits can positively influence future intentions indirectly only through career satisfaction, whereas extrinsic traits can positively impact both career satisfaction and future intention both directly and indirectly.Keywords: career satisfaction, Cyprus, hotel employees, structural equation modeling, SEM
Procedia PDF Downloads 287
690 The Effects of Extreme Precipitation Events on Ecosystem Services
Authors: Szu-Hua Wang, Yi-Wen Chen
Abstract:
Urban ecosystems are complex coupled human-environment systems. They contain abundant natural resources for producing natural assets and attract urban assets to consume natural resources for urban development. Urban ecosystems provide several ecosystem services, including provisioning services, regulating services, cultural services, and supporting services. Rapid global climate change makes urban ecosystems and their ecosystem services encountering various natural disasters. Lots of natural disasters have occurred around the world under the constant changes in the frequency and intensity of extreme weather events in the past two decades. In Taiwan, hydrological disasters have been paid more attention due to the potential high sensitivity of Taiwan’s cities to climate change, and it impacts. However, climate change not only causes extreme weather events directly but also affects the interactions among human, ecosystem services and their dynamic feedback processes indirectly. Therefore, this study adopts a systematic method, solar energy synthesis, based on the concept of the eco-energy analysis. The Taipei area, the most densely populated area in Taiwan, is selected as the study area. The changes of ecosystem services between 2015 and Typhoon Soudelor have been compared in order to investigate the impacts of extreme precipitation events on ecosystem services. The results show that the forest areas are the largest contributions of energy to ecosystem services in the Taipei area generally. Different soil textures of different subsystem have various upper limits of water contents or substances. The major contribution of ecosystem services of the study area is natural hazard regulation provided by the surface water resources areas. During the period of Typhoon Soudelor, the freshwater supply in the forest areas had become the main contribution. Erosion control services were the main ecosystem service affected by Typhoon Soudelor. The second and third main ecosystem services were hydrologic regulation and food supply. Due to the interactions among ecosystem services, fresh water supply, water purification, and waste treatment had been affected severely.Keywords: ecosystem, extreme precipitation events, ecosystem services, solar energy synthesis
Procedia PDF Downloads 148
689 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver
Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin
Abstract:
National Space Organization (NSPO) has completed in 2014 the development of a space-borne GPS receiver, including design, manufacture, comprehensive functional test, environmental qualification test and so on. The main performance of this receiver include 8-meter positioning accuracy, 0.05 m/sec speed-accuracy, the longest 90 seconds of cold start time, and up to 15g high dynamic scenario. The receiver will be integrated in the autonomous FORMOSAT-7 NSPO-Built satellite scheduled to be launched in 2019 to execute pre-defined scientific missions. The flight model of this receiver manufactured in early 2015 will pass comprehensive functional tests and environmental acceptance tests, etc., which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing are executed by a digital signal processor (DSP), currently only 50% of its throughput being used. In response to the booming global navigation satellite systems, NSPO will gradually expand this receiver to become a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as the reflectometry receiver of a global navigation satellite system. The fundamental purpose of this extension study is to port some software algorithms such as signal acquisition and correlation, reused code and large amount of computation load to the FPGA whose processor is responsible for operational control, navigation solution, and orbit propagation and so on. Due to the development and evolution of the FPGA is pretty fast, the new system architecture upgraded via an FPGA should be able to achieve the goal of being a multi-mode, multi-band high-precision navigation receiver, or scientific receiver. Finally, the results of tests show that the new system architecture not only retains the original overall performance, but also sets aside more resources available for future expansion possibility. This paper will explain the detailed DSP/FPGA architecture, development, test results, and the goals of next development stage of this receiver.Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band
Procedia PDF Downloads 369
688 A System for Preventing Inadvertent Exposition of Staff Present outside the Operating Theater: Description and Clinical Test
Authors: Aya Al Masri, Kamel Guerchouche, Youssef Laynaoui, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: Mobile C-arms move throughout operating rooms of the operating theater. Being designed to move between rooms, they are not equipped with relays to retrieve the exposition information and export it outside the room. Therefore, no light signaling is available outside the room to warn the X-ray emission for staff. Inadvertent exposition of staff outside the operating theater is a real problem for radiation protection. The French standard NFC 15-160 require that: (1) access to any room containing an X-ray emitting device must be controlled by a light signage so that it cannot be inadvertently crossed, and (2) setting up an emergency button to stop the X-ray emission. This study presents a system that we developed to meet these requirements and the results of its clinical test. Materials and methods: The system is composed of two communicating boxes: o The "DetectBox" is to be installed inside the operating theater. It identifies the various operation states of the C-arm by analyzing its power supply signal. The DetectBox communicates (in wireless mode) with the second box (AlertBox). o The "AlertBox" can operate in socket or battery mode and is to be installed outside the operating theater. It detects and reports the state of the C-arm by emitting a real time light signal. This latter can have three different colors: red when the C-arm is emitting X-rays, orange when it is powered on but does not emit X-rays, and green when it is powered off. The two boxes communicate on a radiofrequency link exclusively carried out in the ‘Industrial, Scientific and Medical (ISM)’ frequency bands and allows the coexistence of several on-site warning systems without communication conflicts (interference). Taking into account the complexity of performing electrical works in the operating theater (for reasons of hygiene and continuity of medical care), this system (having a size <10 cm²) works in complete safety without any intrusion in the mobile C-arm and does not require specific electrical installation work. The system is equipped with emergency button that stops X-ray emission. The system has been clinically tested. Results: The clinical test of the system shows that: it detects X-rays having both high and low energy (50 – 150 kVp), high and low photon flow (0.5 – 200 mA: even when emitted for a very short time (<1 ms)), Probability of false detection < 10-5, it operates under all acquisition modes (continuous, pulsed, fluoroscopy mode, image mode, subtraction and movie mode), it is compatible with all C-arm models and brands. We have also tested the communication between the two boxes (DetectBox and AlertBox) in several conditions: (1) Unleaded room, (2) leaded room, and (3) rooms with particular configuration (sas, great distances, concrete walls, 3 mm of lead). The result of these last tests was positive. Conclusion: This system is a reliable tool to alert the staff present outside the operating room for X-ray emission and insure their radiation protection.Keywords: Clinical test, Inadvertent staff exposition, Light signage, Operating theater
Procedia PDF Downloads 126
687 In Silico Screening, Identification and Validation of Cryptosporidium hominis Hypothetical Protein and Virtual Screening of Inhibitors as Therapeutics
Authors: Arpit Kumar Shrivastava, Subrat Kumar, Rajani Kanta Mohapatra, Priyadarshi Soumyaranjan Sahu
Abstract:
Computational approaches to predicting the structure, function, and other biological characteristics of proteins are becoming more common in drug discovery compared to traditional methods. Cryptosporidiosis is a major zoonotic diarrheal disease, particularly in children, which is caused primarily by Cryptosporidium hominis and Cryptosporidium parvum. Currently, there are no vaccines for cryptosporidiosis, and the recommended drugs are not effective. With the availability of the complete genome sequence of C. hominis, new targets have been recognized for the development of more effective drugs and/or vaccines. We identified a unique hypothetical epitopic protein in the C. hominis genome through BLASTP analysis. A 3D model of the hypothetical protein was generated using the I-TASSER server via a threading methodology. The quality of the model was validated through a Ramachandran plot using the PROCHECK server. Functional annotation of the hypothetical protein through the DALI server revealed structural similarity with human transportin 3. Phylogenetic analysis also showed that the C. hominis hypothetical protein (CUV04613) was closely related to the human transportin 3 protein. The 3D protein model was further subjected to a virtual screening study with inhibitors from the ZINC database using the DOCK Blaster software. The docking study identified N-(3-chlorobenzyl)ethane-1,2-diamine as the best inhibitor in terms of docking score. Docking analysis elucidated that Leu 525, Ile 526, Glu 528, and Glu 529 are critical residues for ligand–receptor interactions. A molecular dynamics simulation was performed to assess the reliability of the binding pose of the inhibitor–protein complex using the GROMACS software over 10 ns. Trajectories were analyzed at 2.5 ns intervals, in which hydrogen bonds with Leu 525 and Gly 530 were consistently present. Furthermore, antigenic determinants of the protein were predicted with the help of the DNASTAR software. Our findings show great potential to provide insights into the development of new drugs or vaccines for the control and prevention of cryptosporidiosis in humans and animals.Keywords: cryptosporidium hominis, hypothetical protein, molecular docking, molecular dynamics simulation
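As a hedged illustration of the post-docking step described above, the sketch below ranks docked ligands by score and keeps those contacting the residues the abstract reports as critical (Leu 525, Ile 526, Glu 528, Glu 529). The ligand identifiers other than the reported best hit, the scores, and the contact lists are invented placeholders; real values would come from the DOCK Blaster output.

# Hypothetical post-processing of a virtual screen: rank docked ligands by score
# and keep those that contact the residues reported as critical in the abstract.
# Scores and contact sets below are invented placeholders, not study data.
CRITICAL_RESIDUES = {"LEU525", "ILE526", "GLU528", "GLU529"}

docked = [
    {"ligand": "ZINC00000001", "score": -42.1, "contacts": {"LEU525", "GLU529"}},
    {"ligand": "ZINC00000002", "score": -38.7, "contacts": {"ALA100"}},
    {"ligand": "N-(3-chlorobenzyl)ethane-1,2-diamine", "score": -45.3,
     "contacts": {"LEU525", "ILE526", "GLU528"}},
]

def rank_hits(results, min_critical_contacts=2):
    # Keep ligands touching at least min_critical_contacts critical residues,
    # then sort by docking score (more negative = better).
    hits = [r for r in results
            if len(r["contacts"] & CRITICAL_RESIDUES) >= min_critical_contacts]
    return sorted(hits, key=lambda r: r["score"])

for hit in rank_hits(docked):
    print(hit["ligand"], hit["score"])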
Procedia PDF Downloads 365
686 The Role of Emotional Intelligence in the Manager's Psychophysiological Activity during a Performance-Review Discussion
Authors: Mikko Salminen, Niklas Ravaja
Abstract:
Emotional intelligence (EI) consists of skills for monitoring one's own emotions and the emotions of others, skills for discriminating between different emotions, and skills for using this information in thinking and action. EI enhances, for example, work outcomes and organizational climate. We suggest that the role and manifestations of EI should also be studied in real leadership situations, especially during emotional, social interaction. Leadership is essentially a process of influencing others to reach a certain goal. This influencing happens through managerial processes and computer-mediated communication (e.g., e-mail), but also face-to-face, where facial expressions play a significant role in conveying emotional information. Persons with high EI are typically perceived more positively, and they have better social skills. We hypothesize that, during social interaction, high EI enhances the ability to detect others' emotional states and to control one's own emotional expressions. We suggest that emotionally intelligent leaders experience less stress during social leadership situations, since they have better skills for dealing with the related emotional work. Thus, high-EI leaders would be better able to enjoy these situations, but also more efficient in choosing appropriate expressions for building constructive dialogue. We further suggest that emotionally intelligent leaders show more positive emotional expressions than low-EI leaders. To study these hypotheses, we observed performance-review discussions of 40 leaders (24 female) with 78 (45 female) of their followers. Each leader held a discussion with two followers. Psychophysiological methods were chosen because they provide objective and continuous data over the whole duration of the discussions. We recorded sweating of the hands (electrodermal activity) with electrodes placed on the fingers of the non-dominant hand to assess the leaders' stress-related physiological arousal. In addition, facial electromyography was recorded from the cheek (zygomaticus major, activated during, e.g., smiling) and periocular (orbicularis oculi, activated during genuine smiling) muscles using electrode pairs placed on the left side of the face. Leaders' trait EI was measured with a 360-degree questionnaire completed by each leader's followers, peers, and managers, and by the leaders themselves. High-EI leaders showed less sweating of the hands (p = .007) than low-EI leaders, suggesting that they experienced less physiological stress during the discussions. Also, high scores on the factor 'Using of emotions' were related to more facial muscle activation indicating positive emotional expressions (cheek muscle: p = .048; periocular muscle: p = .076, marginally significant). The results imply that emotionally intelligent managers remain positively relaxed during social leadership situations such as a performance-review discussion. The current study also highlights the importance of EI in face-to-face social interaction, given the central role facial expressions have in interaction situations. The study also offers new insight into the biological basis of trait EI. It is suggested that identifying, forming, and intelligently using facial expressions are skills that could be trained during leadership development courses.Keywords: emotional intelligence, leadership, performance review discussion, psychophysiology, social interaction
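A minimal, hedged sketch of the kind of analysis reported above: averaging each leader's electrodermal signal over a discussion and correlating it with trait-EI scores. The synthetic arrays stand in for the actual recordings and 360-degree ratings; the effect size, scales, and sample are assumptions, not the study's data.

# Illustrative analysis only: correlate mean skin-conductance level per leader
# with trait-EI scores, mirroring the reported negative EI-arousal relationship.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_leaders = 40
ei_score = rng.normal(3.5, 0.5, n_leaders)                       # trait EI (assumed 1-5 scale)
# Simulated mean electrodermal activity that decreases with EI, plus noise (microsiemens)
mean_eda = 8.0 - 1.2 * ei_score + rng.normal(0, 0.6, n_leaders)

r, p = stats.pearsonr(ei_score, mean_eda)
print(f"correlation between EI and mean EDA: r = {r:.2f}, p = {p:.3f}")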
Procedia PDF Downloads 245
685 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence
Authors: Sogand Barghi
Abstract:
The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting
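As a hedged sketch of the anomaly-based fraud monitoring described above, the example below trains an Isolation Forest on historical postings and flags transactions whose amount and posting time deviate from the bulk. The feature set, contamination rate, and data are assumptions for illustration, not a production control.

# Minimal anomaly-detection sketch for transaction monitoring (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Features per transaction: [amount in EUR, hour of day posted]
normal = np.column_stack([rng.lognormal(5, 0.4, 1000), rng.normal(14, 2, 1000)])
suspicious = np.array([[250000.0, 3.0], [180000.0, 2.5]])   # large, late-night postings
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(transactions)          # -1 = anomalous, 1 = normal
print("flagged indices:", np.where(flags == -1)[0])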
Procedia PDF Downloads 71
684 TRAC: A Software Based New Track Circuit for Traffic Regulation
Authors: Jérôme de Reffye, Marc Antoni
Abstract:
Following the development of the ERTMS system, we think it is worthwhile to develop another software-based track circuit system that would fit secondary railway lines, with an easy-to-implement design and low sensitivity to rail-wheel impedance variations. We called this track circuit 'Track Railway by Automatic Circuits' (TRAC). To be implemented internationally, this system must not contain any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent of the axle counters used in Germany ('compteur d'essieux' in French). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train-detection system with space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal, and the set of frequencies assigned to the track sections forms a set of orthogonal functions in a Hilbert space. Thus, the failure probability of track-section separation can be precisely calculated on the basis of the signal-to-noise ratio (SNR), which is a function of the level of traction current conducted by the rails. This is why we developed a very powerful algorithm to reject noise and jamming and obtain an SNR compatible with the precision required for the track circuit and with SIL 4. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) train localization in space is precisely defined by a calibration system; the operation bypasses the GSM-R radio system of the ERTMS, the track circuit is naturally protected against radio-type jammers, and after the calibration operation the track circuit is autonomous; ii) a mathematical topology adapted to train localization, following the train through linear time filtering of the received signal; track sections are defined numerically and can be modified with a software update. The system was numerically simulated, and the results exceeded our expectations: we achieved a precision of one meter. Sensitivity analyses of the rail-ground and rail-wheel impedances gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain the results. This track circuit already corresponds to Level 3 of the ERTMS system and will be much cheaper to implement and to operate. Traffic regulation is based on variable-length track sections: as traffic grows, the maximum speed is reduced and the track-section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling
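To make the frequency-separation idea concrete, here is a hedged numerical sketch: tones assigned to track sections are spaced so that they complete an integer number of cycles over the detector's integration window (and are therefore orthogonal over that window), and a simple correlation detector identifies which section's tone is present in a noisy rail signal. The frequencies, window length, and noise level are assumed values, not parameters from the TRAC design.

# Sketch of frequency-separated track-section tones and a correlation detector.
import numpy as np

fs = 10_000.0            # sampling rate (Hz), assumed
T = 0.1                  # integration window (s) -> tones spaced by 1/T = 10 Hz are orthogonal
t = np.arange(int(fs * T)) / fs
section_freqs = [100.0, 110.0, 120.0, 130.0]     # one tone per track section (assumed)

rng = np.random.default_rng(0)
present = 2              # the tone of section index 2 is present on the rails
received = np.sin(2 * np.pi * section_freqs[present] * t) + 1.0 * rng.standard_normal(t.size)

# Correlation (matched-filter) detector: project the window onto each reference tone
scores = [abs(np.dot(received, np.sin(2 * np.pi * f * t))) for f in section_freqs]
print("correlation scores:", [f"{s:.1f}" for s in scores])
print("detected section:", int(np.argmax(scores)))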
Procedia PDF Downloads 331
683 Reduction Shrinkage of Concrete without Use Reinforcement
Authors: Martin Tazky, Rudolf Hela, Lucia Osuska, Petr Novosad
Abstract:
Concrete's volumetric changes are a natural process caused by the hydration of silicate minerals. These changes can lead to cracking and subsequent destruction of the cementitious material's matrix. In most cases, cracks can be regarded as a negative effect of hydration, and in all cases, they accelerate degradation processes. Preventing the formation of these cracks is therefore the main goal. One possibility for eliminating this natural shrinkage process is the use of different types of dispersed reinforcement; steel and polymer reinforcement are typically used for this purpose. Besides the reinforcement ordinarily used to counteract shrinkage, it is also possible to address the problem from the outset through the composition of the concrete mix itself. Many secondary raw materials help reduce the heat of hydration and the shrinkage of concrete during curing. Recent research also shows that shrinkage can be reduced by the controlled formation of hydration products whose morphology can act like traditionally used dispersed reinforcement. This contribution deals with the possibility of controlled formation of mono- and trisulphate, which are usually considered degradation minerals. The controlled formation of mono- and trisulphate in a cementitious composite can be classified as a self-healing ability: the growth of these crystals acts directly against the shrinkage tension, which reduces the risk of crack development. Controlled formation means that the crystals start to grow while the material (e.g., concrete) is still fresh but stop before they can cause any damage to the hardened material. Waste materials with a suitable chemical composition are very attractive precursors because of their low cost and the added value of reducing environmental pollution. In this experiment, the possibility of using fly ash from fluidized bed combustion as an additive promoting mono- and trisulphate formation was investigated. The experiment was conducted on cement paste and concrete, and the specimens were subjected to a thorough analysis of physico-mechanical properties as well as microstructure from the moment of mixing up to 180 days. The hydration process and shrinkage of the cement composites were monitored. In mixtures containing the fluidized bed combustion fly ash admixture, possible defects were identified by electron microscopy and by the dynamic modulus of elasticity. The experimental results show that concrete shrinkage can be reduced without using traditional dispersed reinforcement.Keywords: shrinkage, monosulphates, trisulphates, self-healing, fluidized fly ash
Procedia PDF Downloads 186
682 Investigation on Development of Pv and Wind Power with Hydro Pumped Storage to Increase Renewable Energy Penetration: A Parallel Analysis of Taiwan and Greece
Authors: Robel Habtemariam
Abstract:
Globally, wind energy and photovoltaic (PV) solar energy are among the leading renewable energy sources (RES) in terms of installed capacity. In order to increase the contribution of RES to the power supply system, large-scale energy integration is required, mainly from wind energy and PV. In this paper, the electrical power supply systems of Taiwan and Greece are investigated in order to integrate high levels of wind and photovoltaic (PV) power and thus increase the penetration of renewable energy resources. Currently, both countries depend heavily on fossil fuels to meet demand and generate adequate electricity. Therefore, this study examines the two countries' power supply systems by developing a methodology that includes the major power units. To carry out the analysis, an approach for the simulation of power systems is formulated and applied. The simulation is based on a non-dynamic analysis of the electrical system and calculates the energy contribution of the different types of power units, namely wind, PV, non-flexible, and flexible units. The calculation is done for three different scenarios (2020, 2030, and 2050), where the first two scenarios are based on national targets and the 2050 scenario reflects ambitious global targets. By 2030 in Taiwan, the contributions of the power units are evaluated as 4.3% (wind), 3.7% (PV), 65.2% (non-flexible), 25.3% (flexible), and 1.5% from hydropower plants. In Greece, a much higher renewable energy contribution is observed for the same scenario, with 21.7% (wind), 14.3% (PV), 38.7% (non-flexible), 14.9% (flexible), and 10.3% (hydro). Moreover, the study examines the ability of the power systems to deal with the variable nature of wind and PV generation. For this reason, the use of wind power combined with pumped storage systems (WPS) is also investigated, enabling the system to exploit curtailed wind energy and surplus PV, thus increasing the installed wind and PV capacity and replacing peak supply from conventional power units. Results show that the feasibility of pumped storage can be justified in the high RES-integration scenario (i.e., the 2050 scenario), especially in the case of Greece.Keywords: large scale energy integration, photovoltaics solar energy, pumped storage systems, renewable energy sources
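A toy, hedged version of the non-dynamic energy balance described above: for each hour, renewable output first serves the load, the surplus charges a pumped-storage reservoir with a lumped charging efficiency, the store is discharged against deficits, and flexible (conventional) units cover the remainder. All capacities, profiles, and the 75% efficiency are assumed values, not the study's Taiwan or Greece data.

# Hourly energy balance with pumped storage (illustrative values only).
def dispatch(load, wind, pv, storage_capacity=50.0, efficiency=0.75):
    stored, curtailed, flexible = 0.0, 0.0, 0.0
    for hour, demand in enumerate(load):
        res = wind[hour] + pv[hour]
        surplus = res - demand
        if surplus >= 0:
            # Charge the pumped storage with the surplus; curtail what does not fit
            room = (storage_capacity - stored) / efficiency
            stored += min(surplus, room) * efficiency
            curtailed += max(surplus - room, 0.0)
        else:
            deficit = -surplus
            from_store = min(stored, deficit)
            stored -= from_store
            flexible += deficit - from_store   # met by conventional flexible units
    return curtailed, flexible, stored

load = [30, 28, 27, 35, 45, 50, 48, 40]        # MWh per hour (toy profile)
wind = [20, 25, 30, 33, 20, 10, 8, 15]
pv   = [0, 0, 5, 15, 25, 20, 5, 0]
print(dispatch(load, wind, pv))                # (curtailed, flexible generation, stored)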
Procedia PDF Downloads 277
681 Co-Synthesis of Exopolysaccharides and Polyhydroxyalkanoates Using Waste Streams: Solid-State Fermentation as an Alternative Approach
Authors: Laura Mejias, Sandra Monteagudo, Oscar Martinez-Avila, Sergio Ponsa
Abstract:
Bioplastics are gaining attention as potential substitutes for conventional fossil-derived plastics and as new components of specialized applications in different industries. In addition, they constitute a sustainable alternative, since they are biodegradable and can be obtained from renewable sources. Agro-industrial wastes thus appear as potential substrates for bioplastics production using microorganisms, considering that they are a suitable source of nutrients, low-cost, and available worldwide. This approach therefore contributes to the biorefinery and circular economy paradigm. The present study assesses solid-state fermentation (SSF) technology for the co-synthesis of exopolysaccharides (EPS) and polyhydroxyalkanoates (PHA), two attractive biodegradable bioplastics, using brewer's spent grain (BSG), a leftover of the brewing industry. After an initial screening of diverse PHA-producing bacteria, Burkholderia cepacia was found to present the highest EPS and PHA production potential via SSF of BSG. B. cepacia was therefore used to identify the most relevant aspects affecting EPS+PHA co-synthesis at lab scale (100 g). Since these are growth-dependent processes, they were monitored online through oxygen consumption using a dynamic respirometric system, as well as by quantifying biomass production (gravimetrically) and the obtained products (ethanol precipitation for EPS and solid-liquid extraction coupled with GC-FID for PHA). Results showed that B. cepacia grew to up to 81 mg per gram of dry BSG (gDM) at 30°C after 96 h, up to 618 times the levels found for the other tested strains. The crude EPS production was 53 mg g⁻¹DM (2% carbohydrates), and purity reached 98% after a dialysis purification step. Simultaneously, B. cepacia accumulated up to 36% (dry basis) of the produced biomass as PHA, mainly composed of polyhydroxybutyrate (P3HB). The maximum PHA production was reached after 48 h with 12.1 mg g⁻¹DM, threefold the levels previously reported using SSF. Moisture content and aeration strategy were the most significant variables affecting the simultaneous production. The results show the potential of co-synthesis via SSF as an attractive alternative to enhance the feasibility of obtaining these bioplastics in residue-based systems.Keywords: bioplastics, brewer’s spent grain, circular economy, solid-state fermentation, waste to product
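As a hedged back-of-the-envelope use of the yields quoted above, the following lines scale the reported 53 mg g⁻¹DM (EPS) and 12.1 mg g⁻¹DM (PHA) to a hypothetical 100 kg dry-matter batch of BSG; the batch size is an assumption, and the calculation ignores downstream recovery losses.

# Scale the per-gram yields quoted in the abstract to a hypothetical batch size.
EPS_YIELD_MG_PER_GDM = 53.0    # crude EPS, from the abstract
PHA_YIELD_MG_PER_GDM = 12.1    # PHA, from the abstract

def product_mass_kg(dry_bsg_kg, yield_mg_per_gdm):
    grams_dm = dry_bsg_kg * 1000.0
    return grams_dm * yield_mg_per_gdm / 1e6   # mg -> kg

batch = 100.0   # kg dry BSG, assumed batch size
print(f"EPS: {product_mass_kg(batch, EPS_YIELD_MG_PER_GDM):.1f} kg")   # 5.3 kg
print(f"PHA: {product_mass_kg(batch, PHA_YIELD_MG_PER_GDM):.2f} kg")   # 1.21 kg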
Procedia PDF Downloads 144
680 Simon Says: What Should I Study?
Authors: Fonteyne Lot
Abstract:
SIMON (Study capacities and Interest Monitor) is a freely accessible online self-assessment tool that allows secondary-education pupils to evaluate their interests and capacities in order to choose a post-secondary major that maximally suits their potential. The tool consists of two broad domains that correspond to two general questions pupils ask: 'What study fields interest me?' and 'Am I capable of succeeding in this field of study?'. The first question is addressed by a RIASEC-type interest inventory that links personal interests to post-secondary majors. Pupils are provided with a personal profile and an overview of majors with their degree of congruence. The output is dynamic: respondents can manipulate their scores and compare their results to the profiles of all fields of study. In this way, they are stimulated to explore the broad range of majors. To answer whether pupils are capable of succeeding in a preferred major, a battery of tests is provided. This battery comprises a range of factors that are predictive of academic success. Traditional predictors such as (educational) background and cognitive variables (mathematical and verbal skills) are included. Moreover, non-cognitive predictors of academic success (such as motivation, test anxiety, academic self-efficacy, and study skills) are assessed. These non-cognitive factors are generally not included in admission decisions, although research shows they are incrementally predictive of success and are less discriminating. The tests inform pupils about potential causes of success and failure. More importantly, pupils receive their personal chances of success per major. These differential probabilities are validated through the underlying research on students' academic success. For example, the research has shown that we can identify 22% of the failing students in psychology and educational sciences; in this group, our prediction is 95% accurate. SIMON leads more students to a suitable major, which in turn improves student success and retention. Apart from these benefits, the instrument grants insight into risk factors of academic failure. It also supports and fosters the development of evidence-based remedial interventions and therefore allows a more efficient use of resources.Keywords: academic success, online self-assessment, student retention, vocational choice
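A hedged sketch of the kind of model that could underlie the per-major success probabilities described above: a logistic regression combining cognitive and non-cognitive predictors into a chance of passing. The synthetic data, the chosen predictors, and the coefficients are assumptions for illustration, not the SIMON validation data or its actual model.

# Illustrative success-probability model (synthetic data, assumed predictors).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 500
math_skill = rng.normal(0, 1, n)
self_efficacy = rng.normal(0, 1, n)
test_anxiety = rng.normal(0, 1, n)
# Simulated outcome: success depends positively on skill and self-efficacy, negatively on anxiety
logit = 0.2 + 1.1 * math_skill + 0.6 * self_efficacy - 0.5 * test_anxiety
passed = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([math_skill, self_efficacy, test_anxiety])
model = LogisticRegression().fit(X, passed)

new_pupil = np.array([[0.3, -0.2, 1.0]])   # hypothetical standardized test results
print("estimated chance of success:", model.predict_proba(new_pupil)[0, 1].round(2))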
Procedia PDF Downloads 404
679 The Need for a One Health and Welfare Approach to Animal Welfare in Industrial Animal Farming
Authors: Clinton Adas
Abstract:
Antibiotic resistance has been identified by the World Health Organisation as a real possibility for the 21st century. While many factors contribute to this, one of the more significant is industrial animal farming and its effect on the food chain and the environment. Livestock consume a significant portion of the antibiotics sold globally, and these are used to make animals grow faster for profit, to prevent illness caused by inhumane living conditions, and to treat disease when it breaks out. Many of these antibiotics provide little benefit to the animals, and most are the same as those used in humans - including those deemed critical to human health that should therefore be used sparingly. Antibiotic resistance contributes to growing numbers of illnesses and deaths in humans, and the excess use of these medications results in waste that enters the environment and harms many ecological processes. This combination of antimicrobial resistance and environmental degradation further harms the economic well-being and prospects of many. Using an interdisciplinary approach spanning medical, environmental, economic, and legal studies, the paper evaluates the dynamic between animal welfare and commerce and argues that, while animal welfare is not of great concern to many, this approach is ultimately harming human welfare too. It is proposed, however, that both could be addressed under a One Health and Welfare approach, as we cannot continue to ignore the linkages between animals, the environment, and people. Industrial animal farming is therefore evaluated through three aspects: the environmental impact, measured by pollution that causes environmental degradation; the human impact, measured by the rise of illnesses from pollution and antibiotic resistance; and the economic impact, measured through costs to the health-care system and the financial implications of industrial farming for the economic well-being of many. These three aspects are considered in light of the Sustainable Development Goals, which provide additional tangible metrics to evidence the negative impacts. While the research addresses the welfare of farmed animals, these principles could potentially be extrapolated to other contexts, including wildlife and habitat protection. It must be noted that while the question of animal rights in industrial animal farming is acknowledged and of importance, it is a separate matter that is not addressed here.Keywords: animal and human welfare, industrial animal farming, one health and welfare, sustainable development goals
Procedia PDF Downloads 84
678 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning, particularly indoors, in dense urban and suburban areas enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the many surrounding objects; (2) reflection within a building depends strongly on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals often cannot reach indoor environments, since GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Such applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal-fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduces the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improves positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
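The S-DCGAN itself is beyond a short example, so the hedged sketch below only illustrates the downstream fingerprinting step: matching a measured received-signal-strength vector against a (possibly GAN-augmented) radio map with distance-weighted k-nearest neighbours. The access-point layout, RSS values, and k are invented for the illustration and are not taken from the paper.

# Minimal weighted k-NN fingerprint matcher over a radio map (synthetic values).
import numpy as np

# Radio map: reference-point coordinates (x, y) and their RSS fingerprints over 4 APs (dBm)
positions = np.array([[0, 0], [0, 5], [5, 0], [5, 5], [2.5, 2.5]], dtype=float)
fingerprints = np.array([
    [-40, -70, -72, -85],
    [-68, -42, -83, -70],
    [-70, -84, -41, -69],
    [-84, -71, -68, -40],
    [-60, -61, -62, -60],
], dtype=float)

def locate(rss_query, k=3):
    """Estimate position as the distance-weighted mean of the k closest fingerprints."""
    d = np.linalg.norm(fingerprints - rss_query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

measured = np.array([-55, -63, -65, -66], dtype=float)   # query fingerprint
print("estimated position:", locate(measured).round(2))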
Procedia PDF Downloads 47