Search results for: continuous passive motion machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6823

373 Enhancing Project Management Performance in Prefabricated Building Construction under Uncertainty: A Comprehensive Approach

Authors: Niyongabo Elyse

Abstract:

Prefabricated building construction is a pioneering approach that combines design, production, and assembly to attain energy efficiency, environmental sustainability, and economic feasibility. Despite continuous development of the industry in China, the low technical maturity of standardized design, factory production, and construction assembly introduces uncertainties affecting prefabricated component production and on-site assembly processes. This research focuses on enhancing project management performance under uncertainty to help enterprises navigate these challenges and optimize project resources. The study introduces a perspective on how uncertain factors influence the implementation of prefabricated building construction projects. It proposes a theoretical model considering project process management ability, adaptability to uncertain environments, and collaboration ability of project participants. The impact of uncertain factors is demonstrated through case studies and quantitative analysis, revealing constraints on implementation time, cost, quality, and safety. To address uncertainties in prefabricated component production scheduling, a fuzzy model is presented, expressing processing times as interval values. The model utilizes a cooperative co-evolutionary algorithm (CCEA) to optimize scheduling, demonstrated through a real case study showcasing reduced project duration and minimized effects of processing time disturbances. Additionally, the research addresses on-site assembly construction scheduling, considering the relationship between task processing times and assigned resources. A multi-objective model with fuzzy activity durations is proposed, employing a hybrid cooperative co-evolutionary algorithm (HCCEA) to optimize project scheduling. Results from real case studies indicate improved project performance in terms of duration, cost, and resilience to processing time delays and resource changes. The study also introduces a multistage dynamic process control model, utilizing IoT technology for real-time monitoring during component production and construction assembly. This approach dynamically adjusts schedules when constraints arise, leading to enhanced project management performance, as demonstrated in a real prefabricated housing project. Key contributions include a fuzzy prefabricated component production scheduling model, a multi-objective multi-mode resource-constrained construction project scheduling model with fuzzy activity durations, a multistage dynamic process control model, and a cooperative co-evolutionary algorithm. The integrated mathematical model addresses the complexity of prefabricated building construction project management, providing a theoretical foundation for practical decision-making in the field.
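
The abstract names its optimization technique without implementation detail. Purely as an illustrative sketch of the two core ideas, interval-valued (fuzzy) processing times and a cooperative co-evolutionary loop over two coupled scheduling decisions, the following Python fragment uses invented jobs, durations, and parameters; nothing here is taken from the study itself.

```python
import random

# Triangular fuzzy processing times (a, m, b) per job, standing in for the
# interval-valued durations; defuzzified with the common (a + 2m + b) / 4 rule.
PROD = [(2, 3, 5), (1, 2, 4), (4, 5, 7), (3, 4, 6)]   # component production stage
ASSM = [(1, 2, 3), (2, 3, 5), (2, 4, 6), (1, 3, 4)]   # on-site assembly stage

def defuzz(t):
    a, m, b = t
    return (a + 2 * m + b) / 4.0

def makespan(prod_order, assm_order):
    # Two coupled stages: a component can be assembled only after it is produced.
    finish, t = {}, 0.0
    for j in prod_order:
        t += defuzz(PROD[j])
        finish[j] = t
    t = 0.0
    for j in assm_order:
        t = max(t, finish[j]) + defuzz(ASSM[j])
    return t

def mutate(perm):
    p = perm[:]
    i, k = random.sample(range(len(p)), 2)
    p[i], p[k] = p[k], p[i]
    return p

def ccea(generations=200, pop_size=10):
    n = len(PROD)
    # One subpopulation per decision: production order and assembly order.
    pops = [[random.sample(range(n), n) for _ in range(pop_size)] for _ in range(2)]
    best = [pops[0][0], pops[1][0]]
    for _ in range(generations):
        for s in (0, 1):  # evolve each subpopulation against the best collaborator
            partner = best[1 - s]
            candidates = pops[s] + [mutate(ind) for ind in pops[s]]
            def fit(ind):
                return makespan(ind, partner) if s == 0 else makespan(partner, ind)
            candidates.sort(key=fit)
            pops[s] = candidates[:pop_size]
            best[s] = pops[s][0]
    return best, makespan(*best)

if __name__ == "__main__":
    random.seed(1)
    (prod, assm), ms = ccea()
    print("production order:", prod, "| assembly order:", assm, "| makespan:", round(ms, 2))
```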

Keywords: prefabricated construction, project management performance, uncertainty, fuzzy scheduling

Procedia PDF Downloads 48
372 Investigating the Neural Heterogeneity of Developmental Dyscalculia

Authors: Fengjuan Wang, Azilawati Jamaludin

Abstract:

Developmental Dyscalculia (DD) is defined as a specific learning difficulty involving persistent challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder involving not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obfuscate critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD, scoring at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III), with both subtest scores at 90 or below. The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network under the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions between regions) to reveal the architectural organization of the nervous network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the 45 TD children's average data. Results showed that three of the nine DD children displayed significant deviation from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. In sum, the present study was able to distill differences in architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is notable that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current 'brain norm' established from the 45 children is tentative, the results from this study provide insights not only for future work on a 'developmental brain norm' with reliable brain indicators but also for the viability of single-case methodology, which could be used to detect differential brain indicators of DD children for early detection and intervention.
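
For readers unfamiliar with the two methods combined in this study, the sketch below illustrates their logic: nodal efficiency computed on a toy resting-state graph (via networkx) and a single-case comparison against a control sample using the classical Crawford-Howell t-test (the study cites Crawford et al.'s 2010 refinement; the earlier form shown here conveys the same idea). All data values are hypothetical.

```python
import math
import networkx as nx
from scipy import stats

def crawford_howell(case_score, control_scores):
    """Single-case t-test: compare one case to a small normative sample (df = n - 1)."""
    n = len(control_scores)
    mean = sum(control_scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in control_scores) / (n - 1))
    t = (case_score - mean) / (sd * math.sqrt(1 + 1 / n))
    return t, 2 * stats.t.sf(abs(t), df=n - 1)  # two-tailed p

def nodal_efficiency(G, v):
    """Mean inverse shortest-path length from node v to every other node."""
    dist = nx.single_source_shortest_path_length(G, v)
    return sum(1 / d for node, d in dist.items() if node != v) / (len(G) - 1)

# Hypothetical resting-state graph: nodes are measured regions, edges are
# supra-threshold pairwise couplings between them.
G = nx.Graph([("IFG", "SFG"), ("SFG", "IPS"), ("IFG", "IPS"), ("IPS", "MFG")])

case_eff = nodal_efficiency(G, "IPS")
td_controls = [0.92, 0.88, 0.95, 0.90, 0.85, 0.91, 0.89, 0.93]  # hypothetical TD values
t, p = crawford_howell(case_eff, td_controls)
print(f"case nodal efficiency = {case_eff:.2f}, t = {t:.2f}, p = {p:.3f}")
```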

Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity

Procedia PDF Downloads 50
371 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder

Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada

Abstract:

From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without intention or awareness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together (e.g., 'tokibu', 'tipolu'), with no pauses between them (e.g., 'tokibutipolugopilatokibu') and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented ('tokibu') from new sequences never presented together during exposure ('kipopi'), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC task to test SL, it has come under increasing criticism because it is an offline, post-learning task that only assesses the outcome of the learning that occurred during the preceding exposure phase and that might be affected by factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP). One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and chronological-age-matched controls with typical language development (TLD), who were exposed to an auditory stream containing eight three-syllable nonsense words, four with high TPs and four with low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words' predictability. Moreover, to ascertain whether prior knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children in the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP 'words'. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties.
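
As a concrete illustration of the transitional-probability statistic described above, the following sketch computes TPs over a toy syllable stream assembled from the abstract's example 'words'; the stream itself is simulated.

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """TP(x -> y) = count(xy) / count(x), over a continuous syllable stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# Toy exposure stream: two tri-syllabic nonsense 'words' concatenated without
# pauses, as in the paradigm described above (syllables taken from the abstract).
random.seed(0)
words = [["to", "ki", "bu"], ["ti", "po", "lu"]]
stream = [syll for _ in range(50) for syll in random.choice(words)]

tps = transitional_probabilities(stream)
print("within-word TP  to->ki:", round(tps[("to", "ki")], 2))           # ~1.0
print("between-word TP bu->ti:", round(tps.get(("bu", "ti"), 0.0), 2))  # ~0.5
```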

Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation

Procedia PDF Downloads 186
370 Thermal Stress and Computational Fluid Dynamics Analysis of Coatings for High-Temperature Corrosion

Authors: Ali Kadir, O. Anwar Beg

Abstract:

Thermal barrier coatings are among the most popular methods for providing corrosion protection in high-temperature applications, including aircraft engine systems, external spacecraft structures, rocket chambers, etc. Many different materials are available for such coatings, of which ceramics generally perform best. Motivated by these applications, the current investigation presents detailed finite element simulations of coating stress analysis for a 3-dimensional, 3-layered model of a test sample representing a typical gas turbine component scenario. Structural steel is selected for the main inner layer, titanium (Ti) alloy for the middle layer, and silicon carbide (SiC) for the outermost layer. The model dimensions are 20 mm (width), 10 mm (height), and three 1 mm deep layers. ANSYS software is employed to conduct three types of analysis: static structural, thermal stress, and computational fluid dynamics erosion/corrosion analysis (via ANSYS FLUENT). The specified geometry, which corresponds exactly to corrosion test samples, is discretized using a body-sizing meshing approach comprising mainly tetrahedral cells. Refinements were concentrated at the connection points between the layers to shift the focus towards the static effects dissipated between them. A detailed grid independence study is conducted to confirm the accuracy of the selected mesh densities. To recreate gas turbine scenarios, static loading and thermal environment conditions of up to 1000 N and 1000 K are imposed in the stress analysis simulations. The default solver was used to set the controls for the simulation, with one side of the model set as a fixed support while the opposite side was subjected to a tabular force of 500 and 1000 newtons. Equivalent elastic strain, total deformation, equivalent stress, and strain energy were computed for all cases. Each analysis was duplicated twice, removing one of the layers each time, to allow testing of the static and thermal effects with each of the coatings. An ANSYS FLUENT simulation was conducted to study the effect of corrosion on the model under similar thermal conditions. The momentum and energy equations were solved, and the viscous heating option was applied to better represent the thermal physics of heat transfer between the layers of the structures. A Discrete Phase Model (DPM) in ANSYS FLUENT was employed, which allows for the injection of a continuous, uniform stream of air particles onto the model, thereby enabling calculation of the corrosion factor caused by hot air injection (particles prescribed a 5 m/s velocity and 1273.15 K). Extensive visualization of results is provided. The simulations reveal interesting features associated with coating response to realistic gas turbine loading conditions, including significantly different stress concentrations with different coatings.
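
The abstract reports simulated results rather than formulas, but the quantity driving stress in such layered coatings is the thermal mismatch between coating and substrate. As a rough, first-order sketch (thin coating on a thick substrate, generic handbook constants rather than the study's inputs):

```python
# Rough biaxial thermal-mismatch stress for a thin coating on a thick substrate:
# sigma_c ~ E_c * (alpha_substrate - alpha_coating) * dT / (1 - nu_c).
# Constants below are typical handbook values, not those used in the study.

LAYERS = {
    # name: (E [GPa], CTE [1e-6 / K], Poisson's ratio)
    "SiC coating":    (410, 4.0, 0.14),
    "Ti alloy layer": (114, 8.6, 0.34),
}
ALPHA_STEEL = 12.0   # structural-steel substrate CTE [1e-6 / K]
DT = 1000.0 - 293.0  # heating from room temperature to ~1000 K

for name, (E_gpa, alpha, nu) in LAYERS.items():
    sigma_mpa = E_gpa * 1e3 * (ALPHA_STEEL - alpha) * 1e-6 * DT / (1 - nu)
    print(f"{name:15s}: ~{sigma_mpa:6.0f} MPa (tension if positive)")
```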

Keywords: thermal coating, corrosion, ANSYS FEA, CFD

Procedia PDF Downloads 134
369 Influence of Mandrel’s Surface on the Properties of Joints Produced by Magnetic Pulse Welding

Authors: Ines Oliveira, Ana Reis

Abstract:

Magnetic Pulse Welding (MPW) is a cold solid-state welding process accomplished by the electromagnetically driven, high-speed, low-angle impact between two metallic surfaces. It shares the working principle of Explosive Welding (EXW), i.e., it is based on the collision of two parts at high impact speed, in this case propelled by electromagnetic force. Under proper conditions, i.e., flyer velocity and collision point angle, a permanent metallurgical bond can be achieved between widely dissimilar metals. MPW has been considered a promising alternative to conventional welding processes and advantageous compared with other impact processes. Nevertheless, current MPW applications are mostly academic. Despite the existing knowledge, the lack of consensus regarding several aspects of the process calls for further investigation. As a result, the mechanical resistance, morphology, and structure of the weld interface in MPW of the Al/Cu dissimilar pair were investigated. The effect of process parameters, namely gap, standoff distance, and energy, was studied. It was shown that welding only takes place if the process parameters are within an optimal range. Additionally, the formation of intermetallic phases cannot be completely avoided in the welding of the Al/Cu dissimilar pair by MPW. Depending on the process parameters, the intermetallic compounds can appear as a continuous layer or as small pockets. The thickness and the composition of the intermetallic layer depend on the processing parameters. Different intermetallic phases can be identified, meaning that different temperature-time regimes can occur during the process. It was also found that lower pulse energies are preferable. The relationship between energy increase and melting is possibly related to multiple sources of heating. Higher values of pulse energy are associated with higher induced currents in the part, meaning that more Joule heating will be generated. In addition, more energy means higher flyer velocity; the air existing in the gap between the parts to be welded is expelled, and the aerodynamic drag (fluid friction) is proportional to the square of the velocity, further contributing to the generation of heat. As the kinetic energy also increases with the square of velocity, the dissipation of this energy through plastic work and jet generation will also contribute to an increase in temperature. To reduce intermetallic phases, porosity, and melt pockets, pulse energy should be minimized. The bond formation is affected not only by the gap, standoff distance, and energy but also by the mandrel's surface conditions. No clear correlation was identified between surface roughness/scratch orientation and joint strength. Nevertheless, the appearance of the interface (thickness of the intermetallic layer, porosity, presence of macro/microcracks) is clearly affected by the surface topology. Welding was not established on oil-contaminated surfaces, meaning that the jet action is not enough to completely clean the surface.

Keywords: bonding mechanisms, impact welding, intermetallic compounds, magnetic pulse welding, wave formation

Procedia PDF Downloads 206
368 Unfolding Architectural Assemblages: Mapping Contemporary Spatial Objects' Affective Capacity

Authors: Panagiotis Roupas, Yota Passia

Abstract:

This paper aims at establishing an index of design mechanisms, immanent in spatial objects, based on the affective capacity of their material formations. While spatial objects (design objects, buildings, urban configurations, etc.) are regarded, within the premises of assemblage theory, as systems composed of interacting parts, their ability to affect and to be affected has not yet been mapped or sufficiently explored. This ability lies in excess, a latent potentiality they contain, not transcendental but immanent in their pre-subjective aesthetic power. As spatial structures are theorized as assemblages, composed of heterogeneous elements that enter into relations with one another, and since all assemblages are parts of larger assemblages, their components' ability to engage is contingent. We thus seek to unfold the mechanisms inherent in spatial objects that allow the constituent parts of design assemblages to perpetually enter into new assemblages. To map an architectural assemblage's affective ability, spatial objects are analyzed along two axes. The first axis focuses on the relations that the assemblage's material and expressive components develop in order to enter the assemblages. Material components refer to those material elements that an assemblage requires in order to exist, while expressive components include non-linguistic elements (sense impressions) as well as linguistic ones (beliefs). The second axis records the processes known as a-signifying signs, or a-signs, which are the triggering mechanisms able to territorialize or deterritorialize, stabilize or destabilize the assemblage and thus allow it to assemble anew. As a-signs cannot be isolated from matter, we point to their resulting effects, which, without entering the linguistic level, are expressed in terms of intensity fields: modulations, movements, speeds, rhythms, spasms, etc. They belong to a molecular level where they operate in the pre-subjective world of perceptions, affects, drives, and emotions. A-signs have been introduced as intensities that transform the object beyond meaning, beyond fixed or known cognitive procedures. To that end, from an archive of more than 100 spatial objects by contemporary architects and designers, we have created an affective mechanisms index, in which each a-sign is connected with the list of effects it triggers and which thoroughly defines it. And vice versa: the same effect can be triggered by different a-signs, allowing the design object to lie in a perpetual state of becoming. To define spatial objects, a-signs are categorized in terms of their aesthetic power to affect and to be affected on the basis of the general categories of form, structure, and surface. Thus, each part's degree of contingency is evaluated and measured; finally, a-signs are introduced as material information that is immanent in the spatial object while conferring no meaning: they convey information without semantic content. Through this index, we are able to analyze and direct the final form of the spatial object while at the same time establishing a mechanism to measure its continuous transformation.

Keywords: affective mechanisms index, architectural assemblages, a-signifying signs, cartography, virtual

Procedia PDF Downloads 126
367 An Introspective Look into Hotel Employees' Career Satisfaction

Authors: Anastasios Zopiatis, Antonis L. Theocharous

Abstract:

In the midst of a fierce war for talent, the hospitality industry is seeking new and innovative ways to enrich its image as an employer of choice rather than of necessity. Historically, the industry's professions have been portrayed as 'unattractive' due to their repetitious nature, long and unsocial working schedules, below-average remuneration, and the mental and physical demands of the job. Aligned with the industry, hospitality and tourism scholars embarked on a journey to investigate pertinent topics with the aim of enhancing our conceptual understanding of the elements that influence employees in the hospitality world of work. Topics such as job involvement, commitment, job and career satisfaction, and turnover intentions became the focal points of a multitude of relevant empirical and conceptual investigations. Nevertheless, gaps and inconsistencies in existing theories, resulting both from the volatile complexity of the relationships governing human behavior in the hospitality workplace and from the academic community's unopposed acceptance of theoretical frameworks propounded mainly in the United States and United Kingdom years ago, necessitate our continuous vigilance. Thus, in an effort to enhance and enrich the discourse, we set out to investigate the relationship between intrinsic and extrinsic job satisfaction traits and the individual's career satisfaction and subsequent intention to remain in the hospitality industry. Reflecting on existing literature, a quantitative survey was developed and administered, face-to-face, to 650 individuals working as full-time employees in 4- and 5-star hotel establishments in Cyprus, and a multivariate statistical analysis method, namely Structural Equation Modeling (SEM), was utilized to determine whether relationships existed between constructs as a means to either accept or reject the hypothesized theory. Findings, of interest to both industry stakeholders and academic scholars, suggest that the individual's future intention to remain within the industry is primarily associated with extrinsic job traits. Our findings revealed that positive associations exist between extrinsic job traits and both career satisfaction and future intention. In contrast, when investigating the relationship of intrinsic traits, a positive association was revealed only with career satisfaction. Apparently, the local industry's environmental factors of seasonality, excessive turnover, and overdependence on seasonal and part-time migrant workers prevent industry stakeholders from effectively investing the time and resources in the development and professional growth of their employees. Consequently, intrinsic job satisfaction factors such as advancement, growth, and achievement take a back seat to the more materialistic extrinsic factors. Findings from the subsequent mediation analysis support the notion that intrinsic traits can positively influence future intention only indirectly, through career satisfaction, whereas extrinsic traits can positively impact both career satisfaction and future intention, both directly and indirectly.
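
As an illustration of the mediation logic reported in the findings (intrinsic traits influencing future intention only indirectly, through career satisfaction), here is a minimal indirect-effect computation on synthetic data; the variable names and coefficients are invented and are not the study's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 650  # matches the survey's sample size; the data below are synthetic

# Stand-ins for the survey constructs (hypothetical values, invented effects):
intrinsic = rng.normal(size=n)                              # intrinsic job traits
career = 0.5 * intrinsic + rng.normal(scale=0.8, size=n)    # career satisfaction
intention = 0.6 * career + rng.normal(scale=0.8, size=n)    # intention to remain

# Path a: predictor -> mediator.
a = sm.OLS(career, sm.add_constant(intrinsic)).fit().params[1]
# Path b and direct effect c': outcome on mediator and predictor jointly.
fit = sm.OLS(intention, sm.add_constant(np.column_stack([career, intrinsic]))).fit()
b, c_prime = fit.params[1], fit.params[2]

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```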

Keywords: career satisfaction, Cyprus, hotel employees, structural equation modeling, SEM

Procedia PDF Downloads 281
366 Effect of Laser Ablation OTR Films on the Storability of Endive and Pak Choi Baby Vegetables in Modified Atmosphere Conditions

Authors: In-Lee Choi, Min Jae Jeong, Jun Pill Baek, Ho-Min Kang

Abstract:

As vegetable consumption trends have changed, more convenient forms such as fresh-cut vegetables, sprouts, and baby vegetables are increasingly used instead of whole vegetables. The baby vegetables selected for this study contain various functional compounds but have a short shelf life. This study was conducted to improve storability by using suitable laser-ablated OTR (oxygen transmission rate) films. The baby vegetables used in this research, endive (Cichorium endivia L.) and pak choi (Brassica rapa chinensis), were around 10 cm in height and were cultivated in a glass greenhouse for 3 weeks. Harvested endive and pak choi were stored at 8 °C for 5 days, packed in PP (polypropylene) containers covered with different laser-ablated OTR films (DaeRyung Co., Ltd.) of 1,300 cc, 10,000 cc, 20,000 cc, and 40,000 cc/m²·day·atm, plus a control (perforated film), sealed with a heat-sealing machine (SC200-IP, Kumkang, Korea). All samples were replicated 5 times. Statistical analysis was carried out using Microsoft Excel 2010, and results are expressed with standard deviations. The fresh weight loss rate of both baby vegetables was less than 0.3% at maximum in the treated films. By contrast, the control showed a weight loss rate of around 3.0% on the final storage day, reflecting a loss of salable quantity. Endive showed a maximum carbon dioxide content of less than 2.0% in the 20,000 cc and 40,000 cc treatments. Oxygen content was maintained between 17 and 20% in endive and between 19 and 20% in pak choi. The ethylene concentration of both vegetables was slightly lower in the 20,000 cc treatment than in the others on the final storage day, without statistical significance. In the case of hardness, the 40,000 cc film showed a slightly higher value for both baby vegetables, without statistical significance. Visual quality was good in the 10,000 cc and 20,000 cc treatments for endive and pak choi, and no off-flavor appeared in either vegetable. The chlorophyll (SPAD-502, Minolta, Japan) value of endive remained similar to the initial value in all treatments except 20,000 cc, which was slightly lower. The chlorophyll value of pak choi decreased in all treatments compared with the initial value but did not differ significantly among treatments. Leaf color (CR-400, Minolta, Japan) changed significantly in the 40,000 cc treatment for endive. In the case of pak choi, all treatments started yellowing, as indicated by an increasing Hunter b value, with the control increasing substantially. Based on these results, the 10,000 cc film was the most suitable packaging film for storing endive, and the 20,000 cc film for pak choi, maintaining good quality.

Keywords: carbon dioxide, shelf-life, visual quality, pak choi

Procedia PDF Downloads 787
365 Numerical Analyses of Dynamics of Deployment of PW-Sat2 Deorbit Sail Compared with Results of Experiment under Micro-Gravity and Low Pressure Conditions

Authors: P. Brunne, K. Ciechowska, K. Gajc, K. Gawin, M. Gawin, M. Kania, J. Kindracki, Z. Kusznierewicz, D. Pączkowska, F. Perczyński, K. Pilarski, D. Rafało, E. Ryszawa, M. Sobiecki, I. Uwarowa

Abstract:

The large amount of space debris nowadays constitutes a real threat to operating spacecraft; therefore, the main purpose of the PW-Sat2 team was to create a system that could help cleanse the Earth's orbit after each small satellite's mission. After 4 years of development, a motorless, low-energy-consumption, and low-weight system has been created. During a series of tests, the system has shown highly reliable efficiency. The PW-Sat2 deorbit system is a square-shaped sail which covers an area of 4 m². The sail surface is made of 6 μm aluminized Mylar film which is stretched across 4 diagonally placed arms, each consisting of two C-shaped flat springs and enveloped in Mylar sleeves. The sail is coiled using a special, custom-designed folding stand that provides automation and repeatability of the sail unwinding tests and is placed in a container with an inner diameter of 85 mm. In the final configuration the deorbit system weighs ca. 600 g and occupies 0.6U (in accordance with the CubeSat standard). The sail's release system requires a minimal amount of power, being based on a thermal knife that burns through the Dyneema wire which holds the system before deployment. The sail is pushed out of the container to a safe distance (20 cm away) from the satellite. The energy for the deployment is provided entirely by the coiled C-shaped flat springs, which, during release, unfold the sail surface. To avoid dynamic effects on the satellite's structure, there is a rotational link between the sail and the satellite's main body. To obtain complete knowledge about the complex dynamics of the deployment, a number of experiments have been performed in varied environments. A numerical model of the dynamics of the sail's deployment has been built and is still under continuous development. Currently, the integration of the flight model and deorbit sail is being performed. The launch is scheduled for February 2018. At the same time, in cooperation with the United Nations Office for Outer Space Affairs, sail models and the requested facilities are being prepared for the sail deployment experiment under micro-gravity and low-pressure conditions at the Bremen Drop Tower, Germany. Results of those tests will provide ultimate and wide-ranging knowledge about deployment in the space environment to which the system will be exposed during its mission. Outcomes of the numerical model and tests will be compared afterwards and will help the team in building a reliable and correct model of a very complex phenomenon: the deployment of 4 C-shaped flat springs with a surface attached. The verified model could be used, inter alia, to investigate whether the PW-Sat2 sail is scalable and how far it is possible to go with enlargement when creating systems for bigger satellites.

Keywords: cubesat, deorbitation, sail, space, debris

Procedia PDF Downloads 285
364 A System for Preventing Inadvertent Exposure of Staff Present outside the Operating Theater: Description and Clinical Test

Authors: Aya Al Masri, Kamel Guerchouche, Youssef Laynaoui, Safoin Aktaou, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: Mobile C-arms move throughout the operating rooms of the operating theater. Being designed to move between rooms, they are not equipped with relays to retrieve the exposure information and export it outside the room. Therefore, no light signaling is available outside the room to warn staff of X-ray emission. Inadvertent exposure of staff outside the operating theater is a real problem for radiation protection. The French standard NFC 15-160 requires that: (1) access to any room containing an X-ray emitting device must be controlled by light signage so that it cannot be inadvertently crossed, and (2) an emergency button must be provided to stop X-ray emission. This study presents a system that we developed to meet these requirements and the results of its clinical test. Materials and methods: The system is composed of two communicating boxes. The 'DetectBox' is to be installed inside the operating theater; it identifies the various operation states of the C-arm by analyzing its power supply signal and communicates (in wireless mode) with the second box (AlertBox). The 'AlertBox' can operate in socket or battery mode and is to be installed outside the operating theater; it detects and reports the state of the C-arm by emitting a real-time light signal. The latter can have three different colors: red when the C-arm is emitting X-rays, orange when it is powered on but does not emit X-rays, and green when it is powered off. The two boxes communicate over a radiofrequency link operating exclusively in the 'Industrial, Scientific and Medical (ISM)' frequency bands, which allows the coexistence of several on-site warning systems without communication conflicts (interference). Taking into account the complexity of performing electrical works in the operating theater (for reasons of hygiene and continuity of medical care), this system (having a size <10 cm²) works in complete safety without any intrusion into the mobile C-arm and does not require specific electrical installation work. The system is equipped with an emergency button that stops X-ray emission. The system has been clinically tested. Results: The clinical test of the system shows that: it detects X-rays of both high and low energy (50-150 kVp) and high and low photon flux (0.5-200 mA), even when emitted for a very short time (<1 ms); the probability of false detection is < 10^-5; it operates under all acquisition modes (continuous, pulsed, fluoroscopy mode, image mode, subtraction, and movie mode); and it is compatible with all C-arm models and brands. We have also tested the communication between the two boxes (DetectBox and AlertBox) under several conditions: (1) unleaded rooms, (2) leaded rooms, and (3) rooms with particular configurations (airlocks, great distances, concrete walls, 3 mm of lead). The result of these last tests was positive. Conclusion: This system is a reliable tool to alert staff present outside the operating room to X-ray emission and ensure their radiation protection.

Keywords: clinical test, inadvertent staff exposure, light signage, operating theater

Procedia PDF Downloads 123
363 Integration of “FAIR” Data Principles in Longitudinal Mental Health Research in Africa: Lessons from a Landscape Analysis

Authors: Bylhah Mugotitsa, Jim Todd, Agnes Kiragga, Jay Greenfield, Evans Omondi, Lukoye Atwoli, Reinpeter Momanyi

Abstract:

The INSPIRE network aims to build an open, ethical, sustainable, and FAIR (Findable, Accessible, Interoperable, Reusable) data science platform, particularly for longitudinal mental health (MH) data. While studies have been done at the clinical and population level, there are still limitations in data and research in LMICs, which pose a risk of underrepresentation of mental disorders. It is vital to examine the existing longitudinal MH data, focusing on how FAIR the datasets are. This landscape analysis aimed to provide both the overall level of evidence of the availability of longitudinal datasets and the degree of consistency in the longitudinal studies conducted. Utilizing AI prompts proved instrumental in streamlining the analysis process, facilitating access to, crafting code snippets for, and the categorization and analysis of extensive data repositories related to depression, anxiety, and psychosis in Africa. Leveraging artificial intelligence (AI), we filtered through over 18,000 scientific papers spanning from 1970 to 2023. This AI-driven approach enabled the identification of 228 longitudinal research papers meeting the inclusion criteria. Quality assurance revealed 10% incorrectly identified articles and 2 duplicates, underscoring the prevalence of longitudinal MH research in South Africa, with a focus on depression. From the analysis, evaluating data and metadata adherence to FAIR principles remains crucial for enhancing the accessibility and quality of MH research in Africa. While AI has the potential to enhance research processes, challenges such as privacy concerns and data security risks must be addressed. Ethical and equity considerations in data sharing and reuse are also vital. There is a need for collaborative efforts across disciplinary and national boundaries to improve the Findability and Accessibility of data. Current efforts should also focus on creating integrated data resources and tools to improve the Interoperability and Reusability of MH data. Practical steps for researchers include careful study planning, data preservation, machine-actionable metadata, and promoting data reuse to advance science and improve equity. Metrics and recognition should be established to incentivize adherence to FAIR principles in MH research.

Keywords: longitudinal mental health research, data sharing, fair data principles, Africa, landscape analysis

Procedia PDF Downloads 82
362 Prediction of Sepsis Illness from Patients' Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis

Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante

Abstract:

The systems that record patient care information, known as Electronic Medical Records (EMRs), and those that monitor patients' vital signs, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Several lines of research have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis in patients under vital signs monitoring. Sepsis is an organ dysfunction caused by a dysregulated response of the patient to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Previous works usually combined medical, statistical, mathematical, and computational models to develop detection methods for early prediction, seeking higher accuracy with the smallest number of variables. Among other techniques, one can find research using survival analysis, expert systems, machine learning, and deep learning that reached strong results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point was calculated using the median of all patients' variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector), and the second derivative (acceleration vector) of the variables to evaluate their behavior. We then construct a prediction model based on a Long Short-Term Memory (LSTM) network, including these derivatives as explanatory variables. The accuracy of prediction 6 hours before the onset of sepsis reached 83.24% when considering only the vital signs; by including the position, velocity, and acceleration vectors, we obtained 94.96%. The data are collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
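
A minimal sketch of the feature construction described above (position, velocity, and acceleration of the hourly vitals vector feeding an LSTM classifier) might look as follows; the data are synthetic, and the framework (PyTorch) and architecture details are assumptions, not those of the study.

```python
import numpy as np
import torch
import torch.nn as nn

# Hourly vital signs for one patient: (T hours, n variables); synthetic here.
rng = np.random.default_rng(0)
T, n = 24, 6
vitals = rng.normal(size=(T, n)).cumsum(axis=0)

# Position, velocity (first finite difference), and acceleration (second
# difference), stacked as explanatory variables as described above.
vel = np.diff(vitals, axis=0, prepend=vitals[:1])
acc = np.diff(vel, axis=0, prepend=vel[:1])
features = np.concatenate([vitals, vel, acc], axis=1)  # shape (T, 3n)

class SepsisLSTM(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, T, n_features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))  # risk at the last hour

model = SepsisLSTM(features.shape[1])
x = torch.tensor(features, dtype=torch.float32).unsqueeze(0)
print("untrained predicted risk:", float(model(x)))
```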

Keywords: dynamic analysis, long short-term memory, prediction, sepsis

Procedia PDF Downloads 122
361 The Role of Emotional Intelligence in the Manager's Psychophysiological Activity during a Performance-Review Discussion

Authors: Mikko Salminen, Niklas Ravaja

Abstract:

Emotional intelligence (EI) consists of skills for monitoring one's own emotions and the emotions of others, skills for discriminating between different emotions, and skills for using this information in thinking and action. EI enhances, for example, work outcomes and organizational climate. We suggest that the role and manifestations of EI should also be studied in real leadership situations, especially during emotional, social interaction. Leadership is essentially a process of influencing others to reach a certain goal. This influencing happens through managerial processes and computer-mediated communication (e.g., e-mail) but also face-to-face, where facial expressions have a significant role in conveying emotional information. Persons with high EI are typically perceived more positively, and they have better social skills. We hypothesize that during social interaction, high EI enhances the ability to detect others' emotional states and to control one's own emotional expressions. We suggest that emotionally intelligent leaders experience less stress during social leadership situations, since they have better skills for dealing with the related emotional work. Thus, high-EI leaders would be more able to enjoy these situations, but also be more efficient in choosing appropriate expressions for building constructive dialogue. We suggest that emotionally intelligent leaders show more positive emotional expressions than low-EI leaders. To study these hypotheses, we observed the performance review discussions of 40 leaders (24 female) with 78 (45 female) of their followers. Each leader held a discussion with two followers. Psychophysiological methods were chosen because they provide objective and continuous data over the whole duration of the discussions. We recorded sweating of the hands (electrodermal activation) with electrodes placed on the fingers of the non-dominant hand to assess the stress-related physiological arousal of the leaders. In addition, facial electromyography was recorded from the cheek (zygomaticus major, activated during, e.g., smiling) and periocular (orbicularis oculi, activated during smiling) muscles using electrode pairs placed on the left side of the face. Each leader's trait EI was measured with a 360-degree questionnaire, filled in by the leader's followers, peers, managers, and by the leaders themselves. High-EI leaders had less sweating of the hands (p = .007) than low-EI leaders. It is thus suggested that the high-EI leaders experienced less physiological stress during the discussions. Also, high scores on the factor 'Using of emotions' were related to more facial muscle activation indicating positive emotional expressions (cheek muscle: p = .048; periocular muscle: p = .076, marginally significant). The results imply that emotionally intelligent managers are positively relaxed during social leadership situations such as a performance review discussion. The current study also highlights the importance of EI in face-to-face social interaction, given the central role facial expressions have in interaction situations. The study also offers new insight into the biological basis of trait EI. It is suggested that the identification, forming, and intelligent use of facial expressions are skills that could be trained during leadership development courses.

Keywords: emotional intelligence, leadership, performance review discussion, psychophysiology, social interaction

Procedia PDF Downloads 242
360 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence

Authors: Sogand Barghi

Abstract:

The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.

Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting

Procedia PDF Downloads 65
363 TRAC: A New Software-Based Track Circuit for Traffic Regulation

Authors: Jérôme de Reffye, Marc Antoni

Abstract:

Following the development of the ERTMS system, we think it is worthwhile to develop another software-based track circuit system which would fit secondary railway lines, with an easy-to-implement design and low sensitivity to rail-wheel impedance variations. We called this track circuit 'Track Railway by Automatic Circuits' (TRAC). To be implemented internationally, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent of the component used in Germany called 'counting axles' ('compteur d'essieux' in French). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal. The set of frequencies related to the track sections is a set of orthogonal functions in a Hilbert space. Thus the failure probability of track section separation is precisely calculated on the basis of the signal-to-noise ratio. The SNR is a function of the level of traction current conducted by the rails. This is the reason why we developed a very powerful algorithm to reject noise and jamming and obtain an SNR compatible with the precision required for the track circuit and the SIL 4 level. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) train space localization is precisely defined by a calibration system. The operation bypasses the GSM-R radio system of the ERTMS system. Moreover, the track circuit is naturally protected against radio-type jammers. After the calibration operation, the track circuit is autonomous. ii) A mathematical topology adapted to train space localization by following the train through linear time filtering of the received signal. Track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations. We achieved a precision of one meter. Rail-ground and rail-wheel impedance sensitivity analyses gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain the results. This track circuit is already at Level 3 of the ERTMS system, and it will be much cheaper to implement and to operate. The traffic regulation is based on variable-length track sections. As the traffic grows, the maximum speed is reduced, and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
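
To illustrate the central idea, track sections keyed to mutually orthogonal continuous signals that remain separable by correlation even at low SNR, here is a toy simulation; the frequencies, window length, and noise level are invented for the example.

```python
import numpy as np

fs, dur = 8000, 1.0                     # sample rate [Hz], observation window [s]
t = np.arange(int(fs * dur)) / fs

# Harmonically related sinusoids are orthogonal over the window: a simple
# stand-in for the set of orthogonal functions (frequency values invented).
section_freqs = [100, 200, 300, 400]    # one carrier frequency per track section
basis = [np.sin(2 * np.pi * f * t) for f in section_freqs]

# A train occupying section 2 couples that section's tone into heavy noise
# (noise power ~25 vs. signal power 0.5, i.e. SNR around -17 dB).
rng = np.random.default_rng(7)
received = basis[2] + rng.normal(scale=5.0, size=t.size)

# Correlation (matched-filter) detection against each basis function.
scores = [abs(np.dot(received, b)) / t.size for b in basis]
print("correlation per section:", [round(s, 3) for s in scores])
print("occupied section:", int(np.argmax(scores)))
```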

Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling

Procedia PDF Downloads 327
358 A Conceptual Model of the 'Driver – Highly Automated Vehicle' System

Authors: V. A. Dubovsky, V. V. Savchenko, A. A. Baryskevich

Abstract:

The current trend in the automotive industry towards automated vehicles is creating new challenges related to human factors. This occurs because the driver is increasingly relieved of the need to be constantly involved in driving the vehicle, which can negatively impact his/her situation awareness when manual control is required, and decrease driving skills and abilities. These new problems need to be studied in order to ensure road safety during the transition towards self-driving vehicles. For this purpose, it is important to develop an appropriate conceptual model of the interaction between the driver and the automated vehicle, which could serve as a theoretical basis for the development of mathematical and simulation models to explore different aspects of driver behavior in different road situations. Well-known driver behavior models describe the impact of different stages of the driver's cognitive process on driving performance but do not describe how the driver controls and adjusts his actions. A more complete description of the driver's cognitive process, including the evaluation of the results of his/her actions, will make it possible to model various aspects of the human factor in different road situations more accurately. This paper presents a conceptual model of the 'driver - highly automated vehicle' system based on P. K. Anokhin's theory of functional systems, which is a theoretical framework for describing internal processes in purposeful living systems based on such notions as the goal and the desired and actual results of purposeful activity. A central feature of the proposed model is a dynamic coupling mechanism between the driver's decision to perform a particular action and changes in road conditions due to the driver's actions. This mechanism is based on the stage-by-stage evaluation of the deviations of the actual values of the parameters of the driver's action results from the expected values. The overall functional structure of the highly automated vehicle in the proposed model includes a driver/vehicle/environment state analyzer to coordinate the interaction between driver and vehicle. The proposed conceptual model can be used as a framework to investigate different aspects of human factors in transitions between automated and manual driving, for future improvements in driving safety, and for understanding how the driver-vehicle interface must be designed for comfort and safety. A major finding of this study is the demonstration that the theory of functional systems is promising and has the potential to describe the interaction of the driver with the vehicle and the environment.

Keywords: automated vehicle, driver behavior, human factors, human-machine system

Procedia PDF Downloads 141
357 The Quantum Theory of Music and Human Languages

Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories of language. This is an inventive, original, and innovative research thesis. It is a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folk song in a tone language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. To confirm the theory experimentally, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you request from the machine a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: language, music, sciences, quantum entanglement

Procedia PDF Downloads 76
356 Post-Liberal Perspective on Minorities' Visibility in Contemporary Visual Culture: The Case of Mizrahi Jews

Authors: Merav Alush Levron, Sivan Rajuan Shtang

Abstract:

From as early as their emergence in Europe and the US, the postmodern and post-colonial paradigms have formed the backbone of the visual culture field of study. The self-representation project of political minorities is studied, described, and explained within the premises and perspectives drawn from these paradigms, addressing the key issue they raised: modernism's crisis of representation. The struggle for self-representation, agency, and multicultural visibility sought to challenge the liberal pretense of universality and equality, hitting at its different blind spots on issues such as class, gender, race, sex, and nationality. This struggle yielded subversive identity and hybrid performances, including reclaiming, mimicry, and masquerading. These performances sought to defy the uniform, universal self, which forms the basis for the liberal, rational, enlightened subject. This research argues that this politics of representation is itself confined within liberal thought. Alongside post-colonialism and multiculturalism's contribution to undermining oppressive structures of power, generating diversity in cultural visibility, and exposing the failure of liberal colorblindness, this subversion is constituted in the visual field by way of confrontation, flying in the face of the universal law and relying on its ongoing comparison and attribution to this law. Relying on Deleuze and Guattari, this research sets out to draw theoretical and empirical attention to an alternative, post-liberal occurrence which has been taking place in the visual field in parallel to the counter-hegemonic phase and as a product of the political reality in the aftermath of the crisis of representation. It is no longer a counter-representation; rather, it is a motion of organic minor desire, progressing in the form of flows and generating what Deleuze and Guattari termed a deterritorialization of social structures. This discussion focuses on current post-liberal performances of 'Mizrahim' (Jewish Israelis of Arab and Muslim extraction) in the visual field in Israel. In television, video art, and photography, these performances challenge the issue of representation and generate a concrete peripheral Mizrahiness, realized in the visual organization of the photographic frame. Mizrahiness then transforms from 'confrontational' representation into a 'presence', flooding the visual sphere in plain sight, in a process of 'becoming'. The Mizrahi desire is exerted on the planes of sound, spoken language, the body, and the space where they appear. It removes from these planes the coding and stratification engendered by European dominance and rational, liberal enlightenment. This stratification, adhering to the hegemonic surface, is flooded not by way of resisting false consciousness or employing hybridity, but by way of the Mizrahi identity's own productive, material, immanent yearning. The Mizrahi desire reverberates with Mizrahi peripheral 'worlds of meaning', in which post-colonial interpretation almost invariably identifies a product of internalized oppression, and a recurrence thereof, rather than a source in itself (an 'offshoot, never a wellspring', as Nissim Mizrachi clarifies in his recent pioneering work). The peripheral Mizrahi performance 'unhooks itself', in Deleuze and Guattari's words, from the point of subjectification and interpretation and does not correspond with the partialness, absence, and split that mark post-colonial identities.

Keywords: desire, minority, Mizrahi Jews, post-colonialism, post-liberalism, visibility, Deleuze and Guattari

Procedia PDF Downloads 323
355 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method

Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat

Abstract:

Electrical Discharge Machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the part to be machined, both immersed in dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse-on time, interval time and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. Response surface methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions. RSM is not free from problems when it is applied to multi-factor and multi-response situations. A design of experiments (DOE) technique was applied to select the optimum machining conditions for machining AISI 4140 using EDM. The purpose of this paper is to determine the optimal factors of the electro-discharge machining (EDM) process and to investigate the feasibility of design of experiment techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of optimized settings of key machining factors, such as pulse-on time, gap voltage, flushing pressure, input current and duty cycle, on material removal and surface roughness has been carried out using a central composite design (CCD). The objective is to maximize the material removal rate (MRR). The central composite design data are used to develop second-order polynomial models with interaction terms. Insignificant coefficients are eliminated from these models using Student's t-test, and the goodness of fit is checked by an F-test. CCD is first used to determine the optimal factors of the EDM process for maximizing the MRR. The responses are then treated through an objective function to establish the same set of key machining factors that satisfy the optimization problem of the EDM process. The results demonstrate the better performance of CCD-data-based RSM for optimizing the EDM process.
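
As a concrete illustration of the second-order polynomial modeling described above, the following minimal Python sketch fits a quadratic response surface with interaction terms by ordinary least squares. The factor names, coded levels and the synthetic MRR response are illustrative assumptions, not the study's data.

# Minimal sketch: fit a second-order response surface with interaction terms.
# Factors (coded -1..1) and the synthetic MRR response are assumptions.
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Columns: intercept, linear terms, two-factor interactions, pure quadratics."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 3))      # e.g., pulse-on time, current, voltage
mrr = (5 + 2*X[:, 0] + 1.5*X[:, 1] - X[:, 0]*X[:, 1]
       - 0.8*X[:, 2]**2 + rng.normal(0, 0.1, 30))  # synthetic response

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, mrr, rcond=None)     # least-squares coefficients
pred = A @ beta
r2 = 1 - np.sum((mrr - pred)**2) / np.sum((mrr - mrr.mean())**2)
print("R^2 =", round(r2, 3))

In a CCD workflow, the fitted coefficients would then be screened with t-tests and the reduced model passed to an optimizer to locate the factor settings that maximize MRR.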

Keywords: electric discharge machining (EDM), modeling, optimization, CCRD

Procedia PDF Downloads 339
354 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting

Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero

Abstract:

In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting, and the second by electroforming from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured with a low-cost 3D FDM printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost-wax technique of ceramic shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. This material is heated to melt out the PLA, leaving an empty mold that is later filled with the molten metal. It is verified that the PLA models reduce cost and time compared with hand modeling in wax. In addition, one can manufacture parts with 3D printing that are not possible to create with manual techniques. However, the sculptures created with this technique have a size limit. The problem is that when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP-type printer allows obtaining smaller and more detailed pieces than the FDM. Such small models are quite difficult and complex to melt using the lost-wax technique of ceramic shell casting. As alternatives, there are microcasting and electroforming, which are specialized in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of introducing the model into a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt out the model and cure them. Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to properly distribute all the metal. Because microcasting requires expensive materials and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; for this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for using 3D printers, both with PLA and with resin, and first tests are being carried out to validate the electroforming process for micro-sculptures printed in resin using a DLP printer.

Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling

Procedia PDF Downloads 132
353 Test Rig Development for Up-to-Date Experimental Study of Multi-Stage Flash Distillation Process

Authors: Marek Vondra, Petr Bobák

Abstract:

Vacuum evaporation is a reliable and well-proven technology with a wide application range, frequently used in the food, chemical and pharmaceutical industries. Recently, numerous remarkable studies have been carried out to investigate the utilization of this technology in the area of wastewater treatment. One of the most successful applications of the vacuum evaporation principle is connected with seawater desalination. Since the 1950s, multi-stage flash distillation (MSF) has been the leading technology in this field, and it is still irreplaceable in many respects, despite a rapid increase in cheaper reverse-osmosis-based installations in recent decades. MSF plants are conveniently operated in countries with fluctuating seawater quality and at locations where a sufficient amount of waste heat is available. Nowadays, most MSF research is connected with the utilization of alternative heat sources and with hybridization, i.e. the merging of different types of desalination technologies. Some studies are concerned with the basic principles of the static flash phenomenon, but only a few scientists have recently focused on the fundamentals of continuous multi-stage evaporation. Limited measurement possibilities at operating plants and insufficiently equipped experimental facilities may be the reasons. The aim of the presented study was to design, construct and test an up-to-date test rig with an advanced measurement system which provides real-time monitoring of all the important operational parameters under various conditions. The whole system consists of a conventionally designed MSF unit with 8 evaporation chambers, a versatile heating circuit for different kinds of feed water (e.g. seawater, wastewater), a sophisticated system for the acquisition and real-time visualization of all the related quantities (temperature, pressure, flow rate, weight, conductivity, pH, water level, power input), access to a wide spectrum of operational media (salt, fresh and softened water, steam, natural gas, compressed air, electrical energy), and integrated transparent features which enable direct visual control of selected physical mechanisms (water evaporation in chambers, water level right before the brine and distillate pumps). Thanks to the adjustable process parameters, it is possible to operate the test unit at the desired operational conditions, which allows researchers to carry out statistical design and analysis of experiments. Valuable results obtained in this manner could be further employed in simulations and process modeling. The first experimental tests confirm the correctness of the presented approach and promise interesting outputs in the future. The presented experimental apparatus enables flexible and efficient research of the whole MSF process.
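
To make the data-acquisition side of such a rig concrete, the following minimal Python sketch shows a time-stamped, multi-channel logging loop of the kind the measurement system implies. The channel names, sampling period and simulated readings are illustrative assumptions, not the rig's actual interface.

import csv, random, time
from datetime import datetime

CHANNELS = ["T_stage1_C", "p_stage1_kPa", "feed_flow_lpm", "conductivity_uS"]

def read_sensor(channel):
    # Placeholder for the real DAQ call; here it returns simulated readings.
    base = {"T_stage1_C": 55.0, "p_stage1_kPa": 15.0,
            "feed_flow_lpm": 12.0, "conductivity_uS": 48000.0}[channel]
    return base * (1 + random.uniform(-0.01, 0.01))

def log_loop(path, period_s=1.0, n_samples=5):
    """Append one time-stamped row of all channel readings per sampling period."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp"] + CHANNELS)
        for _ in range(n_samples):
            row = [datetime.now().isoformat()]
            row += [read_sensor(ch) for ch in CHANNELS]
            writer.writerow(row)
            f.flush()          # keep the file current for live visualization
            time.sleep(period_s)

log_loop("msf_run.csv")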

Keywords: design of experiment, multi-stage flash distillation, test rig, vacuum evaporation

Procedia PDF Downloads 384
352 University Curriculum Policy Processes in Chile: A Case Study

Authors: Victoria C. Valdebenito

Abstract:

Located within the context of accelerating globalization in the 21st-century knowledge society, this paper focuses on one selected university in Chile at which radical curriculum policy changes have been taking place, diverging from the traditional curriculum in Chile at the undergraduate level, as part of a larger investigation. Using a ‘policy trajectory’ framework and guided by an interpretivist approach to research, interview transcripts and institutional documents were analyzed in relation to the meso (university administration) and micro (academics) levels, to reveal the major themes emerging from the data. Within the case study, participants from the university administration and academic levels were selected via both the snowball technique and purposive selection; they therefore had different levels of seniority, with some having participated actively in the curriculum reform processes. A further ‘bigger picture’ analysis guided by critical theory was then undertaken, involving interrogation of underlying ideologies and of how political and economic interests influence the cultural production of policy. The case-study university was selected because it represents a traditional, long-established university setting in the country, undergoing curriculum changes based on international trends such as the competency model and the liberal arts, and because it is representative of a particular socioeconomic sector of the country. Access to the university was gained through email contact. Qualitative research methods were used, namely interviews and the analysis of institutional documents. In all, 18 people were interviewed; the number was defined by the point at which the saturation criterion was met. Semi-structured interview schedules were based on the four research questions about influences, policy texts, policy enactment and longer-term outcomes. Triangulation of information was used for the analysis. While there was no intention to generalize the specific findings of the case study, the results of the research were used as a focus for engagement with broader themes, often evident in global higher education policy developments. The research results were organized around major themes in three of the four contexts of the ‘policy trajectory’. Regarding the context of influences and the context of policy text production, themes relate to the hegemony exercised by first-world countries’ universities in the higher education field, its associated neoliberal ideology with accountability and the discourse of continuous improvement, the local responses to those pressures, and the value of interdisciplinarity. Finally, regarding the context of policy practices and effects (enactment), themes emerged around the impacts of the curriculum changes on university staff and students, and around resistance amongst academics. The research concluded with a few recommendations that potentially provide ‘food for thought’ beyond the localized settings of this study, as well as possibilities for further research.

Keywords: curriculum, global-local dynamics, higher education, policy, sociology of education

Procedia PDF Downloads 73
351 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks

Authors: Sulemana Ibrahim

Abstract:

Food security remains a paramount global challenge, with vulnerable regions grappling with hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimize supply chains, and enhance food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models harnessing remote sensing data, historical crop yield records, and meteorological data to forecast crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies are intended to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. The study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security, and the proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, offering actionable insights to mitigate food insecurity. The holistic approach, converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks, aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
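
As a sketch of the linear-programming side mentioned above, the following Python example solves a classic transportation problem: minimizing shipping cost from warehouses to regions subject to supply and demand constraints. The costs, supplies and demands are illustrative numbers, not data from the study.

# Minimal transportation-problem sketch with scipy; all numbers are synthetic.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],       # cost per tonne, warehouse i -> region j
                 [5.0, 3.0, 7.0]])
supply = np.array([120.0, 150.0])       # tonnes available per warehouse
demand = np.array([80.0, 100.0, 70.0])  # tonnes required per region

m, n = cost.shape
A_ub, b_ub = [], []
for i in range(m):                      # each warehouse ships at most its supply
    row = np.zeros(m * n); row[i*n:(i+1)*n] = 1
    A_ub.append(row); b_ub.append(supply[i])
for j in range(n):                      # each region receives at least its demand
    row = np.zeros(m * n); row[j::n] = -1
    A_ub.append(row); b_ub.append(-demand[j])

res = linprog(cost.ravel(), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=(0, None), method="highs")
print(res.x.reshape(m, n))              # optimal shipment plan
print("total cost:", res.fun)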

Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks

Procedia PDF Downloads 59
350 Hydrogen Production from Auto-Thermal Reforming of Ethanol Catalyzed by Tri-Metallic Catalyst

Authors: Patrizia Frontera, Anastasia Macario, Sebastiano Candamano, Fortunato Crea, Pierluigi Antonucci

Abstract:

The increase in world energy demand makes biomass an attractive energy source today, with a view to minimizing CO2 emissions and reducing global warming. Recently, COP-21, the international meeting on global climate change, defined the roadmap for sustainable worldwide development based on low-carbon fuels. Hydrogen is an energy vector able to substitute for conventional petroleum-derived fuels. Ethanol for hydrogen production represents a valid alternative to fossil sources due to its low toxicity, low production costs, high biodegradability, high H2 content and renewability. Ethanol conversion to generate hydrogen by a combination of partial oxidation and steam reforming reactions is generally called auto-thermal reforming (ATR). The ATR process is advantageous due to its low energy requirements and reduced formation of carbonaceous deposits. The catalyst plays a pivotal role in the ATR process, especially with regard to process selectivity and carbonaceous deposit formation. Bimetallic or trimetallic catalysts, as well as catalysts with doped-promoter supports, may exhibit higher activity, selectivity and deactivation resistance than the corresponding monometallic ones. In this work, NiMoCo/GDC, NiMoCu/GDC and NiMoRe/GDC catalysts (where GDC is the gadolinia-doped ceria support and the metal composition is 60:30:10 for all catalysts) have been prepared by the impregnation method. The support, gadolinia 0.2 doped ceria 0.8, was impregnated with metal precursors solubilized in aqueous ethanol solution (50%) at room temperature for 6 hours. After this, the catalysts were dried at 100°C for 8 hours and subsequently calcined at 600°C in order to obtain the metal oxides. Finally, active catalysts were obtained by a reduction procedure (H2 atmosphere at 500°C for 6 hours). All samples were characterized by different analytical techniques (XRD, SEM-EDX, XPS, CHNS, H2-TPR and Raman spectroscopy). Catalytic experiments (auto-thermal reforming of ethanol) were carried out in the temperature range 500-800°C under atmospheric pressure, using a continuous fixed-bed microreactor. Effluent gases from the reactor were analyzed by two Varian CP4900 chromatographs with TCD detectors. The analytical investigation focused on preventing coke deposition, metal sintering and sulfur poisoning. Hydrogen productivity, ethanol conversion and product distribution were measured and analyzed. At 600°C, all tri-metallic catalysts show their best performance: H2 + CO reach almost 77 vol.% in the final gases. The NiMoCo/GDC catalyst shows the best selectivity to hydrogen with respect to the other tri-metallic catalysts (41 vol.% at 600°C). On the other hand, NiMoCu/GDC and NiMoRe/GDC demonstrated higher sulfur poisoning resistance (up to 200 cc/min) with respect to the NiMoCo/GDC catalyst. The correlation between catalytic results and the surface properties of the catalysts will be discussed.

Keywords: catalysts, ceria, ethanol, gadolinia, hydrogen, nickel

Procedia PDF Downloads 151
349 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew with the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before an individual aircraft's entry into service and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system's predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as the test aircraft. According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the current APM in order to minimize the error between the predicted and measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the FCOM prediction error for the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
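
The following minimal Python sketch illustrates the adaptive lookup-table idea: a fuel-flow table indexed by altitude and Mach number is read by bilinear interpolation and nudged toward in-flight measurements with a small gain. The grid, gain and update rule are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

# Illustrative grid: altitude (ft) x Mach number -> fuel flow (kg/h),
# initialized from FCOM-like baseline data (here a uniform placeholder).
alts  = np.array([30000.0, 35000.0, 40000.0])
machs = np.array([0.70, 0.75, 0.80])
table = np.full((alts.size, machs.size), 1200.0)

def _cell(alt, mach):
    """Locate the enclosing grid cell and the interpolation weights."""
    i = int(np.clip(np.searchsorted(alts, alt) - 1, 0, alts.size - 2))
    j = int(np.clip(np.searchsorted(machs, mach) - 1, 0, machs.size - 2))
    ta = (alt - alts[i]) / (alts[i+1] - alts[i])
    tm = (mach - machs[j]) / (machs[j+1] - machs[j])
    return i, j, ta, tm

def predict(alt, mach):
    """Bilinear interpolation in the lookup table."""
    i, j, ta, tm = _cell(alt, mach)
    return ((1-ta)*(1-tm)*table[i, j]   + ta*(1-tm)*table[i+1, j]
          + (1-ta)*tm   *table[i, j+1] + ta*tm   *table[i+1, j+1])

def update(alt, mach, measured, gain=0.2):
    """Nudge the four surrounding nodes toward the in-flight measurement."""
    i, j, _, _ = _cell(alt, mach)
    table[i:i+2, j:j+2] += gain * (measured - predict(alt, mach))

for _ in range(25):                        # repeated samples at one flight condition
    update(36000.0, 0.77, measured=1290.0)
print(round(predict(36000.0, 0.77), 1))    # prediction converges toward 1290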

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 259
348 Assessment of Biochemical Marker Profiles and Their Impact on Morbidity and Mortality of COVID-19 Patients in Tigray, Ethiopia

Authors: Teklay Gebrecherkos, Mahmud Abdulkadir

Abstract:

The emergence and subsequent rapid worldwide spread of the COVID-19 pandemic have posed a global crisis, with a tremendously increasing burden of infection, morbidity, and mortality. Recent studies have suggested that severe cases of COVID-19 are characterized by massive biochemical, hematological, and inflammatory alterations whose synergistic effect is estimated to progress to multiple organ damage and failure. In this regard, biochemical monitoring of COVID-19 patients, based on comprehensive laboratory assessments and findings, is expected to play a crucial role in effective clinical management and in improving patient survival rates. However, biochemical markers that can inform COVID-19 patient risk stratification and predict clinical outcomes are currently scarce. This study aims to investigate the profiles of common biochemical markers and their influence on the severity of COVID-19 infection in Tigray, Ethiopia. Methods: A laboratory-based cross-sectional study was conducted from July to August 2020 at the Quiha College of Engineering, Mekelle University COVID-19 isolation and treatment center. Sociodemographic and clinical data were collected using a structured questionnaire. Whole blood was collected from each study participant, and serum samples were separated after delivery to the laboratory. Hematological biomarkers were analyzed using FACS count, while organ function tests and serum electrolytes were analyzed using ion-selective electrode methods on a Cobas 6000 series machine. Data were analyzed using SPSS v20. Results: A total of 120 SARS-CoV-2 patients were enrolled during the study. The participants ranged between 18 and 91 years, with a mean age of 52 (±108.8). The largest group of participants (40%) were aged 60 and above. Patients with multiple comorbidities tended to develop severe COVID-19, though the association was not statistically significant (p=0.34). Mann-Whitney U test analysis showed that neutrophil count (p=0.003), AST levels (p=0.050), serum creatinine (p=0.000), and serum sodium (p=0.015) were significantly associated with severe COVID-19 disease as compared to non-severe disease. Conclusion: The severity of COVID-19 was associated with higher age, the organ function markers AST and creatinine, serum Na+, and an elevated total neutrophil count. Thus, further studies need to be conducted to evaluate the alterations of biochemical biomarkers and their impact on COVID-19.
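
As a sketch of the Mann-Whitney U comparison reported above, the following Python example tests whether a biomarker differs between severe and non-severe groups. The creatinine-like values are synthetic, for illustration only.

# Illustrative Mann-Whitney U test between severe and non-severe groups.
from scipy.stats import mannwhitneyu

severe     = [1.4, 1.9, 2.3, 1.7, 2.8, 1.6, 2.1]   # e.g., serum creatinine, mg/dL
non_severe = [0.8, 1.0, 0.9, 1.1, 0.7, 1.2, 0.9, 1.0]

stat, p = mannwhitneyu(severe, non_severe, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")   # a small p suggests the distributions differ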

Keywords: COVID-19, biomarkers, mortality, Tigray, Ethiopia

Procedia PDF Downloads 29
347 Evaluation of Cryoablation Procedures in Treatment of Atrial Fibrillation from 3 Years' Experiences in a Single Heart Center

Authors: J. Yan, B. Pieper, B. Bucsky, B. Nasseri, S. Klotz, H. H. Sievers, S. Mohamed

Abstract:

Cryoablation is increasingly applied for the interventional treatment of paroxysmal (PAAF) or persistent atrial fibrillation (PEAF). In cardiac surgery, this procedure is often combined with coronary artery bypass graft (CABG) and valve operations. Three methods, differing in extent and mechanism, are practiced in our heart center: lone left atrial cryoablation, Cox-Maze IV and Cox-Maze III. 415 patients (68 ± 0.8 years, 68.2% male) with pre-existing atrial fibrillation who initially required either coronary or valve operations were enrolled and divided into 3 matched groups according to the deployed procedure: CryoLA group (cryoablation of the lone left atrium, n=94), Cox-Maze IV group (n=93) and Cox-Maze III group (n=8). All patients additionally received closure of the left atrial appendage (LAA) and regularly underwent three years of ambulatory follow-up assessments (3, 6, 9, 12, 18, 24, 30 and 36 months). Atrial fibrillation burden was assessed directly by means of a cardiac monitor (Reveal XT, Medtronic) or a 3-day Holter electrocardiogram. AF attack frequencies and their circadian patterns were systematically analyzed. Furthermore, anticoagulants and regular rate-/rhythm-controlling medications were evaluated and listed in terms of anti-rate and anti-rhythm regimens. Concerning PAAF treatment, the Cox-Maze IV procedure provided a therapeutically acceptable effect, as did lone left atrial (LA) cryoablation (5.25 ± 5.25% vs. 10.39 ± 9.96% AF burden, p > 0.05). Interestingly, the Cox-Maze III method presented a better short-term effect in PEAF therapy in comparison to lone LA cryoablation and Cox-Maze IV (0.25 ± 0.23% vs. 15.31 ± 5.99% and 9.10 ± 3.73% AF burden within the first year, p < 0.05), but this therapeutic advantage was lost during ongoing follow-up (26.65 ± 24.50% vs. 8.33 ± 8.06% and 15.73 ± 5.88% in the third follow-up year). In this way, lone LA cryoablation established its antiarrhythmic efficacy, and 69.5% of patients were released from vitamin K antagonists, while Cox-Maze IV freed 67.2% of patients from continuous anticoagulant medication. For all three procedures, recurrent AF episodes mostly lasted less than 60 minutes (p > 0.05). Regarding the circadian distribution of recurrent attacks, weighted over ongoing follow-up, lone LA cryoablation achieved and stabilized its antiarrhythmic effect over time, which was especially observed in the treatment of PEAF, while the antiarrhythmic effects of Cox-Maze IV and III weakened progressively. A similar pattern was observed for the circadian rhythm of recurring AF attacks. Furthermore, the rate control strategy was applied much more often than rhythm control to support and maintain the therapeutic successes obtained. Based on the experience of our heart center, lone LA cryoablation presented effects equivalent to the Cox-Maze IV and III procedures in the treatment of AF. These therapeutic successes were especially evident in patients suffering from persistent AF (PEAF). Additional supportive strategies, such as a rate control regimen, should be initiated and implemented according to appropriate criteria to improve the therapeutic effects of cryoablation.

Keywords: AF burden, atrial fibrillation, cardiac monitor, Cox-Maze, cryoablation, Holter, LAA

Procedia PDF Downloads 198
346 The Digital Desert in Global Business: Digital Analytics as an Oasis of Hope for Sub-Saharan Africa

Authors: David Amoah Oduro

Abstract:

In the ever-evolving terrain of international business, a profound revolution is underway, guided by the swift integration and advancement of disruptive technologies like digital analytics. In today's international business landscape, where competition is fierce, and decisions are data-driven, the essence of this paper lies in offering a tangible roadmap for practitioners. It is a guide that bridges the chasm between theory and actionable insights, helping businesses, investors, and entrepreneurs navigate the complexities of international expansion into sub-Saharan Africa. This practitioner paper distils essential insights, methodologies, and actionable recommendations for businesses seeking to leverage digital analytics in their pursuit of market entry and expansion across the African continent. What sets this paper apart is its unwavering focus on a region ripe with potential: sub-Saharan Africa. The adoption and adaptation of digital analytics are not mere luxuries but essential strategic tools for evaluating countries and entering markets within this dynamic region. With the spotlight firmly fixed on sub-Saharan Africa, the aim is to provide a compelling resource to guide practitioners in their quest to unearth the vast opportunities hidden within sub-Saharan Africa's digital desert. The paper illuminates the pivotal role of digital analytics in providing a data-driven foundation for market entry decisions. It highlights the ability to uncover market trends, consumer behavior, and competitive landscapes. By understanding Africa's incredible diversity, the paper underscores the importance of tailoring market entry strategies to account for unique cultural, economic, and regulatory factors. For practitioners, this paper offers a set of actionable recommendations, including the creation of cross-functional teams, the integration of local expertise, and the cultivation of long-term partnerships to ensure sustainable market entry success. It advocates for a commitment to continuous learning and flexibility in adapting strategies as the African market evolves. This paper represents an invaluable resource for businesses, investors, and entrepreneurs who are keen on unlocking the potential of digital analytics for informed market entry in Africa. It serves as a guiding light, equipping practitioners with the essential tools and insights needed to thrive in this dynamic and diverse continent. With these key insights, methodologies, and recommendations, this paper is a roadmap to prosperous and sustainable market entry in Africa. It is vital for anyone looking to harness the transformational potential of digital analytics to create prosperous and sustainable ventures in a region brimming with promise. In the ever-advancing digital age, this practitioner paper becomes a lodestar, guiding businesses and visionaries toward success amidst the unique challenges and rewards of sub-Saharan Africa's international business landscape.

Keywords: global analytics, digital analytics, sub-Saharan Africa, data analytics

Procedia PDF Downloads 67
345 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest must be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We then address the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
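
The following minimal Python sketch illustrates the straggler-tolerance idea behind coded computation: X is split into row blocks, encoded with a real-valued Vandermonde (MDS-style) code, and the product W = XY is recovered from any subset of workers whose size equals the number of blocks. It demonstrates the recovery-threshold concept only, not the paper's secure or private (PSGPD) scheme.

import numpy as np

rng = np.random.default_rng(1)
m, n_workers = 4, 6                   # 4 row blocks of X; 6 workers tolerate 2 stragglers
X = rng.standard_normal((8, 5))       # 8 rows -> 4 blocks of 2 rows each
Y = rng.standard_normal((5, 3))
blocks = np.split(X, m)               # X_1 ... X_m

# Vandermonde generator over distinct real points: any m of its rows are invertible.
pts = np.arange(1.0, n_workers + 1)
G = np.vander(pts, m, increasing=True)          # shape (n_workers, m)

# Worker w multiplies its coded block (sum_i G[w, i] X_i) by Y.
coded = [sum(G[w, i] * blocks[i] for i in range(m)) @ Y for w in range(n_workers)]

# Suppose workers 1 and 4 straggle: decode from any m finished workers.
done = [0, 2, 3, 5]                             # recovery threshold = m = 4
C = np.stack([coded[w].ravel() for w in done])  # one coded product per row
U = np.linalg.solve(G[done, :], C)              # invert the code: rows are X_i @ Y
W = U.reshape(m, -1, Y.shape[1]).reshape(X.shape[0], Y.shape[1])
print(np.allclose(W, X @ Y))                    # True: full product recovered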

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 119
344 Prevalence of Breast Cancer Molecular Subtypes at a Tertiary Cancer Institute

Authors: Nahush Modak, Meena Pangarkar, Anand Pathak, Ankita Tamhane

Abstract:

Background: Breast cancer is a prominent cause of cancer and of mortality among women. This study presents the statistical analysis of a cohort of over 250 patients with breast cancer diagnosed by oncologists and subtyped using immunohistochemistry (IHC). IHC was performed using ER, PR, HER2 and Ki-67 antibodies. Materials and methods: Formalin-fixed, paraffin-embedded tissue samples were obtained surgically, and the standard protocol was followed for fixation, grossing, tissue processing, embedding, cutting and IHC. The Ventana Benchmark XT machine was used for automated IHC of the samples. The antibodies used were supplied by F. Hoffmann-La Roche Ltd. Statistical analysis was performed using SPSS for Windows. The statistical tests performed were the chi-squared test and correlation tests at p < .01. The raw data were collected and provided by the National Cancer Institute, Jamtha, India. Results: A chi-squared test of homogeneity was performed to check equality of distribution; Luminal B was the most prevalent molecular subtype of breast cancer at our institute. The worst prognostic indicators for breast cancer depend upon the expression of Ki-67 and HER2 protein in cancerous cells; our analysis at p < .01 showed significant dependence. No dependence of molecular subtype on age was observed; similarly, age was an independent variable with respect to Ki-67 expression. A chi-squared test performed on patients' human epidermal growth factor receptor 2 (HER2) statuses showed strong dependence between the percentage of Ki-67 expression and HER2 (+/-) status, which indicates that the Ki-67 value depends upon HER2 expression in cancerous cells (p < .01). Surprisingly, dependence was also observed between Ki-67 and PR (p < .01), which shows that progesterone receptor (PR) proteins are over-expressed when there is an elevation in the expression of the Ki-67 protein. Conclusion: We conclude that Luminal B is the most prevalent molecular subtype at the National Cancer Institute, Jamtha, India. No significant correlation was found between age and Ki-67 expression in any molecular subtype, and no dependence or correlation exists between patients’ age and molecular subtype. We also found that, in the cohort of 257 patients, no patient diagnosed with Luminal A showed a Ki-67 value >14%. Statistically, extremely significant values were observed for the dependence of PR+HER2- and PR-HER2+ scores on Ki-67 expression (p < .01). HER2 is an important prognostic factor in breast cancer, and the chi-squared test for HER2 and Ki-67 shows that Ki-67 expression depends upon HER2 status. Moreover, Ki-67 cannot be used as a standalone prognostic factor in breast cancer.
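
As a sketch of the chi-squared dependence testing described above, the following Python example applies a test of independence to a made-up 2x2 contingency table of HER2 status versus dichotomized Ki-67 expression; the counts are illustrative, not the study's data.

# Illustrative chi-squared test of independence (HER2 status vs. Ki-67 level).
import numpy as np
from scipy.stats import chi2_contingency

#                 Ki-67 <= 14%   Ki-67 > 14%
table = np.array([[60,           25],    # HER2-negative
                  [15,           40]])   # HER2-positive

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
# p < .01 would indicate that Ki-67 expression depends on HER2 status.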

Keywords: breast cancer molecular subtypes, correlation, immunohistochemistry, Ki-67 and HR, statistical analysis

Procedia PDF Downloads 118