Search results for: bare machine computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3906

276 Sound Selection for Gesture Sonification and Manipulation of Virtual Objects

Authors: Benjamin Bressolette, Sébastien Denjean, Vincent Roussarie, Mitsuko Aramaki, Sølvi Ystad, Richard Kronland-Martinet

Abstract:

New sensors and technologies – such as microphones, touchscreens or infrared sensors – are currently making their appearance in the automotive sector, introducing new kinds of Human-Machine Interfaces (HMIs). The interactions with such tools can be cognitively expensive and thus unsuitable for driving tasks. It could, for instance, be dangerous to use a touchscreen with visual feedback while driving, as it draws the driver’s visual attention away from the road. Furthermore, new technologies in car cockpits modify the interactions of the users with the central system. In particular, touchscreens are preferred to arrays of buttons to save space and for design purposes. However, the buttons’ tactile feedback is no longer available to the driver, which makes such interfaces more difficult to manipulate while driving. Gestures combined with auditory feedback might therefore constitute an interesting alternative for interacting with the HMI. Indeed, gestures can be performed without vision, which means that the driver’s visual attention can be fully dedicated to the driving task. The auditory feedback can inform the driver both about the task performed on the interface and about the gesture itself, which might compensate for the lack of tactile information. As audition is a relatively unused sense in automotive contexts, gesture sonification can contribute to reducing cognitive load through this multisensory exploitation. Our approach consists of using a virtual object (VO) to sonify the consequences of the gesture rather than the gesture itself. This approach is motivated by an ecological point of view: gestures do not make sound, but their consequences do. In this experiment, the aim was to identify efficient sound strategies for transmitting the dynamic information of VOs to users through sound. The swipe gesture was chosen for this purpose, as it is commonly used in current and new interfaces. We chose two VO parameters to sonify: the hand-VO distance and the VO velocity. Two kinds of sound parameters can be chosen to sonify the VO behavior: spectral or temporal parameters. Pitch and brightness were tested as spectral parameters, and amplitude modulation as a temporal parameter. Performances showed a positive effect of sound compared to a no-sound situation, revealing the usefulness of sounds to accomplish the task.
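
As a rough illustration of the mapping described above, the sketch below (Python; all ranges and constants are hypothetical, not the authors' synthesis engine) turns the two sonified VO parameters, hand-VO distance and VO velocity, into a short audio grain, using pitch and brightness as spectral strategies and amplitude modulation as the temporal one.

    import numpy as np

    def sonify_vo(distance, velocity, fs=44100, dur=0.05):
        """Map virtual-object state to one short audio grain.

        distance : hand-to-VO distance in metres (assumed 0..1)
        velocity : VO speed in m/s (assumed 0..2)
        Returns a mono sample buffer covering the next `dur` seconds.
        """
        t = np.linspace(0.0, dur, int(fs * dur), endpoint=False)
        # Spectral strategy 1: pitch rises as the VO gets closer.
        f0 = 220.0 + 660.0 * (1.0 - np.clip(distance, 0.0, 1.0))
        carrier = np.sin(2 * np.pi * f0 * t)
        # Spectral strategy 2: brightness (harmonic content) scales with velocity.
        brightness = np.clip(velocity / 2.0, 0.0, 1.0)
        carrier += brightness * 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
        # Temporal strategy: amplitude-modulation rate follows velocity.
        am = 0.5 * (1.0 + np.sin(2 * np.pi * (2.0 + 18.0 * brightness) * t))
        return 0.3 * am * carrier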

Keywords: auditory feedback, gesture sonification, sound perception, virtual object

Procedia PDF Downloads 302
275 Investigation of the Effects of Aerobic Exercise Programs on Hematological Parameters of Sedentary People

Authors: Sanjeev Kumar, Swati Choudhary

Abstract:

Background: A variety of studies warn that sedentary lifestyles can contribute to many preventable causes of death. This study was undertaken to determine the effects of two types of aerobic training programs on the erythrocytes, leukocytes, hemoglobin concentration (Hb), platelets, and hematocrit of sedentary people (N=60) aged 20 to 30 years. Methods: All the subjects were randomly divided into three groups, i.e., two experimental groups (aerobic dance and cardio fitness) and a control group, each comprising 10 males and 10 females. The experimental groups underwent 60 minutes of training 5 times a week for 12 weeks, whereas the control group did not participate in any training program beyond their daily routine. The aerobic dance group performed exercises such as step-touch, side-to-side, V-step, and hand and body movements, while the cardio fitness group exercised on modern fitness equipment such as a treadmill, elliptical trainer, stationary bike, and rowing machine. The rating of perceived exertion (RPE) scale developed by Gunnar Borg was used to monitor the intensity of the workout. The aerobic programs comprised low-impact (weeks 0-4, perceived exertion 6 to 12), moderate-impact (weeks 4-8, perceived exertion 12 to 16), and high-impact (weeks 8-12, perceived exertion 16 to 20) phases. Results: To test the effectiveness of the training programs, the paired t-test was used, and a significant difference (p<0.05) was observed in erythrocytes, hemoglobin concentration, platelets, and hematocrit, but no significant effect of training was found in leukocytes (p>0.05). The paired t-test also showed no effect of time in the control group in any of the cases (p>0.05). Analysis of covariance was then used to determine which program was more effective, and the F value was found significant for erythrocytes, hemoglobin concentration, platelets, and hematocrit (p<0.05). As the F value was significant for these hematological parameters, Fisher's least significant difference test was applied, and the post hoc mean comparison indicated that the experimental groups (aerobic dance and cardio fitness) differed significantly from the control group in erythrocytes, hemoglobin concentration, platelets, and hematocrit, while no significant difference was found between the aerobic dance and cardio fitness groups in any of the cases. Thus, it may be concluded that, in general, both aerobic training programs had adequate effects on all the hematological parameters except leukocytes.
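
A minimal sketch of the statistical pipeline just described, assuming a pandas DataFrame with hypothetical columns group, hb_pre, and hb_post for one marker (hemoglobin): the paired t-test checks the within-group training effect, and the ANCOVA compares groups while adjusting for the baseline value.

    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from scipy import stats

    def analyze(df):
        # Within-group effect of training: paired t-test on pre vs post values.
        for g, sub in df.groupby("group"):
            t, p = stats.ttest_rel(sub["hb_pre"], sub["hb_post"])
            print(f"{g}: paired t = {t:.2f}, p = {p:.4f}")
        # Between-group comparison adjusted for baseline: one-way ANCOVA with
        # the post value as response, group as factor, pre value as covariate.
        model = smf.ols("hb_post ~ C(group) + hb_pre", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))  # F test for the group factor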

Keywords: aerobic dance, cardio fitness, hematological variables, rating perceived exertion scale

Procedia PDF Downloads 274
274 Innovations and Challenges: Multimodal Learning in Cybersecurity

Authors: Tarek Saadawi, Rosario Gennaro, Jonathan Akeley

Abstract:

There is rapidly growing demand for professionals to fill positions in Cybersecurity. This is recognized as a national priority both by government agencies and the private sector. Cybersecurity is a very wide technical area which encompasses all measures that can be taken in an electronic system to prevent criminal or unauthorized use of data and resources. This requires defending computers, servers, networks, and their users from any kind of malicious attack. The need to address this challenge has been recognized globally but is particularly acute in the New York metropolitan area, home to some of the largest financial institutions in the world, which are prime targets of cyberattacks. In New York State alone, there are currently around 57,000 jobs in the Cybersecurity industry, with more than 23,000 unfilled positions. The Cybersecurity Program at City College is a collaboration between the Departments of Computer Science and Electrical Engineering. In Fall 2020, The City College of New York matriculated its first students in the Cybersecurity Master of Science program. The program was designed to fill gaps in the previous offerings and evolved out of an established partnership with Facebook on Cybersecurity education. City College has designed a program where courses, curricula, syllabi, materials, labs, etc., are developed in cooperation and coordination with industry whenever possible, ensuring that students graduating from the program will have the necessary background to segue seamlessly into industry jobs. The Cybersecurity Program has created multiple pathways for prospective students to obtain the necessary prerequisites to apply, in order to build a more diverse student population. The program can also be pursued on a part-time basis, which makes it available to working professionals. Since City College’s Cybersecurity M.S. program was established to equip students with the advanced technical skills needed to thrive in a high-demand, rapidly evolving field, it incorporates a range of pedagogical formats. From its outset, the Cybersecurity Program has sought to provide both the theoretical foundations necessary for meaningful work in the field and labs and applied learning projects aligned with the skillsets required by industry. These efforts have involved collaboration with outside organizations and with visiting professors designing new courses on topics such as Adversarial AI, Data Privacy, Secure Cloud Computing, and Blockchain. Although the program was initially designed with a single asynchronous course in the curriculum and the rest of the classes offered in person, the advent of the COVID-19 pandemic necessitated a move to fully online learning. The shift to online learning has provided lessons for future development, offering examples of some inherent advantages of the medium in addition to its drawbacks. This talk will address the structure of the newly implemented Cybersecurity Master’s Program and discuss the innovations, challenges, and possible future directions.

Keywords: cybersecurity, new york, city college, graduate degree, master of science

Procedia PDF Downloads 148
273 Postmortem Genetic Testing of Sudden and Unexpected Deaths Using Next-Generation Sequencing

Authors: Eriko Ochiai, Fumiko Satoh, Keiko Miyashita, Yu Kakimoto, Motoki Osawa

Abstract:

Sudden and unexpected deaths from unknown causes occur in infants and youths. Recently, molecular links between a portion of these deaths and several genetic diseases have been examined postmortem. For instance, hereditary long QT syndrome and Brugada syndrome are occasionally fatal through critical ventricular tachyarrhythmia. Because a large number of target genes are responsible for such diseases, conventional analysis using the Sanger method has been laborious. In this report, we attempted to analyze sudden deaths comprehensively using the next generation sequencing (NGS) technique. Multiplex PCR of subject DNA was performed using Ion AmpliSeq Library Kits 2.0 and the Ion AmpliSeq Inherited Disease Panel (Life Technologies). After the library was constructed by emulsion PCR, the amplicons were sequenced for 500 flows on the Ion Personal Genome Machine System (Life Technologies) according to the manufacturer's instructions. SNPs and indels were called from the sequence reads mapped to the hg19 reference sequence. This project has been approved by the ethical committee of Tokai University School of Medicine. As a representative case, molecular analysis of a 40-year-old male who had received a diagnosis of Brugada syndrome demonstrated a total of 584 SNPs or indels. Non-synonymous and frameshift nucleotide substitutions were selected in the coding regions of the heart-disease-related genes ANK2, AKAP9, CACNA1C, DSC2, KCNQ1, MYLK, SCN1B, and STARD3. In particular, a c.629T>C transition in exon 3 of the SCN1B gene, resulting in a leu210-to-pro (L210P) substitution, is predicted to be "damaging" by the SIFT program. Because the mutation has not been reported previously, it was unclear whether the substitution was pathogenic. Sudden death in which the cause cannot be determined constitutes one of the most important unsolved subjects in forensic pathology. The Ion AmpliSeq Inherited Disease Panel can amplify the exons of 328 genes at one time. We recognized the difficulty of selecting the true causal variant from a number of candidates, but postmortem genetic testing using NGS analysis deserves consideration as a diagnostic tool. We are now extending this analysis to suspected SIDS cases and young sudden death victims.
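
The variant triage described above (restrict to panel genes, keep protein-altering calls, prioritize SIFT-damaging ones) can be sketched as below; the tab-separated input and its column names (gene, effect, sift) are assumptions about an annotated variant export, not the authors' actual pipeline.

    import csv

    TARGET_GENES = {"ANK2", "AKAP9", "CACNA1C", "DSC2", "KCNQ1",
                    "MYLK", "SCN1B", "STARD3"}

    def candidate_variants(path):
        out = []
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh, delimiter="\t"):
                if row["gene"] not in TARGET_GENES:
                    continue
                # Keep only coding changes that alter the protein.
                if row["effect"] not in ("nonsynonymous", "frameshift"):
                    continue
                # Prioritize variants that SIFT predicts to be damaging.
                if row["sift"] == "damaging":
                    out.append(row)
        return out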

Keywords: postmortem genetic testing, sudden death, SIDS, next generation sequencing

Procedia PDF Downloads 360
272 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism

Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape

Abstract:

Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modelling was applied using machine learning algorithms to analyse motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15% compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism. These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.
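
A small sketch of how the reported gait metrics could be derived from the wearable data; the heel-strike event representation is an assumption, and the coefficient of variation stands in for the gait variability the abstract reports.

    import numpy as np

    def gait_features(heel_strikes, positions):
        """Summarize gait from heel-strike timestamps (s) and heel positions (m).

        heel_strikes : 1-D array of event times for one foot
        positions    : matching forward heel position at each event
        """
        cycle_times = np.diff(heel_strikes)      # stride-to-stride duration
        stride_lengths = np.diff(positions)      # distance covered per stride
        return {
            "mean_cycle_time": cycle_times.mean(),
            "cycle_cv": cycle_times.std() / cycle_times.mean(),   # variability
            "mean_stride": stride_lengths.mean(),
            "stride_cv": stride_lengths.std() / stride_lengths.mean(),
        }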

Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders

Procedia PDF Downloads 25
271 Model-Based Diagnostics of Multiple Tooth Cracks in Spur Gears

Authors: Ahmed Saeed Mohamed, Sadok Sassi, Mohammad Roshun Paurobally

Abstract:

Gears are important machine components that are widely used to transmit power and change speed in many rotating machines. Any breakdown of these vital components may cause severe disturbance to production and incur heavy financial losses. One of the most common causes of gear failure is the tooth fatigue crack. Early detection of tooth cracks is still a challenging task for engineers and maintenance personnel. So far, to analyze the vibration behavior of gears, different approaches have been tried based on theoretical developments, numerical simulations, or experimental investigations. The objective of this study was to develop a numerical model that could be used to simulate the effect of tooth cracks on the resulting vibrations and hence to permit early fault detection for gear transmission systems. Unlike the majority of published papers, where only one single crack has been considered, this work is more realistic, since it incorporates the possibility of multiple simultaneous cracks with different lengths. As cracks significantly alter the gear mesh stiffness, we performed a finite element analysis using SolidWorks software to determine the stiffness variation with respect to the angular position for different combinations of crack lengths. A simplified six-degrees-of-freedom non-linear lumped parameter model of a one-stage gear system is proposed to study the vibration of a pair of spur gears, with and without tooth cracks. The model takes several physical properties into account, including variable gear mesh stiffness and the effect of friction, but ignores the lubrication effect. The vibration simulation results of the gearbox were obtained via Matlab and Simulink and were found to be consistent with the results from previously published works. The effect of a single crack of varying severity was studied, and the resulting changes in the total mesh stiffness and the vibration response were observed and found to be comparable with what has been reported in previous studies. The effect of the crack length on various statistical time-domain parameters was considered, and the results show that these parameters were not equally sensitive to the crack percentage. Finally, multiple cracks were introduced at different locations, and the vibration response and the statistical parameters were obtained.
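
The statistical time-domain parameters mentioned above are typically indicators such as RMS, peak, crest factor, and kurtosis; the exact set used in the paper is not specified, so the following is a minimal sketch of such indicators computed from a simulated vibration signal.

    import numpy as np
    from scipy.stats import kurtosis

    def condition_indicators(x):
        """Common time-domain indicators used for gear-crack detection."""
        rms = np.sqrt(np.mean(x ** 2))
        return {
            "rms": rms,
            "peak": np.max(np.abs(x)),
            "crest_factor": np.max(np.abs(x)) / rms,
            "kurtosis": kurtosis(x, fisher=False),  # ~3 for a healthy (Gaussian) signal
        }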

Keywords: dynamic simulation, gear mesh stiffness, simultaneous tooth cracks, spur gear, vibration-based fault detection

Procedia PDF Downloads 212
270 Engine Thrust Estimation by Strain Gauging of Engine Mount Assembly

Authors: Rohit Vashistha, Amit Kumar Gupta, G. P. Ravishankar, Mahesh P. Padwale

Abstract:

Accurate thrust measurement is required for aircraft during takeoff and after ski-jump. In a developmental aircraft, takeoff from a ship is extremely critical, and the thrust produced by the engine should be known to the pilot before takeoff so that if the thrust produced is not sufficient, the takeoff can be aborted and an accident avoided. After ski-jump, the thrust produced by the engine must be known because the horizontal speed of the aircraft is less than the normal takeoff speed, and the engine should be able to produce enough thrust to bring the airframe to the nominal horizontal takeoff speed within the prescribed time limit. Contemporary low-bypass gas turbine engines generally have three mounts, where the two side mounts transfer the engine thrust to the airframe. The third mount takes only the weight component; it does not take any thrust component. In the present method of thrust estimation, strain gauging of the two side mounts is carried out, and the strain produced at various power settings is used to estimate the thrust produced by the engine. A quarter Wheatstone bridge is used to acquire the strain data. The engine mount assembly is tested in a universal testing machine to determine the equivalent elasticity of the assembly, and this elasticity value is used in the analytical approach for estimating engine thrust. The estimated thrust is compared with the test-bed load cell thrust data, and the experimental strain data are also compared with the strain data obtained from FEM analysis. Experimental setup: The strain gauges are mounted on the tapered portion of the engine mount sleeve at two diametrically opposite locations, both in the horizontal plane. In this way, the gauges do not pick up strain due to the weight of the engine (except negligible strain due to the material's Poisson's ratio) or hoop stress. Only the third-mount strain gauge shows strain when the engine is not running, i.e., strain due to the weight of the engine. When the engine starts running, all the load is taken by the side mounts: the strain gauge on the forward side of the sleeve shows a compressive strain, and the strain gauge on the rear side shows a tensile strain. Results and conclusion: The analytical calculation shows that the hoop stresses dominate the bending stress. The thrust estimated by strain gauging shows better accuracy at higher power settings than at lower power settings; the accuracy of the estimated thrust at the maximum power setting is 99.7%, whereas at lower power settings it is 78%.
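
A hedged sketch of the kind of analytical reduction described above; the variable names and the simple sigma = E * epsilon bookkeeping are illustrative assumptions, since the paper's actual formulation (including the dominant hoop-stress terms) is not given.

    def estimate_thrust(eps_fwd, eps_rear, e_eq, area):
        """Hypothetical reduction of mount strains to engine thrust.

        eps_fwd, eps_rear : micro-strain from the two diametrically opposite
                            gauges (forward gauge compressive, rear tensile)
        e_eq              : equivalent elasticity of the mount assembly (Pa),
                            from the universal-testing-machine test
        area              : load-bearing cross-section of the sleeve (m^2)
        """
        # Average the magnitudes to cancel bending; convert micro-strain to strain.
        eps = 0.5 * (abs(eps_fwd) + abs(eps_rear)) * 1e-6
        axial_force_per_mount = e_eq * eps * area   # sigma = E*eps, F = sigma*A
        return 2.0 * axial_force_per_mount          # two side mounts share thrust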

Keywords: engine mounts, finite elements analysis, strain gauge, stress

Procedia PDF Downloads 485
269 Experimental Investigation of Cutting Forces and Temperature in Bone Drilling

Authors: Vishwanath Mali, Hemant Warhatkar, Raju Pawade

Abstract:

Drilling of bone has always been challenging for surgeons due to the adverse effects it may impart to bone tissues. Force has to be applied manually by the surgeon while performing conventional bone drilling, which may lead to permanent death of bone tissues and nerves. During bone drilling, the temperature of the bone tissues can increase above 47 °C, causing thermal osteonecrosis that results in screw loosening and subsequent implant failure. An attempt has been made here to study the input drilling parameters and surgical drill bit geometry affecting bone health during bone drilling. A One Factor At a Time (OFAT) method was used to plan the experiments. The input drilling parameters studied were spindle speed and feed rate; the drill bit geometry parameters studied were point angle and helix angle. The output variables were drilling thrust force and bone temperature. The experiments were conducted on goat femur bone at a room temperature of 30 °C. A KISTLER cutting force dynamometer Type 9257BA was used to measure thrust forces, and NI LabVIEW software was used for continuous data acquisition of temperature. A fixture was made on a rapid prototyping (RPT) machine for holding the bone specimen while performing the drilling operation. Bone specimens were preserved in a deep freezer (LABTOP make) at -40 °C. For the drilling parameters, it was observed that at constant feed rate, thrust force and temperature both decrease as spindle speed increases, whereas at constant spindle speed, thrust force and temperature both increase as feed rate increases. For the drill bit geometry, at constant helix angle, thrust force and temperature increase as point angle increases, while at constant point angle, thrust force and temperature decrease as helix angle increases. Hence it is concluded that temperature increases as the thrust force increases. For the drilling parameters, the lowest thrust force and temperature, 35.55 N and 36.04 °C respectively, were recorded at a spindle speed of 2000 rpm and a feed rate of 0.04 mm/rev. For the drill bit geometry parameters, the lowest thrust force and temperature, 40.81 N and 34 °C respectively, were recorded at a point angle of 70° and a helix angle of 25°. Hence, to avoid thermal necrosis of bone, it is recommended to use higher spindle speed, lower feed rate, a low point angle, and a high helix angle. The hard nature of cortical bone contributes to a greater rise in temperature, whereas a considerable drop in temperature is observed during cancellous bone drilling.
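
Since OFAT varies one factor per experimental block while the others are held constant, the resulting log can be summarized per factor level; a minimal sketch with hypothetical column names:

    import pandas as pd

    # Hypothetical OFAT log: one factor varied per block, the rest held constant.
    # Columns: factor (e.g. "spindle_speed"), level, thrust_N, temp_C.
    def ofat_summary(df):
        return (df.groupby(["factor", "level"])[["thrust_N", "temp_C"]]
                  .mean()
                  .sort_index())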

Keywords: bone drilling, helix angle, point angle, thrust force, temperature, thermal necrosis

Procedia PDF Downloads 310
268 Generalized Up-downlink Transmission using Black-White Hole Entanglement Generated by Two-level System Circuit

Authors: Muhammad Arif Jalil, Xaythavay Luangvilay, Montree Bunruangses, Somchat Sonasang, Preecha Yupapin

Abstract:

Black and white holes form the entangled pair ⟨BH│WH⟩, where a white hole occurs when the particle moves at the same speed as light. The entangled black-white hole pair is at the center, with the radian between the gap. When the speed of particle motion is slower than light, the black hole is gravitational (positive gravity), and the white hole is smaller than the black hole. On the downstream side, the black hole outside the gap grows until the white hole disappears, which is the emptiness paradox. On the upstream side, when moving faster than light, white holes form time tunnels, with black holes becoming smaller; moving faster and further still, the black hole disappears and becomes a wormhole (singularity) that is only a white hole in emptiness. This research studies the use of black and white holes generated by a two-level system circuit as carriers for communication transmission, from which high capability and capacity of data transmission can be obtained. The black and white hole pair can be generated by the two-level system circuit when the speed of a particle on the circuit is equal to the speed of light. The black hole forms when the particle speed increases from slower than to equal to the light speed, while the white hole is established when the particle comes down from faster than light. They are bound as the entangled pair of signal and idler, ⟨Signal│Idler⟩, with the virtual one for the white hole, which has an angular displacement of half of π radian. A two-level system is made from an electronic circuit to create black and white holes bound by entangled bits that are immune to theft and cloning-free. The process starts by creating wave-particle behavior: when the particle speed equals that of light, the black hole is in the middle of the entangled pair, which forms the two-bit gate. The required information can be input into the system and wrapped by the black hole carrier. A time tunnel occurs when the wave-particle speed is faster than light, at which point the entangled pair collapses and the transmitted information is safely inside the time tunnel. The required time and space can be modulated via the input for the downlink operation. The downlink is established when the particle speed, given in frequency (energy) form, comes down and enters the entangled gap, where this time the white hole is established. The information with the required destination is wrapped by the white hole and retrieved by the clients at the destination. The black and white holes then disappear, and the information can be recovered and used.

Keywords: cloning free, time machine, teleportation, two-level system

Procedia PDF Downloads 76
267 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction

Authors: Mohammad Ghahramani, Fahimeh Saei Manesh

Abstract:

Winning a soccer game is based on thorough and deep analysis of the ongoing match. On the other hand, giant gambling companies are in vital need of such analysis to reduce their losses against their customers. In this research work, we perform deep, real-time analysis of every soccer match around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called “Analyst Masters.” First, we introduce various sources of information available for soccer analysis for teams around the world that helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of soccer matches and their odds, both pre-match and in-play, are represented as images versus time, including halftime. Local Binary Patterns (LBP) are then employed to extract features from these images. Our analyses reveal incredibly interesting features and rules once a soccer match has reached enough stability. For example, our “8-minute rule” implies that if Team A scores a goal and can maintain the result for at least 8 minutes, then a stable match would end in their favor. We could also accurately predict before the match whether fewer or more than 2.5 goals would be scored. We use Gradient Boosted Trees (GBT) to extract highly related features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks properties such as bettors’ and punters’ behavior and its statistical data before issuing the prediction. The proposed method was trained using 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can obtain over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market: top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
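
A minimal sketch of the LBP feature-extraction step on one match-statistics image, using scikit-image; the parameters P and R and the uniform-pattern histogram are common defaults, not necessarily the authors' settings.

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(img, P=8, R=1.0):
        """Uniform-LBP histogram of a match-statistics image (grayscale array)."""
        codes = local_binary_pattern(img, P, R, method="uniform")
        n_bins = P + 2   # P+1 uniform patterns plus one "non-uniform" bin
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist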

Keywords: soccer, analytics, machine learning, database

Procedia PDF Downloads 239
266 Effect of Laser Ablation OTR Films on the Storability of Endive and Pak Choi Baby Vegetables under Modified Atmosphere Conditions

Authors: In-Lee Choi, Min Jae Jeong, Jun Pill Baek, Ho-Min Kang

Abstract:

As vegetable consumption trends change, convenient forms such as fresh-cut vegetables, sprouts, and baby vegetables are increasingly used rather than whole vegetables. The baby vegetables selected here contain various functional compounds but have a short shelf life. This study was conducted to improve storability by using suitable laser ablation OTR (oxygen transmission rate) films. The baby vegetables used in this research, endive (Cichorium endivia L.) and pak choi (Brassica rapa chinensis), around 10 cm in height, were cultivated in a glass greenhouse for 3 weeks. Harvested endive and pak choi were stored at 8 ℃ for 5 days, packed in PP (polypropylene) containers covered with different laser ablation OTR films (DaeRyung Co., Ltd.) of 1,300 cc, 10,000 cc, 20,000 cc, and 40,000 cc/m2•day•atm, plus a control (perforated film), sealed with a heat sealing machine (SC200-IP, Kumkang, Korea). All treatments were replicated five times. Statistical analysis was carried out using Microsoft Excel 2010, and results were expressed with standard deviations. The maximum fresh weight loss rate of both baby vegetables was less than 0.3% in the treated films. In contrast, the control showed a weight loss rate of around 3.0% by the final storage day, with quantity decreasing accordingly. Endive showed maximum carbon dioxide contents below 2.0% in the 20,000 cc and 40,000 cc treatments. Oxygen content was maintained between 17 and 20% in endive and between 19 and 20% in pak choi. The ethylene concentration of both vegetables was slightly lower in the 20,000 cc treatment than in the others on the final storage day, without statistical significance. In the case of hardness, the 40,000 cc film showed a slightly higher value for both baby vegetables, without statistical significance. Visual quality was good at 10,000 cc and 20,000 cc for both endive and pak choi, and no off-flavor appeared in either vegetable. The chlorophyll (SPAD-502, Minolta, Japan) value of endive remained similar to the initial value in all treatments except 20,000 cc, which was slightly lower, while the chlorophyll value of pak choi decreased in all treatments compared with the initial value but did not differ significantly among treatments. Leaf color (CR-400, Minolta, Japan) changed significantly at 40,000 cc in endive. In the case of pak choi, all treatments started yellowing, with the Hunter b value increasing, and the control increased substantially. Based on these results, the 10,000 cc film was the most suitable packaging for storing endive, and the 20,000 cc film for pak choi, with good quality.

Keywords: carbon dioxide, shelf-life, visual quality, pak choi

Procedia PDF Downloads 790
265 Integration of “FAIR” Data Principles in Longitudinal Mental Health Research in Africa: Lessons from a Landscape Analysis

Authors: Bylhah Mugotitsa, Jim Todd, Agnes Kiragga, Jay Greenfield, Evans Omondi, Lukoye Atwoli, Reinpeter Momanyi

Abstract:

The INSPIRE network aims to build an open, ethical, sustainable, and FAIR (Findable, Accessible, Interoperable, Reusable) data science platform, particularly for longitudinal mental health (MH) data. While studies have been done at the clinical and population level, limitations in data and research still exist in LMICs, which pose a risk of underrepresentation of mental disorders. It is vital to examine the existing longitudinal MH data, focusing on how FAIR the datasets are. This landscape analysis aimed to provide both an overall level of evidence on the availability of longitudinal datasets and the degree of consistency in the longitudinal studies conducted. Utilizing prompts proved instrumental in streamlining the analysis process, facilitating access, crafting code snippets, and categorizing and analyzing extensive data repositories related to depression, anxiety, and psychosis in Africa. Leveraging artificial intelligence (AI), we filtered through over 18,000 scientific papers spanning from 1970 to 2023. This AI-driven approach enabled the identification of 228 longitudinal research papers meeting the inclusion criteria. Quality assurance revealed 10% incorrectly identified articles and 2 duplicates, underscoring the prevalence of longitudinal MH research in South Africa, with a focus on depression. From the analysis, evaluating data and metadata adherence to FAIR principles remains crucial for enhancing the accessibility and quality of MH research in Africa. While AI has the potential to enhance research processes, challenges such as privacy concerns and data security risks must be addressed, and ethical and equity considerations in data sharing and reuse are also vital. There is a need for collaborative efforts across disciplinary and national boundaries to improve the Findability and Accessibility of data. Current efforts should also focus on creating integrated data resources and tools to improve the Interoperability and Reusability of MH data. Practical steps for researchers include careful study planning, data preservation, machine-actionable metadata, and promoting data reuse to advance science and improve equity. Metrics and recognition should be established to incentivize adherence to FAIR principles in MH research.

Keywords: longitudinal mental health research, data sharing, fair data principles, Africa, landscape analysis

Procedia PDF Downloads 95
264 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis

Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante

Abstract:

The systems that record patient care information, known as Electronic Medical Records (EMR), and those that monitor the vital signs of patients, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Several lines of research have used data from EMRs and patient vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis in patients under vital signs monitoring. Sepsis is an organ dysfunction caused by a dysregulated patient response to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Preceding works usually combined medical, statistical, mathematical, and computational models to develop detection methods for early prediction, aiming at higher accuracies with the smallest number of variables. Among other techniques, we found studies using survival analysis, expert systems, machine learning, and deep learning that reached great results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated using the median of all patients' variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector), and the second derivative (acceleration vector) of the variables to evaluate their behavior, and we construct a prediction model based on a Long Short-Term Memory (LSTM) network that includes these derivatives as explanatory variables. The accuracy of the prediction 6 hours before the time of sepsis, considering only the vital signs, reached 83.24%; by including the position, velocity, and acceleration vectors, we obtained 94.96%. The data are collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
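
The derivative features feeding the LSTM can be sketched as below, assuming one (T, n) array of hourly vital signs per patient; the row-alignment choices are illustrative.

    import numpy as np

    def kinematic_features(vitals):
        """Stack position, velocity and acceleration of the patient trajectory.

        vitals : array of shape (T, n) -- one row per hour, one column per sign.
        Returns an array of shape (T-2, 3*n) ready to feed an LSTM.
        """
        pos = vitals[2:]                      # position in the n-dim space
        vel = np.diff(vitals, axis=0)[1:]     # first derivative per hour
        acc = np.diff(vitals, n=2, axis=0)    # second derivative per hour
        return np.concatenate([pos, vel, acc], axis=1)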

Keywords: dynamic analysis, long short-term memory, prediction, sepsis

Procedia PDF Downloads 126
263 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion

Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong

Abstract:

The detection of curvilinear structures often plays an important role in the analysis of images. In particular, it is considered a crucial step in the diagnosis of chronic respiratory diseases to localize the fissures in chest CT imagery, where the lung is divided into five lobes by fissures that are characterized by linear features in appearance. However, the characteristic linear features of the fissures are often subtle due to the high intensity variability, pathological deformation, or image noise involved in the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. Thus, it is desirable to enhance the linear features present in chest CT images so that the distinctiveness in the delineation of the lobes is improved. We propose a recursive diffusion process that prefers coherent features based on the analysis of the structure tensor in an anisotropic manner. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is often regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur geometrically significant structure of the features, degrading their discriminative power in the feature space. Thus, it is necessary to take into consideration the local structure of the features in scale and direction when computing the structure tensor. We apply an anisotropic diffusion that takes the scale and direction of the features into account in the computation of the structure tensor; the eigenanalysis of this tensor then provides the geometrical structure of the features and determines the shape of the anisotropic diffusion kernel. The recursive application of the anisotropic diffusion, with kernels whose shapes are derived from the structure tensor, leads to an anisotropic scale-space in which the geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. This recursive interaction between the anisotropic diffusion based on geometry-driven kernels and the computation of the structure tensor that determines the shape of the diffusion kernels yields a scale-space where the geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in chest CT imagery, where the fissures present curvilinear features and define the boundaries of the lobes. It is shown that our algorithm yields precise detection of the fissures while overcoming the subtlety of the characteristic linear features. The quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for the detection of fissures in chest CT in terms of false positive and true positive measures, and the receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MSIP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
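
A strongly simplified sketch of one recursion step: the structure tensor is computed with Gaussian regularization, and its eigenvalue gap modulates the diffusion strength so that diffusion is weak across strong structure. Note that this scalar-conductivity stand-in only approximates the tensor-valued (directional) kernel the paper proposes.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def diffusion_step(img, sigma=1.5, dt=0.1):
        """One step of structure-tensor-driven (coherence-preferring) diffusion."""
        Ix, Iy = sobel(img, axis=1), sobel(img, axis=0)
        # Structure tensor components, regularized by Gaussian smoothing.
        Jxx = gaussian_filter(Ix * Ix, sigma)
        Jxy = gaussian_filter(Ix * Iy, sigma)
        Jyy = gaussian_filter(Iy * Iy, sigma)
        # Eigenvalue gap of the 2x2 tensor measures local coherence.
        gap = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2)
        c = np.exp(-gap / (gap.mean() + 1e-8))   # small where features are strong
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        return img + dt * c * lap                # diffuse, preserving structure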

Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor

Procedia PDF Downloads 233
262 Application of Deep Learning and Ensemble Methods for Biomarker Discovery in Diabetic Nephropathy through Fibrosis and Propionate Metabolism Pathways

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Diabetic nephropathy (DN) is a major complication of diabetes, with fibrosis and propionate metabolism playing critical roles in its progression. Identifying biomarkers linked to these pathways may provide novel insights into DN diagnosis and treatment. This study aims to identify biomarkers associated with fibrosis and propionate metabolism in DN, to analyze the biological pathways and regulatory mechanisms of these biomarkers, and to develop a machine learning model to predict DN-related biomarkers and validate their functional roles. Publicly available transcriptome datasets related to DN (GSE96804 and GSE104948) were obtained from the GEO database (https://www.ncbi.nlm.nih.gov/gds), and 924 propionate metabolism-related genes (PMRGs) and 656 fibrosis-related genes (FRGs) were identified. The analysis began with the extraction of DN-differentially expressed genes (DN-DEGs) and propionate metabolism-related DEGs (PM-DEGs), followed by the intersection of these with fibrosis-related genes to identify key intersected genes. Instead of relying on traditional models, we employed a combination of deep neural networks (DNNs) and ensemble methods such as Gradient Boosting Machines (GBM) and XGBoost to enhance feature selection and biomarker discovery. Recursive feature elimination (RFE) was coupled with these advanced algorithms to refine the selection of the most critical biomarkers. Functional validation was conducted using convolutional neural networks (CNN) for gene set enrichment and immunoinfiltration analysis, revealing seven significant biomarkers: SLC37A4, ACOX2, GPD1, ACE2, SLC9A3, AGT, and PLG. These biomarkers are involved in critical biological processes such as fatty acid metabolism and glomerular development, providing a mechanistic link to DN progression. Furthermore, a TF–miRNA–mRNA regulatory network was constructed using natural language processing models to identify 8 transcription factors and 60 miRNAs that regulate these biomarkers, while a drug–gene interaction network revealed potential therapeutic targets such as UROKINASE–PLG and ATENOLOL–AGT. This integrative approach, leveraging deep learning and ensemble models, not only enhances the accuracy of biomarker discovery but also offers new perspectives on DN diagnosis and treatment, specifically targeting fibrosis and propionate metabolism pathways.
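
The RFE-around-boosting step can be sketched as follows, assuming a (samples x genes) expression matrix X and binary DN labels y; the hyperparameters are placeholders, and the xgboost package is assumed to be available.

    from sklearn.feature_selection import RFE
    from xgboost import XGBClassifier

    def select_biomarkers(X, y, n_features=7):
        """RFE wrapped around a gradient-boosted classifier (illustrative only)."""
        est = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
        # Drop 10% of the remaining features per iteration until n_features remain.
        rfe = RFE(est, n_features_to_select=n_features, step=0.1)
        rfe.fit(X, y)
        return rfe.support_   # boolean mask over the candidate genes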

Keywords: diabetic nephropathy, deep neural networks, gradient boosting machines (GBM), XGBoost

Procedia PDF Downloads 12
261 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems

Authors: Ibram Khalafalla Roshdy Shokry

Abstract:

This paper presents a carrier sense multiple access (CSMA) communication model based on an SoC design methodology. Such a model can be used to guide the modeling of complex wireless communication systems, and the use of such a communication model is therefore an important method in the construction of high-performance communication. SystemC has been selected because it provides a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication created via the modeling of the CSMA protocol can be used to achieve communication among all of the agents and to coordinate access to the shared medium (channel). Equipping vehicles with wireless communication capabilities is expected to be the key to the evolution to next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for the enhancement of Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. As a result, this study focuses on the evaluation of the actual performance of vehicular communication, with special attention to the effects of the real environment and of mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission system was used to test and evaluate the effect of transmission range in V2X communication; the evaluation of V2I and V2V communication takes the real effects of low and high mobility on transmission into account. Multiagent systems have received significant attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, as it directly influences their overall performance and scalability. This scholarly work presents an exploration of essential communication factors and conducts a comparative assessment of the various protocols utilized in multiagent systems, with emphasis on scrutinizing their strengths, weaknesses, and applicability across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multiagent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multiagent systems and their communication protocols.
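
Independently of the SystemC implementation, the medium-access logic being modeled can be sketched in a few lines; this toy p-persistent CSMA simulation (a Python stand-in, with all parameters illustrative) counts successful slots and collisions among contending agents.

    import random

    def csma_sim(n_agents=5, p=0.3, slots=10000):
        """Toy p-persistent CSMA over one shared channel: each agent with a
        pending frame transmits with probability p in each idle slot."""
        success = collision = 0
        for _ in range(slots):
            talkers = [a for a in range(n_agents) if random.random() < p]
            if len(talkers) == 1:
                success += 1        # exactly one sender: frame delivered
            elif len(talkers) > 1:
                collision += 1      # several sensed the channel idle at once
        return success / slots, collision / slots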

Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA

Procedia PDF Downloads 28
260 Design, Numerical Simulation, Fabrication and Physical Experimentation of the Tesla’s Cohesion Type Bladeless Turbine

Authors: M. Sivaramakrishnaiah, D. S. Nasan, P. V. Subhanjeneyulu, J. A. Sandeep Kumar, N. Sreenivasulu, B. V. Amarnath Reddy, B. Veeralingam

Abstract:

The design, numerical simulation, fabrication, and physical experimentation of Tesla's bladeless centripetal turbine for generating electrical power are presented in this research paper. Pressurized air combined with water via a nozzle system is made to pass tangentially through a set of parallel smooth discs, which imparts rotational motion to the discs fastened to a common shaft for power generation. The power generated depends upon the speed of the fluid leaving the nozzle inlet. Physically, due to the laminar boundary layer phenomenon at the smooth disc surface, the high-speed fluid layers away from the disc, moving against the low-speed fluid layers nearer to the disc, develop a tangential drag through viscous shear forces. This drags the nearer layers along with the faster layers, causing the disc to spin. SolidWorks design software, together with fluid mechanics and machine element design theories, was used to compute the mechanical design specifications of turbine parts such as the 48 mm diameter discs, common shaft, central exhaust, plenum chamber, and swappable nozzle inlets. ANSYS CFX 2018 was used for the numerical simulation of the physical phenomena encountered in the turbine's operation. When the numerical simulation and physical experimental results were compared, good agreement was found between them, both quantitatively and qualitatively. The sources of input and the size of the discs may affect the power generated and the turbine efficiency, respectively, and the results may change if the fluid flowing between the discs changes. Studies of inlet fluid pressure versus turbine efficiency and of the number of discs versus turbine power, based on both sets of results, were carried out to develop the relationships between the inlet and outlet parameters of the turbine. The present research work obtained turbine efficiencies in the range of 7-10%, and for this range the electrical power output generated was 50-60 W.
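
The efficiency figures above follow from the standard hydraulic power balance; a minimal sketch (variable names illustrative, not the paper's exact procedure):

    def turbine_efficiency(p_inlet_pa, q_m3s, p_elec_w):
        """Overall efficiency: electrical output over hydraulic input power.

        Uses the standard relation P_fluid = pressure * volumetric flow rate.
        """
        p_fluid = p_inlet_pa * q_m3s    # available fluid power (W)
        return p_elec_w / p_fluid       # e.g. ~0.07-0.10 per the reported range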

Keywords: tesla turbine, cohesion type bladeless turbine, boundary layer theory, tangential fluid flow, viscous and adhesive forces, plenum chamber, pico hydro systems

Procedia PDF Downloads 88
259 A Conceptual Model of the 'Driver – Highly Automated Vehicle' System

Authors: V. A. Dubovsky, V. V. Savchenko, A. A. Baryskevich

Abstract:

The current trend in the automotive industry towards automatic vehicles is creating new challenges related to human factors. This occurs because the driver is increasingly relieved of the need to be constantly involved in driving the vehicle, which can negatively impact his/her situation awareness when manual control is required and decrease driving skills and abilities. These new problems need to be studied in order to ensure road safety during the transition towards self-driving vehicles. For this purpose, it is important to develop an appropriate conceptual model of the interaction between the driver and the automated vehicle, which could serve as a theoretical basis for the development of mathematical and simulation models to explore different aspects of driver behaviour in different road situations. Well-known driver behaviour models describe the impact of different stages of the driver's cognitive process on driving performance but do not describe how the driver controls and adjusts his actions. A more complete description of the driver's cognitive process, including the evaluation of the results of his/her actions, will make it possible to model various aspects of the human factor in different road situations more accurately. This paper presents a conceptual model of the 'driver – highly automated vehicle' system based on P. K. Anokhin's theory of functional systems, which is a theoretical framework for describing internal processes in purposeful living systems based on such notions as the goal, and the desired and actual results, of purposeful activity. A central feature of the proposed model is a dynamic coupling mechanism between the driver's decision to perform a particular action and the changes in road conditions due to the driver's actions. This mechanism is based on the stage-by-stage evaluation of the deviations of the actual values of the driver's action results from the expected values. The overall functional structure of the highly automated vehicle in the proposed model includes a driver/vehicle/environment state analyzer to coordinate the interaction between driver and vehicle. The proposed conceptual model can be used as a framework to investigate different aspects of human factors in transitions between automated and manual driving for future improvements in driving safety, and for understanding how the driver-vehicle interface must be designed for comfort and safety. A major finding of this study is the demonstration that the theory of functional systems is promising and has the potential to describe the interaction of the driver with the vehicle and the environment.

Keywords: automated vehicle, driver behavior, human factors, human-machine system

Procedia PDF Downloads 147
258 The Quantum Theory of Music and Human Languages

Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories on language. This is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When this theory is applied to any text of a folk song in a tone language, one can piece together not only the exact melody, rhythm, and harmonies of that song, as if it were known in advance, but also the exact speech of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. With experimentation confirming the theorization, the author designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, music reading and writing software is used to collect the data extracted from the author's mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a structured song (chorus-verse) on the computer and requests from the machine a melody of blues, jazz, world music, variety, etc. The software runs, giving the option to choose harmonies, and the user then selects a melody.

Keywords: language, music, sciences, quantum entanglement

Procedia PDF Downloads 78
257 Modeling and Optimization of Sinker Electric Discharge Machining Process Parameters on AISI 4140 Alloy Steel by the Central Composite Rotatable Design Method

Authors: J. Satya Eswari, J. Sekhar Babu, Meena Murmu, Govardhan Bhat

Abstract:

Electrical Discharge Machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals, between an electrode tool and the part to be machined, immersed in dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse on time, interval time, and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. RSM involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions, although RSM is not free from problems when applied to multi-factor, multi-response situations. A design of experiments (DOE) technique was used to select the optimum machining conditions for machining AISI 4140 using EDM. The purpose of this paper is to determine the optimal factors of the electro-discharge machining (EDM) process and to investigate the feasibility of design-of-experiment techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of the optimized settings of key machining factors, namely pulse on time, gap voltage, flushing pressure, input current, and duty cycle, on material removal and surface roughness was carried out using a central composite design. The objective is to maximize the material removal rate (MRR). The central composite design data are used to develop second-order polynomial models with interaction terms, and the insignificant coefficients are eliminated from these models using the Student's t-test, with the F-test applied for goodness of fit. CCD is first used to determine the optimal factors of the EDM process for maximizing the MRR. The responses are further treated through an objective function to establish the same set of key machining factors that satisfy the optimization problem of the EDM process. The results demonstrate the better performance of CCD-data-based RSM for optimizing the electro-discharge machining (EDM) process.
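
The second-order RSM fit on CCD data can be sketched with statsmodels; the coded-factor column names (Ip for peak current, Ton for pulse on time, V for gap voltage) and the response column MRR are assumptions about the data layout.

    import statsmodels.formula.api as smf

    def fit_rsm(df):
        """Second-order polynomial with interaction terms, as used in RSM/CCRD."""
        model = smf.ols(
            "MRR ~ Ip + Ton + V + Ip:Ton + Ip:V + Ton:V"
            " + I(Ip**2) + I(Ton**2) + I(V**2)", data=df).fit()
        # Student t-tests on each coefficient appear in the summary; drop
        # insignificant terms, then check the F statistic for goodness of fit.
        print(model.summary())
        return model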

Keywords: electric discharge machining (EDM), modeling, optimization, CCRD

Procedia PDF Downloads 343
256 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting

Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero

Abstract:

In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in the processes of the artistic foundry with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost-wax technique of ceramic shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. This material is heated to melt out the PLA, leaving an empty mold that is later filled with the molten metal. It is verified that the PLA models reduce cost and time compared with hand modeling in wax. In addition, one can manufacture parts with 3D printing that are not possible to create with manual techniques. However, the sculptures created with this technique have a size limit: when pieces printed with PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP printer allows obtaining more detailed and smaller pieces than the FDM. Such small models are quite difficult and complex to melt using the lost-wax technique of ceramic shell casting, but, as alternatives, there are microcasting and electroforming, which specialize in creating small metal pieces such as jewelry. Microcasting is a variant of lost wax that consists of introducing the model into a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt out the model and fire them. Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to distribute the metal properly. Because microcasting requires expensive material and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; for this study, 3D-printed micro-sculptures are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for using 3D printers, both with PLA and with resin, and first tests are being done to validate the electroforming process for micro-sculptures printed in resin using a DLP printer.

Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling

Procedia PDF Downloads 135
255 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks

Authors: Sulemana Ibrahim

Abstract:

Food security remains a paramount global challenge, with vulnerable regions grappling with issues of hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimize supply chains, and enhance food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models that harness remote sensing data, historical crop yield records, and meteorological data to forecast crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies are intended to mitigate losses and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. The study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security, and the proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, providing actionable insights to mitigate the ordeal of food insecurity. The holistic approach converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
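
As a minimal sketch of the supply-chain optimization step, the following transportation linear program allocates shipments from producing regions to distribution hubs at minimum cost; the regions, capacities, and costs are illustrative assumptions, not the study's data.

```python
# Hedged sketch: a minimal transportation LP of the kind used for supply-chain
# optimization; all supplies, demands, and costs are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

supply = [300, 400]          # tons available at two producing regions
demand = [250, 200, 250]     # tons needed at three distribution hubs
cost = np.array([[4.0, 6.0, 9.0],   # transport cost per ton, region -> hub
                 [5.0, 3.0, 7.0]])

# Decision variables: x[i, j] = tons shipped from region i to hub j (flattened).
c = cost.flatten()
A_ub, b_ub = [], []
for i in range(2):           # each region ships no more than its supply
    row = np.zeros(6); row[i * 3:(i + 1) * 3] = 1
    A_ub.append(row); b_ub.append(supply[i])
A_eq, b_eq = [], []
for j in range(3):           # each hub receives exactly its demand
    col = np.zeros(6); col[j::3] = 1
    A_eq.append(col); b_eq.append(demand[j])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(2, 3), "minimum cost:", res.fun)
```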

Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks

Procedia PDF Downloads 63
254 Assessment of Biochemical Marker Profiles and Their Impact on Morbidity and Mortality of COVID-19 Patients in Tigray, Ethiopia

Authors: Teklay Gebrecherkos, Mahmud Abdulkadir

Abstract:

The emergence and subsequent rapid worldwide spread of the COVID-19 pandemic have posed a global crisis, with a tremendously increasing burden of infection, morbidity, and mortality risks. Recent studies have suggested that severe cases of COVID-19 are characterized by massive biochemical, hematological, and inflammatory alterations whose synergistic effect is estimated to progress to multiple organ damage and failure. In this regard, biochemical monitoring of COVID-19 patients, based on comprehensive laboratory assessments and findings, is expected to play a crucial role in effective clinical management and in improving the survival rates of patients. However, biochemical markers that can inform COVID-19 patient risk stratification and predict clinical outcomes are currently scarce. This study aims to investigate the profiles of common biochemical markers and their influence on the severity of COVID-19 infection in Tigray, Ethiopia. Methods: A laboratory-based cross-sectional study was conducted from July to August 2020 at the Quiha College of Engineering, Mekelle University COVID-19 isolation and treatment center. Sociodemographic and clinical data were collected using a structured questionnaire. Whole blood was collected from each study participant, and serum samples were separated after delivery to the laboratory. Hematological biomarkers were analyzed using a FACS count analyzer, while organ function tests and serum electrolytes were analyzed using ion-selective electrode methods on a Cobas 6000 series machine. Data were analyzed using SPSS version 20. Results: A total of 120 SARS-CoV-2 patients were enrolled during the study. The participants' ages ranged between 18 and 91 years, with a mean age of 52 (±108.8). The largest group (40%) of participants were aged 60 and above. Patients with multiple comorbidities developed severe COVID-19, though not statistically significantly so (p=0.34). Mann-Whitney U test analysis showed that laboratory markers such as neutrophil count (p=0.003), AST levels (p=0.050), serum creatinine (p<0.001), and serum sodium (p=0.015) were significantly associated with severe COVID-19 disease as compared to non-severe disease. Conclusion: The severity of COVID-19 was associated with higher age, elevated AST and creatinine, serum Na+, and elevated total neutrophil count. Thus, further studies need to be conducted to evaluate alterations of biochemical biomarkers and their impact on COVID-19.
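
For readers unfamiliar with the test used above, the following minimal sketch runs a Mann-Whitney U comparison between severe and non-severe groups on hypothetical creatinine values; the numbers are illustrative, not the study's patient data.

```python
# Hedged sketch: the Mann-Whitney U comparison described above, run on
# illustrative creatinine values (mg/dL) -- not the study's patient data.
from scipy.stats import mannwhitneyu

severe = [1.9, 2.4, 1.7, 3.1, 2.2, 2.8, 1.6]      # hypothetical severe group
non_severe = [0.8, 1.0, 0.9, 1.1, 0.7, 1.2, 0.9]  # hypothetical non-severe group

stat, p = mannwhitneyu(severe, non_severe, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")  # p < 0.05 would flag a group difference
```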

Keywords: COVID-19, biomarkers, mortality, Tigray, Ethiopia

Procedia PDF Downloads 45
253 Prevalence of Breast Cancer Molecular Subtypes at a Tertiary Cancer Institute

Authors: Nahush Modak, Meena Pangarkar, Anand Pathak, Ankita Tamhane

Abstract:

Background: Breast cancer is a leading cause of cancer and cancer-related mortality among women. This study presents the statistical analysis of a cohort of over 250 patients diagnosed with breast cancer by oncologists and subtyped using immunohistochemistry (IHC). IHC was performed using ER, PR, HER2, and Ki-67 antibodies. Materials and methods: Formalin-fixed, paraffin-embedded tissue samples were obtained surgically, and the standard protocol was followed for fixation, grossing, tissue processing, embedding, cutting, and IHC. The Ventana BenchMark XT machine was used for automated IHC of the samples. The antibodies used were supplied by F. Hoffmann-La Roche Ltd. Statistical analysis was performed using SPSS for Windows; the tests performed were the chi-squared test and correlation tests at p<.01. The raw data were collected and provided by the National Cancer Institute, Jamtha, India. Result: Luminal B was the most prevalent molecular subtype of breast cancer at our institute; a chi-squared test of homogeneity was performed to test for equality of distribution and confirmed this. The worst prognostic indicator for breast cancer depends upon the expression of Ki-67 and HER2 protein in cancerous cells. Our analysis was done at p<.01, and significant dependence was observed. There exists no dependence of molecular subtype on age; similarly, age is an independent variable with respect to Ki-67 expression. A chi-squared test was performed on the HER2 statuses of patients, and strong dependence was observed between the percentage of Ki-67 expression and HER2 (+/-) status, which shows that the value of Ki-67 depends upon HER2 expression in cancerous cells (p<.01). Surprisingly, dependence was also observed between Ki-67 and PR at p<.01, which suggests that progesterone receptor (PR) proteins are over-expressed when there is an elevation in Ki-67 expression. Conclusion: We conclude that Luminal B is the most prevalent molecular subtype at the National Cancer Institute, Jamtha, India. No significant correlation was found between age and Ki-67 expression in any molecular subtype, and no dependence or correlation exists between patients' age and molecular subtype. We also found that, when the diagnosis is Luminal A, no patient out of the cohort of 257 shows a Ki-67 value >14%. Statistically, extremely significant values were observed for the dependence of PR+HER2- and PR-HER2+ scores on Ki-67 expression (p<.01). HER2 is an important prognostic factor in breast cancer; the chi-squared test for HER2 and Ki-67 shows that the expression of Ki-67 depends upon HER2 status. Moreover, Ki-67 cannot be used as a standalone prognostic factor in breast cancer.
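
As a minimal sketch of the dependence testing described above, the following code applies a chi-squared test of independence to an illustrative HER2-by-Ki-67 contingency table; the counts are assumptions, not the institute's data.

```python
# Hedged sketch: chi-squared test of independence between HER2 status and
# dichotomized Ki-67 expression, using an illustrative contingency table
# (counts are assumptions, not the institute's data).
import numpy as np
from scipy.stats import chi2_contingency

#              Ki-67 <= 14%   Ki-67 > 14%
table = np.array([[60, 25],    # HER2-negative
                  [15, 45]])   # HER2-positive

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")  # compare against alpha = .01
```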

Keywords: breast cancer molecular subtypes, correlation, immunohistochemistry, Ki-67 and HR, statistical analysis

Procedia PDF Downloads 123
252 The Use of Technology in Theatrical Performances as a Tool of Audience's Engagement

Authors: Chrysoula Bousiouta

Abstract:

Throughout the history of theatre, technology has played an important role, both influencing the relationship between performance and audience and offering different kinds of experiences. The use of technology dates back to ancient times, with the introduction of artifacts such as the "deus ex machina" in ancient Greek theatre. Taking into account the key techniques and experiences used throughout history, this paper investigates how technology, through new media, influences contemporary theatre. In the context of this research, technology is defined as projections, audio environments, video projections, sensors, and tele-connections, all alongside the performance, challenging the audience's participation. The theoretical framework of the research covers, in addition to the history of theatre, the theory of the "experience economy," which took over from the service and goods economy. The research is based on the qualitative and comparative analysis of two case studies, Contact Theatre in Manchester (United Kingdom) and Bios in Athens (Greece). The data collection includes desk research and is complemented by semi-structured interviews. Building on the results of the research, one could claim that the intended experience of modern and contemporary theatre is that of engagement. In this context, technology, as defined above, plays a leading role in creating it. This experience passes through, and exists in the middle of, the realms of entertainment, education, estheticism, and escapism. Furthermore, it is observed that nowadays theatre is not only about acting but also about performing: performances are unfinished without the participation of the audience. Both case studies try to achieve the experience of engagement through practices that promote the attraction of attention, the increase of imagination, interaction, intimacy, and true activity. These practices are achieved through the script, the scenery, the language, and the environment of a performance. Contact and Bios consider technology an intimate tool for accomplishing the above, and they make extended use of it. The research compiles a notable record of the technological techniques that modern theatres use. The use of technology, inside or outside the limits of film techniques, helps to rivet the attention of the audience, to make performances enjoyable, to give the sense of the "unfinished," or to be used for things that take place around the spectators and force them to take action, becoming spect-actors. The advantage of technology is that it can be used as a hook for interaction at all stages of a performance. Further research in the field could involve exploring alternative ways of binding technology and theatre or analyzing how the performance is perceived through the use of technological artifacts.

Keywords: experience of engagement, interactive theatre, modern theatre, performance, technology

Procedia PDF Downloads 251
251 Part Variation Simulations: An Industrial Case Study with an Experimental Validation

Authors: Narendra Akhadkar, Silvestre Cano, Christophe Gourru

Abstract:

Injection-molded parts are widely used in power system protection products. One of the biggest challenges in an injection molding process is the shrinkage and warpage of the molded parts. All these geometrical variations may have an adverse effect on the quality of the product, its functionality, cost, and time-to-market. The situation becomes more challenging in the case of intricate shapes and in mass production using multi-cavity tools. To control the effects of shrinkage and warpage, it is very important to correctly identify the input parameters that could affect the product performance. With the advances in computer-aided engineering (CAE), different tools are available to simulate the injection molding process. For our case study, we used the MoldFlow Insight tool. Our aim is to predict the spread of the functional dimensions and geometrical variations of the part due to variations in input parameters such as material viscosity, packing pressure, mold temperature, melt temperature, and injection speed. The input parameters may vary during batch production or due to variations in the machine process settings. To perform an accurate product assembly variation simulation, the first step is to perform an individual part variation simulation to render realistic tolerance ranges. In this article, we present a method to simulate part variations arising from input parameter variation during batch production. The method is based on computer simulations and experimental validation using a full factorial design of experiments (DoE). The robustness of the simulation model is verified through a parameter-wise sensitivity analysis performed using simulations and experiments; all the results show a very good correlation in the material flow direction. There exists a non-linear interaction between the material and the input process variables. It is observed that parameters such as packing pressure, material, and mold temperature play an important role in the spread of the functional dimensions and geometrical variations. This method will allow us in the future to develop accurate and realistic virtual prototypes based on trusted simulated process variation and, therefore, to increase product quality and potentially decrease the time to market.
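
As a minimal sketch of the full factorial DoE underlying the method, the following code enumerates a two-level design over three assumed molding parameters and screens their main effects on a measured dimension; the levels and responses are illustrative, not the study's MoldFlow results.

```python
# Hedged sketch: building a full factorial DoE over molding input parameters and
# screening main effects on a measured dimension; levels and responses are
# illustrative assumptions, not the study's simulation output.
import itertools
import numpy as np

factors = {
    "packing_pressure_MPa": [60, 80],
    "mold_temp_C": [40, 60],
    "melt_temp_C": [220, 260],
}
runs = list(itertools.product(*factors.values()))  # 2^3 = 8 runs

# Hypothetical measured functional dimension (mm) for each run, in run order.
dim = np.array([25.02, 25.08, 24.97, 25.05, 25.10, 25.16, 25.04, 25.12])

# Main effect of each factor: mean response at high level minus at low level.
X = np.array(runs, dtype=float)
for k, name in enumerate(factors):
    high = dim[X[:, k] == max(factors[name])].mean()
    low = dim[X[:, k] == min(factors[name])].mean()
    print(f"{name}: main effect = {high - low:+.3f} mm")
```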

Keywords: correlation, molding process, tolerance, sensitivity analysis, variation simulation

Procedia PDF Downloads 179
250 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, the Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under study. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments is conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research (CIFAR-10) dataset. The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
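
A minimal sketch of this experimental setup, assuming a small ReLU CNN and a short training budget rather than the paper's exact architecture, could look as follows.

```python
# Hedged sketch of the optimizer comparison: a small ReLU CNN trained on
# CIFAR-10 under several optimizers. Architecture and epoch count are
# illustrative choices, not the paper's exact setup.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.cifar10.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

def make_cnn():
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),  # softmax output layer
    ])

for name in ["adam", "rmsprop", "adadelta", "adagrad", "sgd"]:
    model = make_cnn()  # fresh weights so each optimizer starts equally
    model.compile(optimizer=name, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_tr, y_tr, epochs=5, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_te, y_te, verbose=0)
    print(f"{name}: test accuracy = {acc:.3f}")
```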

Keywords: deep neural network, optimizers, RMsprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 127
249 Peptide-Based Platform for Differentiation of Antigenic Variations within Influenza Virus Subtypes (Flutype)

Authors: Henry Memczak, Marc Hovestaedt, Bernhard Ay, Sandra Saenger, Thorsten Wolff, Frank F. Bier

Abstract:

Influenza viruses cause flu epidemics every year and serious pandemics at larger time intervals. The only cost-effective protection against influenza is vaccination. Due to rapid mutation, new subtypes continuously appear, which requires annual reimmunization. For a correct vaccination recommendation, the circulating influenza strains have to be detected promptly and exactly and characterized according to their antigenic properties. During the 2016/17 flu season, a wrong vaccination recommendation was given because of the long time interval between the identification of the relevant influenza vaccine strains and the outbreak of the flu epidemic during the following winter. Due to such recurring incidents of vaccine mismatch, there is a great need to speed up the process chain from identifying the right vaccine strains to their administration. The monitoring of subtypes as part of this process chain is carried out by national reference laboratories within the WHO Global Influenza Surveillance and Response System (GISRS). To this end, thousands of viruses from patient samples (e.g., throat smears) are isolated and analyzed each year. Currently, this analysis involves complex and time-intensive (several weeks) animal experiments to produce specific hyperimmune sera in ferrets, which are necessary for the determination of the antigen profiles of circulating virus strains. These tests also present difficulties in standardization and reproducibility, which restricts the significance of the results. To replace this test, a peptide-based assay for influenza virus subtyping from corresponding virus samples was developed. The differentiation of the viruses takes place via a set of specifically designed peptidic recognition molecules that interact differently with the different influenza virus subtypes. The differentiation of influenza subtypes is performed by pattern recognition guided by machine learning algorithms, without any animal experiments. Synthetic peptides are immobilized in multiplex format on various platforms (e.g., 96-well microtiter plates, microarrays). Afterwards, the viruses are incubated and analyzed, comparing different signaling mechanisms and a variety of assay conditions. Differentiation of a range of influenza subtypes, including H1N1, H3N2, and H5N1, as well as fine differentiation of single strains within these subtypes, is possible using the peptide-based subtyping platform. Thereby, the platform could be capable of replacing the current antigenic characterization of influenza strains using ferret hyperimmune sera.
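
As a minimal sketch of the pattern recognition step, the following code trains a random forest classifier on synthetic peptide-binding signal patterns; the peptide count, subtype profiles, and noise level are all illustrative assumptions.

```python
# Hedged sketch: pattern recognition over multiplexed peptide-binding signals.
# Feature matrix rows are virus samples, columns are signal intensities from
# the peptide set; all numbers and labels here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_peptides = 12
subtypes = ["H1N1", "H3N2", "H5N1"]

# Simulate each subtype as a characteristic binding pattern plus assay noise.
X, y = [], []
for subtype in subtypes:
    pattern = rng.uniform(0.1, 1.0, n_peptides)       # subtype-specific profile
    for _ in range(30):                               # 30 replicate samples
        X.append(pattern + rng.normal(0, 0.08, n_peptides))
        y.append(subtype)
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```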

Keywords: antigenic characterization, influenza-binding peptides, influenza subtyping, influenza surveillance

Procedia PDF Downloads 158
248 New Suspension Mechanism for a Formula Car Using Camber Thrust

Authors: Shinji Kajiwara

Abstract:

The basic abilities of a vehicle are to "run," "turn," and "stop." Safety and comfort during a drive on various road surfaces and at various speeds depend on the performance of these basic abilities. Stability and maneuverability are vital in automotive engineering: stability is the ability of the vehicle to revert to a stable state when faced with crosswinds and irregular road conditions during a drive, while maneuverability is the ability of the vehicle to change direction swiftly based on the steering of the driver. The stability and maneuverability of a vehicle can together be defined as its driving stability. Since fossil-fueled vehicles are the main type of transportation today, the environmental factor in automotive engineering is also vital: by improving the fuel efficiency of the vehicle, the overall carbon emission will be reduced, thus reducing the effect of global warming and greenhouse gases on the Earth. Another main focus of automotive engineering is the safety performance of the vehicle, especially with the worrying increase in vehicle collisions every day. With better safety performance, every driver will be more confident driving every day. Next, let us focus on the "turn" ability of a vehicle. By improving this particular ability, the cornering limit of the vehicle can be improved, thus increasing stability and maneuverability. In order to improve the cornering limit, a study must be conducted to find the balance between the steering system, the stability of the vehicle, higher lateral acceleration, and cornering limit detection. The aim of this research is to study and develop a new suspension system that will boost the lateral acceleration of the vehicle and ultimately improve its cornering limit. This research will also study the environmental and stability factors of the new suspension system. The double wishbone suspension system is widely used in four-wheel vehicles, especially in high-cornering-performance sports cars and racing cars. The double wishbone design allows the engineer to carefully control the motion of the wheel by controlling parameters such as camber angle, caster angle, toe pattern, roll center height, scrub radius, scuff, and more. The development of the new suspension system will focus on its ability to optimize camber control and to improve the camber limit during cornering. The research will be carried out using a CAE analysis tool. Using this tool, we will develop a JSAE Formula machine equipped with the double wishbone system and with the new suspension system, and conduct simulations and studies on the performance of both suspension systems.
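
As a minimal sketch of why camber control matters for cornering force, the following code evaluates a linear tire model in which lateral force combines a slip-angle term with a camber-thrust term, F_y = C_alpha * alpha + C_gamma * gamma; the stiffness values are assumed, not measured tire data.

```python
# Hedged sketch: linear-range tire model with a camber-thrust contribution.
# Stiffness values are illustrative assumptions, not JSAE Formula tire data.
C_alpha = 800.0   # cornering stiffness, N/deg (assumed)
C_gamma = 80.0    # camber stiffness, N/deg (assumed; typically much smaller)

def lateral_force(slip_angle_deg, camber_toward_turn_deg):
    """Lateral tire force from slip angle plus camber thrust (linear range)."""
    return C_alpha * slip_angle_deg + C_gamma * camber_toward_turn_deg

# Leaning the wheel toward the turn raises lateral force at the same slip
# angle -- the effect the proposed suspension aims to exploit.
for camber in [0.0, 1.5, 3.0]:
    print(f"camber toward turn {camber:.1f} deg -> F_y at 3 deg slip: "
          f"{lateral_force(3.0, camber):.0f} N")
```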

Keywords: automobile, camber thrust, cornering force, suspension

Procedia PDF Downloads 323
247 Improved Soil and Snow Treatment with the Rapid Update Cycle Land-Surface Model for Regional and Global Weather Predictions

Authors: Tatiana G. Smirnova, Stan G. Benjamin

Abstract:

The Rapid Update Cycle (RUC) land-surface model (LSM) has been the land-surface component in several generations of operational weather prediction models at the National Centers for Environmental Prediction (NCEP) of the National Oceanic and Atmospheric Administration (NOAA). It was designed for short-range weather predictions with an emphasis on severe weather and was originally kept intentionally simple to avoid uncertainties from poorly known parameters. Nevertheless, the RUC LSM, when coupled with the hourly-assimilating atmospheric model, can produce a realistic evolution of time-varying soil moisture and temperature, as well as the evolution of snow cover on the ground surface. This result is possible only if the soil/vegetation/snow component of the coupled weather prediction model has sufficient skill to avoid long-term drift. The RUC LSM was first implemented in the operational NCEP Rapid Update Cycle (RUC) weather model in 1998 and later in the Weather Research and Forecasting (WRF)-based Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR). Being available to the international WRF community, it has been implemented in operational weather models in Austria, New Zealand, and Switzerland. Based on feedback from the US weather service offices and the international WRF community, and also based on our own validation, the RUC LSM has matured over the years. A sea-ice module was also added to the RUC LSM for surface predictions over Arctic sea ice. Other modifications include refinements to the snow model and a more accurate specification of albedo, roughness length, and other surface properties. At present, the RUC LSM is being tested in the regional application of the Unified Forecast System (UFS). The next-generation UFS-based regional Rapid Refresh FV3 Standalone (RRFS) model will replace the operational RAP and HRRR at NCEP. Over time, the RUC LSM has participated in several international model intercomparison projects to verify its skill using observed atmospheric forcing. ESM-SnowMIP was the most recent of these experiments, focused on the verification of snow models for open and forested regions. The simulations were performed for ten sites located in different climatic zones of the world, forced with observed atmospheric conditions. While most of the 26 participating models have more sophisticated snow parameterizations than RUC, the RUC LSM achieved a high ranking in simulations of both snow water equivalent and surface temperature. However, the ESM-SnowMIP experiment also revealed some issues in the RUC snow model, which will be addressed in this paper. One of them is the treatment of grid cells partially covered with snow. The RUC snow module computes the energy and moisture budgets of snow-covered and snow-free areas separately, aggregating the solutions at the end of each time step. Such treatment elevates the importance of computing the snow cover fraction in the model. Improvements to the original simplistic threshold-based approach have been implemented and tested both offline and in the coupled weather model. A detailed description of the changes to the snow cover fraction and other modifications to the RUC soil and snow parameterizations will be presented in this paper.
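
As a minimal sketch of the fractional snow-cover treatment described above, the following code blends separately computed snow-covered and snow-free fluxes by a snow cover fraction derived from snow water equivalent; the smooth SCF curve and its constants are illustrative, not the exact RUC LSM formulation.

```python
# Hedged sketch of fractional snow-cover treatment: a grid cell's fluxes are
# computed separately for snow-covered and snow-free areas and aggregated by
# snow cover fraction (SCF). The SCF curve and constants are illustrative.
import numpy as np

def snow_cover_fraction(swe_mm, swe_full=30.0):
    """Smooth SCF from snow water equivalent; replaces a hard threshold
    (SCF = 1 once SWE exceeds a critical value) with a gradual transition."""
    return np.tanh(swe_mm / swe_full)

def aggregate_flux(flux_snow, flux_snowfree, swe_mm):
    """Grid-cell mean flux as an SCF-weighted blend of the two solutions."""
    scf = snow_cover_fraction(swe_mm)
    return scf * flux_snow + (1.0 - scf) * flux_snowfree

# Example: sensible heat flux (W/m^2) over a partially snow-covered cell.
for swe in [0.0, 5.0, 15.0, 60.0]:
    print(f"SWE = {swe:4.1f} mm -> SCF = {snow_cover_fraction(swe):.2f}, "
          f"cell flux = {aggregate_flux(-10.0, 45.0, swe):+.1f} W/m^2")
```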

Keywords: land-surface models, weather prediction, hydrology, boundary-layer processes

Procedia PDF Downloads 89