Search results for: testing simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7783

583 Dynamic Capabilities and Disorganization: A Conceptual Exploration

Authors: Dinuka Herath, Shelley Harrington

Abstract:

This paper prompts debate about whether disorganization can be positioned as a mechanism that facilitates the creation and enactment of important dynamic capabilities within an organization. This article is a conceptual exploration of the link between dynamic capabilities and disorganization and presents the case for agent-based modelling as a viable methodological tool for exploring this link. Dynamic capabilities are those capabilities that an organization needs to sustain competitive advantage in complex environments. Disorganization is the process of breaking down restrictive organizational structures and routines that commonly reside in organizations in order to increase organizational performance. In the 20th century, disorganization was largely viewed as an undesirable phenomenon within an organization. However, the concept of disorganization has been revitalized and has garnered research interest in recent years due to studies which demonstrate some of its advantages to an organization. Furthermore, recent agent-based simulation studies have shown that disorganization can be managed and argue for disorganization to be viewed as an enabler of organizational productivity. Given that disorganization is a natural state, and given the fear this can create, this paper argues that instead of trying to ‘correct’ disorganization, it should be actively encouraged and given functional purpose. The study of dynamic capabilities emerged as a result of heightened dynamism, and the very nature of dynamism denotes a level of fluidity and flexibility, something which this paper argues many organizations do not truly foster due to a constrained commitment to organization and order. We argue in this paper that the very state of disorganization should be encouraged in order to develop the dynamic capabilities needed not only to deal with the complexities of the modern business environment but also to sustain competitive success. The significance of this paper stems from the fact that both dynamic capabilities and disorganization are concepts gaining prominence in their respective academic genres. Despite the attention each concept has received individually, no conceptual link has been established to depict how they actually interact with each other. We argue that the link between these two concepts presents a novel way of looking at organizational performance. In doing so, we explore the potential of these two concepts working in tandem to increase organizational productivity, which has significant implications for both academics and practitioners alike.

Keywords: agent-based modelling, disorganization, dynamic capabilities, performance

Procedia PDF Downloads 317
582 Foslip Loaded and CEA-Affimer Functionalised Silica Nanoparticles for Fluorescent Imaging of Colorectal Cancer Cells

Authors: Yazan S. Khaled, Shazana Shamsuddin, Jim Tiernan, Mike McPherson, Thomas Hughes, Paul Millner, David G. Jayne

Abstract:

Introduction: There is a need for real-time imaging of colorectal cancer (CRC) to allow surgery tailored to the disease stage. Fluorescence-guided laparoscopic imaging of primary colorectal cancer and the draining lymphatics would potentially bring stratified surgery into clinical practice and realign future CRC management to the needs of patients. Fluorescent nanoparticles can offer many advantages in terms of intra-operative imaging and therapy (theranostic) in comparison with traditional soluble reagents. Nanoparticles can be functionalised with diverse reagents and then targeted to the correct tissue using an antibody or Affimer (artificial binding protein). We aimed to develop and test fluorescent silica nanoparticles targeted against CRC using an anti-carcinoembryonic antigen (CEA) Affimer (Aff). Methods: Anti-CEA and control Myoglobin Affimer binders were subcloned into the expression vector pET11, followed by transformation into BL21 Star™ (DE3) E. coli. The expression of Affimer binders was induced using 0.1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). Cells were harvested, lysed and purified using nickel-chelating affinity chromatography. The photosensitiser Foslip (a soluble analogue of 5,10,15,20-tetra(m-hydroxyphenyl)chlorin) was incorporated into the core of silica nanoparticles using a water-in-oil microemulsion technique. Anti-CEA or control Affs were conjugated to the silica nanoparticle surface using the sulfosuccinimidyl-4-(N-maleimidomethyl)cyclohexane-1-carboxylate (sulfo-SMCC) chemical linker. Binding of CEA-Aff or control nanoparticles to colorectal cancer cells (LoVo, LS174T and HCT116) was quantified in vitro using confocal microscopy. Results: The molecular weight of the obtained Affimer bands was ~12.5 kDa, while the diameter of the functionalised silica nanoparticles was ~80 nm. CEA-Affimer-targeted nanoparticles demonstrated 9.4-, 5.8- and 2.5-fold greater fluorescence than controls in LoVo, LS174T and HCT116 cells, respectively (p < 0.002) for the single-slice analysis. A similar pattern of successful CEA-targeted fluorescence was observed in the maximum image projection analysis, with CEA-targeted nanoparticles demonstrating 4.1-, 2.9- and 2.4-fold greater fluorescence than control particles in LoVo, LS174T, and HCT116 cells, respectively (p < 0.0002). There was no significant difference in fluorescence for CEA-Affimer- vs. CEA-antibody-targeted nanoparticles. Conclusion: We are the first to demonstrate that Foslip-doped silica nanoparticles conjugated to anti-CEA Affimers via SMCC allowed tumour-cell-specific fluorescent targeting in vitro, and they have shown sufficient promise to justify testing in an animal model of colorectal cancer. CEA-Affimer appears to be a suitable targeting molecule to replace CEA-antibody. Targeted silica nanoparticles loaded with the Foslip photosensitiser are now being optimised to drive photodynamic killing via reactive oxygen generation.

Keywords: colorectal cancer, silica nanoparticles, Affimers, antibodies, imaging

Procedia PDF Downloads 240
581 Prevalence, Antimicrobial Susceptibility Pattern and Public Health Significance of Staphylococcus Aureus Isolated from Raw Red Meat at Butchery and Abattoir Houses in Mekelle, Northern Ethiopia

Authors: Haftay Abraha Tadesse

Abstract:

Background: Staphylococcus is a genus of globally distributed bacteria associated with several infections at different sites in humans and animals. They are among the most important causes of infections associated with the consumption of contaminated food. Objective: The objective of this study was to determine the prevalence, antimicrobial susceptibility patterns and public health significance of Staphylococcus aureus in raw meat from butchery and abattoir houses of Mekelle, Northern Ethiopia. Methodology: A cross-sectional study was conducted from April to October 2019. Socio-demographic and public-health-related data were collected using a predesigned questionnaire. The raw meat samples were collected aseptically in the butchery and abattoir houses and transported in an ice box to Mekelle University, College of Veterinary Sciences, for isolation and identification of Staphylococcus aureus. Antimicrobial susceptibility tests were performed by the disc diffusion method. Data obtained were cleaned and entered into STATA 22.0, and a logistic regression model with odds ratios was used to assess the association of risk factors with bacterial contamination. A p-value < 0.05 was considered statistically significant. Results: In the present study, 88 out of 250 samples (35.2%) were found to be contaminated with Staphylococcus aureus. Among the raw meat specimens, the positivity rate of Staphylococcus aureus was 37.6% (n=47) and 32.8% (n=41) in butchery and abattoir houses, respectively. Among the associated risk factors, not using gloves (AOR=0.222; 95% CI: 0.104-0.473), strict separation between clean and dirty areas (AOR=1.37; 95% CI: 0.66-2.86) and poor hand-washing habits (AOR=1.08; 95% CI: 0.35-3.35) were found to be statistically significant and associated with Staphylococcus aureus contamination. All thirty-seven Staphylococcus aureus isolates from butchery houses were checked and were (100%) sensitive to doxycycline, trimethoprim, gentamicin, sulphamethoxazole, amikacin, CN, co-trimoxazole and nitrofurantoin, whereas they showed resistance to cefotaxime (100%), ampicillin (87.5%), penicillin (75%), B (75%) and nalidixic acid (50%). On the other hand, all Staphylococcus aureus isolates from abattoir houses (100%, n=10) were sensitive to chloramphenicol, gentamicin and nitrofurantoin, whereas they showed 100% resistance to penicillin, B, AMX, ceftriaxone, ampicillin and cefotaxime. The overall multi-drug resistance pattern for Staphylococcus aureus was 90% and 100% in butchery and abattoir houses, respectively. Conclusion: Staphylococcus aureus was recovered from 35.2% of the raw meat samples collected from the butchery and abattoir houses. More has to be done to develop hand-washing behavior and to make safe water available in the butchery houses to reduce the burden of bacterial contamination. The results of the present findings highlight the need to implement protective measures against these levels of food contamination and to consider alternative drug options. The development of antimicrobial resistance is nearly always a result of repeated therapeutic and/or indiscriminate use of antimicrobials. Regular antimicrobial sensitivity testing helps to select effective antibiotics and to reduce the problem of drug resistance development towards commonly used antibiotics.
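
As an illustration of the risk-factor analysis described above, the sketch below shows how adjusted odds ratios (AORs) with 95% confidence intervals can be obtained from a logistic regression in Python with statsmodels. It is not the authors' Stata analysis; the variable names and data are synthetic placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 250
df = pd.DataFrame({
    "no_gloves":        rng.integers(0, 2, n),   # 1 = meat handler does not use gloves
    "poor_handwashing": rng.integers(0, 2, n),   # 1 = poor hand-washing habits
    "separation":       rng.integers(0, 2, n),   # 1 = clean/dirty areas strictly separated
})
# Synthetic outcome loosely tied to the predictors (for illustration only)
lin = -1.2 + 0.9 * df["no_gloves"] + 0.4 * df["poor_handwashing"] - 0.3 * df["separation"]
df["contaminated"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["no_gloves", "poor_handwashing", "separation"]])
res = sm.Logit(df["contaminated"], X).fit(disp=False)

aor = np.exp(res.params)        # adjusted odds ratios
ci = np.exp(res.conf_int())     # 95% confidence intervals on the OR scale
print(pd.concat([aor.rename("AOR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```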

Keywords: abattoir house, AMR, butchery house, S. aureus

Procedia PDF Downloads 98
580 Application of Biomimetic Approach in Optimizing Buildings Heat Regulating System Using Parametric Design Tools to Achieve Thermal Comfort in Indoor Spaces in Hot Arid Regions

Authors: Aya M. H. Eissa, Ayman H. A. Mahmoud

Abstract:

When it comes to energy-efficient thermal regulation systems, natural systems offer not only an inspirational source of innovative strategies but also sustainable and even regenerative ones. Using biomimetic design, an energy-efficient thermal regulation system can be developed. Although conventional design methods have achieved fairly efficient systems, they still have limitations which can be overcome by using parametric design software. Accordingly, the main objective of this study is to apply and assess the efficiency of heat regulation strategies inspired by termite mounds in residential buildings’ thermal regulation systems. Parametric design software is used to pave the way for further and more complex biomimetic design studies and implementations. A hot arid region is selected due to the deficiency of research in this climatic region. First, in the analysis phase, the affecting stimuli and the parameters to be optimized are set, mimicking the natural system. Then, based on climatic data and using the parametric design software Grasshopper, the building form and the heights and areas of openings are altered until an optimized solution is settled on. Finally, the efficiency of the optimized system is assessed in comparison with a conventional system: firstly, by indoor airflow and indoor temperature, obtained from an Ansys Fluent (CFD) simulation, and secondly, by the total solar radiation falling on the building envelope, calculated using Ladybug, a Grasshopper plugin. The results show an increase in the average indoor airflow speed from 0.5 m/s to 1.5 m/s. A slight decrease in temperature was also noticed. Finally, the total radiation was decreased by 4%. In conclusion, despite the fact that applying a single bio-inspired heat regulation strategy might not be enough to achieve an optimum system, the resulting system is more energy efficient than conventional ones, as it aids in achieving indoor comfort through passive techniques. This demonstrates the potential of parametric design software in biomimetic design.
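
The optimization loop described above runs inside Grasshopper, with Ladybug and Ansys Fluent providing the performance feedback. The sketch below is only a schematic stand-in for that workflow: a brute-force parametric search over hypothetical opening dimensions, with placeholder functions in place of the radiation and CFD results.

```python
import itertools

def solar_gain_proxy(area, height):
    """Placeholder for the Ladybug radiation result (kWh); purely illustrative."""
    return 120.0 + 40.0 * area - 5.0 * height

def airflow_proxy(area, height):
    """Placeholder for the CFD-derived average indoor airflow speed (m/s)."""
    return 0.3 + 0.5 * area + 0.1 * height

best = None
for area, height in itertools.product([0.5, 1.0, 1.5, 2.0], [1.0, 1.5, 2.0]):
    # Weighted objective: favour airflow, penalise solar gain
    score = airflow_proxy(area, height) - 0.01 * solar_gain_proxy(area, height)
    if best is None or score > best[0]:
        best = (score, area, height)

print(f"best opening: area = {best[1]} m^2, height = {best[2]} m (score = {best[0]:.2f})")
```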

Keywords: biomimicry, heat regulation systems, hot arid regions, parametric design, thermal comfort

Procedia PDF Downloads 294
579 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining the essential components of the system, and representing an appropriate law that can define the interactions between its components. Complex biological systems exhibit stochastic behaviour. Thus, probabilistic models are suitable for describing and analysing biological systems. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model; it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, which is based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking into account their computational time. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
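
To make the likelihood-free idea concrete, the sketch below shows ABC rejection sampling for a toy birth-death CTMC simulated with the Gillespie algorithm. It is a minimal illustration of the approach, not the Repressilator model or the PMCMC sampler used in the paper; the priors, summary statistic and tolerance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(k_birth, k_death, x0=0, t_end=10.0):
    """Simulate a toy birth-death CTMC with the Gillespie algorithm."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        rates = np.array([k_birth, k_death * x])
        total = rates.sum()
        if total <= 0.0:
            break
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < rates[0] / total else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

def summary(times, states, t_grid):
    """Summary statistic: the state sampled on a fixed time grid."""
    idx = np.searchsorted(times, t_grid, side="right") - 1
    return states[idx]

# "Observed" data generated with known ground-truth parameters
t_grid = np.linspace(0.0, 10.0, 21)
obs = summary(*gillespie_birth_death(k_birth=10.0, k_death=1.0), t_grid)

# ABC rejection: keep prior draws whose simulated summaries lie close to the data
accepted = []
eps = 4.0                                  # tolerance (illustrative)
for _ in range(5000):
    k_b = rng.uniform(0.0, 20.0)           # prior on the birth rate
    k_d = rng.uniform(0.1, 5.0)            # prior on the death rate
    sim = summary(*gillespie_birth_death(k_b, k_d), t_grid)
    if np.mean(np.abs(sim - obs)) < eps:
        accepted.append((k_b, k_d))

print(f"accepted {len(accepted)} of 5000 draws")
if accepted:
    print("approximate posterior mean (k_birth, k_death):", np.mean(accepted, axis=0))
```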

Keywords: Approximate Bayesian computation (ABC), Continuous-Time Markov Chains, Sequential Monte Carlo, Particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 202
578 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic

Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx

Abstract:

Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the ‘destruction’ of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been documented in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and foam rheology. The CFD simulation was performed with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) solver interFoam was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam. These simulations shed new light on the cone behavior within the foam and allow the computation of the shear, pressure, and velocity fields of the fluid, making it possible to better evaluate the efficiency of the cones as foam breakers. This study contributes to clarifying, at least in part, the mechanisms behind foam breaker performance using modern CFD techniques.
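
The foam rheology enters the simulation through the Herschel-Bulkley law. The sketch below evaluates the corresponding apparent kinematic viscosity as a function of shear rate, using a common low-shear regularisation (capping the viscosity at nu0, in the spirit of OpenFOAM's HerschelBulkley transport model); the parameter values are illustrative, not the rheology fitted in this study.

```python
import numpy as np

def herschel_bulkley_nu(gamma_dot, tau0, k, n, nu0):
    """Apparent kinematic viscosity of a Herschel-Bulkley fluid:
    nu = min(nu0, tau0 / gamma_dot + k * gamma_dot**(n - 1)),
    i.e. the viscosity is capped at nu0 at low shear rates."""
    gamma_dot = np.maximum(gamma_dot, 1.0e-9)   # avoid division by zero
    nu = tau0 / gamma_dot + k * gamma_dot ** (n - 1.0)
    return np.minimum(nu0, nu)

# Illustrative parameters (not the fitted foam rheology of this study)
shear_rates = np.logspace(-2, 3, 6)             # 1/s
nu = herschel_bulkley_nu(shear_rates, tau0=0.1, k=0.05, n=0.4, nu0=10.0)
for g, v in zip(shear_rates, nu):
    print(f"gamma_dot = {g:8.2f} 1/s  ->  nu = {v:.4f} m^2/s")
```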

Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM

Procedia PDF Downloads 204
577 Management of Mycotoxin Production and Fungicide Resistance by Targeting Stress Response System in Fungal Pathogens

Authors: Jong H. Kim, Kathleen L. Chan, Luisa W. Cheng

Abstract:

Control of fungal pathogens, such as foodborne mycotoxin producers, is problematic, as effective antimycotic agents are often very limited. Mycotoxin contamination significantly interferes with the safe production of foods or crops worldwide. Moreover, the expansion of fungal resistance to commercial drugs or fungicides is a global human health concern. Therefore, there is a persistent need to enhance the efficacy of commercial antimycotic agents or to develop new intervention strategies. Disruption of the cellular antioxidant system should be an effective method for pathogen control. Such disruption can be achieved with safe, redox-active compounds. Natural phenolic derivatives are potent redox cyclers that inhibit fungal growth through destabilization of the cellular antioxidant system. The goal of this study is to identify novel, redox-active compounds that disrupt the fungal antioxidant system. The identified compounds could also function as sensitizing agents to conventional antimycotics (i.e., chemosensitization) to improve antifungal efficacy. Various benzo derivatives were tested against fungal pathogens. Gene deletion mutants of the yeast Saccharomyces cerevisiae were used as model systems for identifying molecular targets of the benzo analogs. The efficacy of the identified compounds as potent antifungal agents or as chemosensitizing agents to commercial drugs or fungicides was examined with methods outlined by the Clinical and Laboratory Standards Institute or the European Committee on Antimicrobial Susceptibility Testing. Selected benzo derivatives possessed potent antifungal or antimycotoxigenic activity. Molecular analyses using S. cerevisiae mutants indicated that the antifungal activity of the benzo derivatives acts through disruption of the cellular antioxidant or cell wall integrity systems. Certain benzo analogs screened overcame the tolerance of Aspergillus signaling mutants, namely mitogen-activated protein kinase mutants, to the fungicide fludioxonil. Synergistic antifungal chemosensitization greatly lowered the minimum inhibitory or fungicidal concentrations of the test compounds, including inhibitors of mitochondrial respiration. Of note, salicylaldehyde is a potent antimycotic volatile that has some practical application as a fumigant. Altogether, benzo derivatives targeting the cellular antioxidant system of fungi (along with the cell wall integrity system) effectively suppress fungal growth. Candidate compounds possess antifungal, antimycotoxigenic or chemosensitizing capacity to augment the efficacy of commercial antifungals. Therefore, chemogenetic approaches can lead to the development of novel antifungal intervention strategies, which enhance the efficacy of established microbe intervention practices and overcome drug/fungicide resistance. Chemosensitization further reduces costs and alleviates the negative side effects associated with current antifungal treatments.

Keywords: antifungals, antioxidant system, benzo derivatives, chemosensitization

Procedia PDF Downloads 262
576 Comparison of Nutritional Status of Asthmatic vs Non-Asthmatic Adults

Authors: Ayesha Mushtaq

Abstract:

Asthma is a pulmonary disease in which blockade of the airway takes place due to inflammation as a response to certain allergens. Breathing trouble, cough, and dyspnea are among its symptoms. Several studies have indicated a significant effect on asthma of changes in dietary routines. Certain food items, such as oily foods, are known to cause an increase in the symptoms of asthma. Low dietary intake of fruits and vegetables may be important in relation to asthma prevalence. The objective of this study is to assess and compare the nutritional status of asthmatic and non-asthmatic patients. The significance of this study lies in the fact that it will help nutritionists arrange a feasible dietary routine for asthmatic patients. This research was conducted at the Pulmonology Department of the Pakistan Institute of Medical Sciences (PIMS), Islamabad, Pakistan. About three hundred thirty-four million people are affected by asthma worldwide. Pakistan has a rapidly urbanizing population, and asthma cases are increasingly common. Several studies suggest an increase in the asthmatic patient population due to improper diet. This is a cross-sectional study aimed at assessing the nutritional status of asthmatic and non-asthmatic patients attending the pulmonology department clinic at PIMS. These patients were aged between 20 and 60 years. A questionnaire was developed to assess the dietary patterns of these patients. The methodology included several sections. The first section was the socio-demographic profile, which included age, gender, monthly income and occupation. The next section covered anthropometric measurements, which included the weight, height and body mass index (BMI) of each individual. The third section covered biochemical attributes; for biochemical profiling, pulmonary function testing (PFT) was performed. In the next section, dietary habits were assessed by a food frequency questionnaire (FFQ), through which food habits and consumption patterns were assessed. The next section covered lifestyle data, in which each person's level of physical activity, sleep and smoking habits were assessed. The final section was statistical analysis: all the data obtained from the study were statistically analyzed and assessed. Most of the asthma patients were female, with above-normal weight or even obesity. Body mass index (BMI) was higher in asthma patients than in non-asthmatic ones. When nutritional values were assessed, it was found that these patients were low on certain nutrients and that their diet included more junk and oily food than healthy vegetables and fruits. Beverage intake was also included in the same assessment. It is evident from this study that nutritional status has a contributory effect on asthma. Therefore, patients on the verge of developing asthma, or those who have developed asthma, should focus on their diet, maintain good eating habits and take a healthy diet including fruits and vegetables rather than oily foods. Proper sleep may also contribute to the control of asthma.

Keywords: BMI, nutrition, PAL, diet

Procedia PDF Downloads 77
575 Investigation of Nucleation and Thermal Conductivity of Waxy Crude Oil on Pipe Wall via Particle Dynamics

Authors: Jinchen Cao, Tiantian Du

Abstract:

Because waxy crude oil readily crystallizes and deposits on the pipeline wall, it causes pipeline clogging and reduces oil and gas gathering and transmission efficiency. In this paper, a mesoscopic-scale dissipative particle dynamics (DPD) method is employed, and four pipe wall models are constructed: a smooth wall (SW), a hydroxylated wall (HW), a rough wall (RW), and a single-layer graphene wall (GW). Snapshots of the simulation output trajectories show that paraffin molecules interact with each other to form a network structure that constrains water molecules as their nucleation sites. Meanwhile, it is observed that the paraffin molecules on the near-wall side are adsorbed horizontally between the inter-lattice gaps of the solid wall. In the pressure range of 0-50 MPa, pressure changes have little effect on the affinity properties of the SW, HW, and GW walls; for the RW wall, however, the contact angle between paraffin wax and water molecules was found to decrease with increasing pressure, while the water molecules showed the opposite trend. This phenomenon is due to the change in pressure leading to the transition of paraffin wax molecules from an amorphous to a crystalline state. Meanwhile, the minimum crystalline phase pressure (MCPP) was proposed to describe the lowest pressure at which crystallization of paraffin molecules occurs. The maximum number of crystalline clusters formed by paraffin molecules at the MCPP in each system showed NSW (0.52 MPa) > NHW (0.55 MPa) > NRW (0.62 MPa) > NGW (0.75 MPa). The MCPP on the graphene surface, with the smallest number of clusters formed, indicates that the addition of graphene inhibited the crystallization of paraffin deposited on the wall surface. Finally, the thermal conductivity was calculated. The results show that on the near-wall side the thermal conductivity changes drastically due to the adsorption crystallization of paraffin waxes, while on the fluid side the thermal conductivity gradually stabilizes; the average thermal conductivities of the four wall systems were 0.254, 0.249, 0.218 and 0.188 W/(m·K). This study provides a theoretical basis for improving the transport efficiency and heat transfer characteristics of waxy crude oil in terms of wall type, wall roughness, and MCPP.
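
For reference, the sketch below implements the standard DPD pair force (conservative, dissipative and random contributions linked by the fluctuation-dissipation relation). It uses common reduced-unit parameters and is not the specific force field or bead mapping of the waxy-crude-oil model.

```python
import numpy as np

rng = np.random.default_rng(1)

def dpd_pair_force(r_i, r_j, v_i, v_j, a_ij=25.0, gamma=4.5, kBT=1.0,
                   r_c=1.0, dt=0.01):
    """Standard DPD pair force: conservative + dissipative + random terms.
    Parameter values are common reduced-unit defaults, not the bead model
    of the waxy-crude-oil study."""
    r_vec = r_i - r_j
    r = np.linalg.norm(r_vec)
    if r >= r_c or r == 0.0:
        return np.zeros(3)
    e = r_vec / r                       # unit vector from bead j to bead i
    w = 1.0 - r / r_c                   # weight w_R(r); w_D(r) = w_R(r)**2
    sigma = np.sqrt(2.0 * gamma * kBT)  # fluctuation-dissipation relation
    f_c = a_ij * w * e                                   # soft repulsion
    f_d = -gamma * w**2 * np.dot(e, v_i - v_j) * e       # pairwise friction
    f_r = sigma * w * rng.standard_normal() / np.sqrt(dt) * e  # thermal noise
    return f_c + f_d + f_r

# Example: force on bead i exerted by bead j
f = dpd_pair_force(np.array([0.0, 0.0, 0.0]), np.array([0.6, 0.0, 0.0]),
                   np.array([0.1, 0.0, 0.0]), np.array([-0.1, 0.0, 0.0]))
print("DPD pair force on bead i:", f)
```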

Keywords: waxy crude oil, thermal conductivity, crystallization, dissipative particle dynamics, MCPP

Procedia PDF Downloads 72
574 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique and has many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment. Therefore, particle damping can be applied in extreme temperature environments, where most conventional dampers would fail. It has been shown experimentally in many papers that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behaviors of the granular particles in each divided area of the damper container are the same, the contact force of the primary system with all particles can be considered to be equal to the product of the number of divided damper areas and the contact force of the primary system with the granular materials per divided area. This makes it possible to considerably reduce the calculation time. The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and the particle material influence the damper performance.
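
The acceleration idea can be summarised in a few lines: simulate one representative sub-volume with a spring-dashpot contact law and multiply the resulting wall force by the number of sub-volumes. The sketch below is a schematic illustration with made-up stiffness and damping values, not the authors' implementation.

```python
import numpy as np

def contact_force(overlap, rel_vel, k=1.0e5, c=50.0):
    """Linear spring-dashpot normal contact force common in DEM codes.
    overlap: penetration depth (m), rel_vel: relative normal velocity (m/s);
    k and c are illustrative stiffness (N/m) and damping (N*s/m) values."""
    if overlap <= 0.0:
        return 0.0
    return k * overlap + c * rel_vel

def total_wall_force(per_area_force, n_areas):
    """If the particles in each of the n_areas sub-volumes behave identically,
    the force on the primary system is the per-area force times n_areas."""
    return n_areas * per_area_force

# One representative sub-volume is simulated, then scaled to the whole cavity
f_one = contact_force(overlap=1.0e-4, rel_vel=0.05)
print("force from one sub-volume :", f_one, "N")
print("estimated total wall force:", total_wall_force(f_one, n_areas=8), "N")
```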

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 453
573 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem

Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly

Abstract:

We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints on to those architectures is needed to realize those commercial benefits.
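
The 1-hot encoding mentioned above can be illustrated on a single integer variable: one binary variable per allowed value, plus a quadratic penalty that forces exactly one of them to be selected. The sketch below builds such a QUBO and checks it by brute force; it is a toy illustration, not the embedding used on the D-Wave 2X.

```python
import itertools
import numpy as np

def one_hot_qubo(values, penalty=10.0):
    """QUBO for choosing one value of an integer variable via 1-hot encoding.
    Objective: minimise the chosen value.  The penalty term
    penalty * (sum_k x_k - 1)**2 enforces that exactly one bit is set
    (its constant offset 'penalty' is dropped)."""
    n = len(values)
    Q = np.zeros((n, n))
    for k, v in enumerate(values):
        Q[k, k] += v - penalty          # linear part: v_k * x_k - penalty * x_k
    for i, j in itertools.combinations(range(n), 2):
        Q[i, j] += 2.0 * penalty        # quadratic part: 2 * penalty * x_i * x_j
    return Q

def brute_force_minimum(Q):
    """Exhaustive search over bit strings (fine for toy sizes; an annealer or
    QPU would be used for realistic problems)."""
    n = Q.shape[0]
    best_bits, best_energy = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        energy = x @ Q @ x
        if energy < best_energy:
            best_bits, best_energy = bits, energy
    return best_bits, best_energy

values = [3, 7, 1, 5]                   # allowed values of the integer variable
bits, energy = brute_force_minimum(one_hot_qubo(values))
print("selected bits:", bits, "-> chosen value:", values[bits.index(1)])
```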

Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard

Procedia PDF Downloads 525
572 Transitional Separation Bubble over a Rounded Backward Facing Step Due to a Temporally Applied Very High Adverse Pressure Gradient Followed by a Slow Adverse Pressure Gradient Applied at Inlet of the Profile

Authors: Saikat Datta

Abstract:

Incompressible laminar time-varying flow over a rounded backward-facing step is investigated experimentally and through numerical simulation for a triangular piston motion at the inlet of a straight channel, with very high acceleration followed by a slow deceleration. The backward-facing step is an important test case, as it embodies important flow characteristics such as the separation point, reattachment length, and recirculation of the flow. A sliding piston imparts two successive triangular velocities at the inlet: constant acceleration from rest, 0≤t≤t0, and constant deceleration to rest, t0≤t…

Keywords: laminar boundary layer separation, rounded backward facing step, separation bubble, unsteady separation, unsteady vortex flows

Procedia PDF Downloads 66
571 A Single-Channel BSS-Based Method for Structural Health Monitoring of Civil Infrastructure under Environmental Variations

Authors: Yanjie Zhu, André Jesus, Irwanda Laory

Abstract:

Structural Health Monitoring (SHM), involving data acquisition, data interpretation and decision-making systems, aims to continuously monitor the structural performance of civil infrastructure under various in-service circumstances. The main value and purpose of SHM is identifying damage through the data interpretation system. Research on SHM has expanded in the last decades, and a large volume of data is recorded every day owing to the dramatic development of sensor techniques and certain progress in signal processing techniques. However, efficient and reliable data interpretation for damage detection under environmental variations is still a big challenge. Structural damage might be masked because variations in measured data can be the result of environmental variations. This research reports a novel method based on single-channel Blind Signal Separation (BSS), which extracts environmental effects from measured data directly, without any prior knowledge of the structural loading and environmental conditions. Despite successful applications in audio processing and biomedical research, BSS has never been used to detect damage under varying environmental conditions. The proposed method optimizes and combines Ensemble Empirical Mode Decomposition (EEMD), Principal Component Analysis (PCA) and Independent Component Analysis (ICA) to separate the structural responses due to different loading conditions from a single-channel input signal; the ICA is applied to the dimension-reduced output of the EEMD. A numerical simulation of a truss bridge, inspired by the New Joban Line Arakawa Railway Bridge, is used to validate this method. All results demonstrate that the single-channel BSS-based method can recover temperature effects from the mixed structural response recorded by a single sensor with convincing accuracy. This will be the foundation of further research on direct damage detection under varying environments.
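
A minimal sketch of the EEMD -> PCA -> ICA chain on a synthetic single-channel record is given below. The PyEMD package is assumed for EEMD and scikit-learn provides PCA and FastICA; the signal, component counts and trial number are illustrative and do not reproduce the truss-bridge simulation.

```python
import numpy as np
from PyEMD import EEMD                          # pip install EMD-signal (assumed)
from sklearn.decomposition import PCA, FastICA

t = np.linspace(0.0, 10.0, 2000)
# Synthetic single-sensor record: structural response + slow "temperature" drift + noise
signal = (np.sin(2 * np.pi * 3.0 * t)
          + 0.5 * np.sin(2 * np.pi * 0.05 * t)
          + 0.1 * np.random.default_rng(0).standard_normal(t.size))

# 1) EEMD expands the single channel into a multi-channel set of IMFs
imfs = EEMD(trials=50).eemd(signal)             # shape: (n_imfs, n_samples)

# 2) PCA reduces the IMF set to its dominant directions
scores = PCA(n_components=3).fit_transform(imfs.T)        # (n_samples, 3)

# 3) ICA separates statistically independent sources (e.g. loading vs. temperature)
sources = FastICA(n_components=3, random_state=0).fit_transform(scores)

print("IMFs:", imfs.shape, "-> independent sources:", sources.shape)
```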

Keywords: damage detection, ensemble empirical mode decomposition (EEMD), environmental variations, independent component analysis (ICA), principal component analysis (PCA), structural health monitoring (SHM)

Procedia PDF Downloads 304
570 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR

Authors: Ionut Vintu, Stefan Laible, Ruth Schulz

Abstract:

Agricultural robotics has been developing steadily over recent years, with the goal of reducing and even eliminating the pesticides used on crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots capable of driving in fields and performing crop-handling tasks is for robots to robustly detect the rows of plants. Recent work on autonomous driving between plant rows offers big robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability when it comes to the height of plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and is more adaptable. The main application is in tree nurseries. Here, plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated pose of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and to deal with exception handling, like row gaps, which are falsely detected as the end of a row. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing field coverage and the number of damaged plants, the method that uses a local map around the robot proved to perform the best, with 68% covered rows and 25% damaged plants. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot’s position inside the greater field. Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% damaged plants. Future work would focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges that the algorithm needs to overcome are fields where the height of the plants is too small for the plants to be detected, and fields where it is hard to distinguish between individual plants when they are overlapping. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
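
Row detection itself can be illustrated in a very reduced form: fit a line to candidate plant positions, projected onto the ground plane, with RANSAC so that clutter and gaps are treated as outliers. The sketch below is such a simplified 2D illustration; it is not one of the four point-cloud methods developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def ransac_line(points, n_iter=200, inlier_tol=0.05):
    """Fit a 2D line to noisy plant positions with RANSAC.
    points: (N, 2) array of x/y plant candidates projected to the ground plane.
    Returns (point_on_line, unit_direction, inlier_mask) of the best model."""
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Distance of every point to the candidate line through p with direction d
        diff = points - p
        dist = np.abs(diff[:, 0] * d[1] - diff[:, 1] * d[0])
        inliers = dist < inlier_tol
        if inliers.sum() > best[2].sum():
            best = (p, d, inliers)
    return best

# Synthetic row of plants along y = 0.5*x with noise, plus some clutter points
x = np.linspace(0.0, 5.0, 30)
row = np.column_stack([x, 0.5 * x + rng.normal(0.0, 0.02, x.size)])
clutter = rng.uniform([0.0, 0.0], [5.0, 3.0], size=(10, 2))
p, d, inliers = ransac_line(np.vstack([row, clutter]))
print(f"row direction ~ {d}, {inliers.sum()} inliers of {len(row) + len(clutter)} points")
```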

Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection

Procedia PDF Downloads 139
569 The Correlation between Eye Movements, Attentional Shifting, and Driving Simulator Performance among Adolescents with Attention Deficit Hyperactivity Disorder

Authors: Navah Z. Ratzon, Anat Keren, Shlomit Y. Greenberg

Abstract:

Car accidents are a problem worldwide. Adolescents’ involvement in car accidents is higher than that of the overall driving population. Researchers estimate the risk of accidents among adolescents with symptoms of attention-deficit/hyperactivity disorder (ADHD) to be 1.2 to 4 times higher than that of their peers. Individuals with ADHD exhibit unique patterns of eye movements and attentional shifts that play an important role in driving. In addition, deficiencies in cognitive and executive functions among adolescents with ADHD are likely to put them at greater risk for car accidents. Fifteen adolescents with ADHD and 17 matched controls participated in the study. Individuals from both groups attended local public schools and did not have a driver’s license. Participants’ mean age was 16.1 (SD=.23). As part of the experiment, they all completed a driving simulation session while their eye movements were monitored. Data were recorded by an eye tracker: the entire driving session was recorded, registering the participant’s exact gaze position directly on the screen. Eye movements and simulator data were analyzed using Matlab (Mathworks, USA). Participants’ cognitive and metacognitive abilities were evaluated as well. No correlation was found between saccade properties, regions of interest, and simulator performance in either group, although participants with ADHD allocated more visual scan time (25%, SD = .13%) to a smaller segment of the dashboard area, whereas controls scanned the monitor more evenly (15%, SD = .05%). The visual scan pattern found among participants with ADHD indicates a distinct pattern of engagement-disengagement of spatial attention compared to that of non-ADHD participants, as well as lower attention flexibility, which likely affects driving. Additionally, the lower the results on the cognitive tests, the worse the driving performance was. None of the participants had prior driving experience, yet participants with ADHD distinctly demonstrated difficulties in scanning their surroundings, which may impair driving. This stresses the need to consider intervention programs, before driving lessons begin, to help adolescents with ADHD acquire proper driving habits, avoid typical driving errors, and achieve safer driving.

Keywords: ADHD, attentional shifting, driving simulator, eye movements

Procedia PDF Downloads 329
568 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, there is a tremendous amount of real-time spatial data generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period, without taking into account the load on the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data: they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence by using the Matching algorithm. Then, we propose a new cost model used for database partitioning, for keeping the data amount of each partition within a more balanced limit, and for providing parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
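
To illustrate the flavour of attribute-based vertical partitioning, the sketch below greedily orders attributes so that neighbours have similar query-usage patterns (small Hamming distance) and then splits the sequence into two fragments. The usage matrix and the greedy chaining are toy stand-ins for the Matching algorithm and cost model of VPA-RTSBD.

```python
import numpy as np

# Attribute usage matrix: rows = queries, columns = attributes (1 = used)
usage = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0],
])
attrs = ["id", "geom", "speed", "timestamp", "owner"]   # hypothetical attribute names

def hamming(a, b):
    """Hamming distance between two attribute usage columns."""
    return int(np.sum(a != b))

def greedy_attribute_order(usage):
    """Greedily chain attributes so neighbours have similar usage patterns
    (small Hamming distance) -- a toy stand-in for the Matching algorithm."""
    remaining = list(range(usage.shape[1]))
    order = [remaining.pop(0)]
    while remaining:
        last = usage[:, order[-1]]
        nxt = min(remaining, key=lambda j: hamming(last, usage[:, j]))
        remaining.remove(nxt)
        order.append(nxt)
    return order

order = greedy_attribute_order(usage)
print("attribute sequence:", [attrs[j] for j in order])
# A vertical split point can then be chosen along this sequence, e.g. in half:
half = len(order) // 2
print("partition 1:", [attrs[j] for j in order[:half]])
print("partition 2:", [attrs[j] for j in order[half:]])
```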

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 157
567 Investigations on the Fatigue Behavior of Welded Details with Imperfections

Authors: Helen Bartsch, Markus Feldmann

Abstract:

The dimensioning of steel structures subject to fatigue loads, such as wind turbines, bridges, masts and towers, crane runways and weirs, or components in crane construction, is often dominated by the fatigue verification. The fatigue details defined by the welded connections, such as butt or cruciform joints, longitudinal welds, welded-on or welded-in stiffeners, etc., are decisive. In Europe, the verification is usually carried out according to EN 1993-1-9 on a nominal stress basis. The basis is the detail catalogue, which specifies the fatigue strength of the various weld and construction details according to fatigue classes. Until now, a relation between fatigue classes and weld imperfection sizes has not been included. Quality levels for imperfections in fusion-welded joints in steel, nickel, titanium and their alloys are regulated in EN ISO 5817, which, however, does not contain direct correlations to fatigue resistances. The question arises whether some imperfections might be tolerable to a certain extent, since they may be present in the test data used for detail classifications dating back decades. Although current standardization requires proof that imperfection size limits are satisfied, it would also be possible to tolerate welds with certain irregularities if these can be reliably quantified by non-destructive testing. Fabricators would be prepared to undertake careful and sustained weld inspection in view of the significant economic consequences of such unfavorable fatigue classes. This paper presents investigations on the fatigue behavior of common welded details containing imperfections. In contrast to the common nominal stress concept, local fatigue concepts were used to consider the true stress increase, i.e., local stresses at the weld toe and root. The actual shape of a weld comprising imperfections, e.g., gaps or undercuts, can be incorporated into the fatigue evaluation, usually on a numerical basis. With the help of the effective notch stress concept, the fatigue resistance of detailed local weld shapes is assessed. Validated numerical models serve to investigate notch factors of fatigue details with different geometries. By utilizing parametrized ABAQUS routines, detailed numerical studies have been performed. Depending on the shape and size of different weld irregularities, fatigue classes can be defined. Both load-carrying welded details, such as the cruciform joint, and non-load-carrying welded details, e.g., welded-on or welded-in stiffeners, are considered. The investigated imperfections include, among others, undercuts, excessive convexity, incorrect weld toe, excessive asymmetry and insufficient or excessive throat thickness. Comparisons are made of the impact of different imperfections on the different types of fatigue details. Moreover, the influence of a combination of crucial weld imperfections on the fatigue resistance is analyzed. With regard to the trend of increasing efficiency in steel construction, the overall aim of the investigations is to enable a more economical differentiation of fatigue details with regard to tolerance sizes. In the long term, the harmonization of design standards, execution standards and regulations for weld imperfections is intended.
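
For orientation, the nominal stress verification behind the fatigue classes follows an S-N relation of the form N = 2·10^6 · (Δσ_C/Δσ)^m with slope m = 3 for direct stress ranges. The sketch below evaluates this relation for two detail categories; cut-off and constant-amplitude limits are deliberately ignored, and the downgraded category is only a hypothetical example of an imperfection-dependent classification.

```python
def fatigue_life_cycles(delta_sigma, detail_category, m=3.0, n_ref=2.0e6):
    """Endurance from the nominal stress S-N relation used in EN 1993-1-9:
    N = n_ref * (delta_sigma_C / delta_sigma)**m, with slope m = 3 for
    direct stress ranges (cut-off limits are ignored in this sketch)."""
    return n_ref * (detail_category / delta_sigma) ** m

# Example: a detail of category 80 (delta_sigma_C = 80 MPa) under a 100 MPa stress range
print(f"category 80: {fatigue_life_cycles(100.0, 80.0):.3e} cycles")
# Hypothetically downgrading the same detail to category 71 because of an imperfection:
print(f"category 71: {fatigue_life_cycles(100.0, 71.0):.3e} cycles")
```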

Keywords: effective notch stress, fatigue, fatigue design, weld imperfections

Procedia PDF Downloads 260
566 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild traumatic brain injury (TBI) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem. An estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility to predict acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study. Testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8, 42.4). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1 when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting the utility of the test to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
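
The headline figures can be reproduced directly from the reported 2x2 counts; the short sketch below does exactly that (the function and variable names are ours, the numbers are those quoted in the abstract).

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and NPV from a 2x2 table of test results
    against the CT reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, npv

# Figures reported above: 120 CT-positive subjects (116 TBI-test positive),
# 1779 CT-negative subjects (713 TBI-test negative)
sens, spec, npv = diagnostic_metrics(tp=116, fn=4, tn=713, fp=1066)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}, NPV = {npv:.1%}")
```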

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 66
565 Elite Netball Players’ Perspectives on Long Term Athlete Development Programmes in South Africa

Authors: Petrus Louis Nolte

Abstract:

University sport in South Africa is not isolated from the complexity of the globalization and professionalization of sport, as it forms an integral part of the sport development environment in South Africa. In order to align their sport programmes with global and professional requirements, several universities opted to develop elite sport programmes; recruit specialized personnel such as coaches, administrators and athletes; and provide expert coaching, scientific and medical services, sports testing, fitness, technical and tactical expertise, sport psychological and rehabilitation support, academic guidance and career assistance, and student-athlete accommodation. In addition, universities provide administrative support and high-quality physical resources (training facilities) for the benefit of the overall South African sport system. Although it is not compulsory for universities to develop elite sport programmes to prepare their teams for competitions, elite competitions, such as the annual Varsity Sport and University Sport South Africa (USSA) competitions and local club competitions and leagues, provide platforms where universities not only compete but also deliver players for representative national netball teams. The aim of this study is therefore to describe the perceptions of players of the university elite netball programmes they were participating in. This study adopted a descriptive design with a quantitative approach, utilizing a self-structured questionnaire as the research technique. As this research formed part of a national research project for Netball South Africa (NSA) with a population of 172 national and provincial netball players, a sample of 92 university netball players was selected from the population. Content validity of the self-structured questionnaire was secured through a test-retest process, and construct validity through a member of the Statistical Consultation Services (STATCON) of the University of Johannesburg, who provided feedback on the structural format of the questionnaire. Reliability was measured utilising Cronbach's alpha at the p<0.005 level of significance; a reliability score of 0.87 was obtained. The research was approved by the Board of Netball South Africa, and ethical conduct was implemented according to the processes and procedures approved by the Ethics Committees of the Faculty of Health Sciences, University of Johannesburg, with clearance number REC-01-30-2019. From the results, it is evident that university elite netball programmes are professional, especially with regard to the employment of knowledgeable and competent coaches and technical officials such as team managers and sport sciences staff. These professionals have access to elite training facilities, support staff, and relatively large groups of elite players, all elements of an elite programme that could enhance the national federation’s (Netball South Africa) system. Universities could serve the dual purpose of acting as university netball clubs, as well as providing elite training services and facilities as performance hubs for national players.

Keywords: elite sport programmes, university netball, player experiences, Varsity Sport netball

Procedia PDF Downloads 149
564 Multiscale Process Modeling of Ceramic Matrix Composites

Authors: Marianna Maiaru, Gregory M. Odegard, Josh Kemppainen, Ivan Gallegos, Michael Olaya

Abstract:

Ceramic matrix composites (CMCs) are typically used in applications that require long-term mechanical integrity at elevated temperatures. CMCs are usually fabricated using a polymer precursor that is initially polymerized in situ with fiber reinforcement, followed by a series of cycles of pyrolysis to transform the polymer matrix into a rigid glass or ceramic. The pyrolysis step typically generates volatile gasses, which creates porosity within the polymer matrix phase of the composite. Subsequent cycles of monomer infusion, polymerization, and pyrolysis are often used to reduce the porosity and thus increase the durability of the composite. Because of the significant expense of such iterative processing cycles, new generations of CMCs with improved durability and manufacturability are difficult and expensive to develop using standard Edisonian approaches. The goal of this research is to develop a computational process-modeling-based approach that can be used to design the next generation of CMC materials with optimized material and processing parameters for maximum strength and efficient manufacturing. The process modeling incorporates computational modeling tools, including molecular dynamics (MD), to simulate the material at multiple length scales. Results from MD simulation are used to inform the continuum-level models to link molecular-level characteristics (material structure, temperature) to bulk-level performance (strength, residual stresses). Processing parameters are optimized such that process-induced residual stresses are minimized and laminate strength is maximized. The multiscale process modeling method developed with this research can play a key role in the development of future CMCs for high-temperature and high-strength applications. By combining multiscale computational tools and process modeling, new manufacturing parameters can be established for optimal fabrication and performance of CMCs for a wide range of applications.

Keywords: digital engineering, finite elements, manufacturing, molecular dynamics

Procedia PDF Downloads 98
563 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models

Authors: Benbiao Song, Yan Gao, Zhuo Liu

Abstract:

Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance, but few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different levels of geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based and MPFS algorithms, together with different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling condition and parameter association. In total, 5,760 simulations were run to quantify the relative contribution of each factor to the simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It is found that data density, geological trend and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach 90% when the channel sand width is up to 1.5 times the well spacing, under any condition, for the SIS and MPFS methods. When well density is low, incorporating a geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may make only a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model, whereas when geobodies are complex and data are insufficient, it is better to construct a set of robust geological trends than to rely on a reliable variogram function. For the object-based method, modeling accuracy does not increase with data density as markedly as for the SIS method, but the models keep a geologically reasonable appearance when data density is low. The MPFS method shows a trend similar to the SIS method, but SIS with a proper geological trend and a rational variogram may achieve better modeling accuracy than MPFS. This implies that the geological modeling strategy for a real reservoir case needs to be optimized through evaluation of the dataset, geological complexity, geological constraint information and the modeling objective.
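The modeling accuracy ratio itself is simple to compute once a realization and its prototype share the same grid. The sketch below, with invented facies arrays and function names rather than the authors’ code, averages the cell-by-cell facies match over ten realizations, mirroring the averaging described above.

```python
import numpy as np

def accuracy_ratio(realization: np.ndarray, prototype: np.ndarray) -> float:
    """Fraction of grid cells whose simulated facies code matches the prototype model."""
    return float(np.mean(realization == prototype))

# Hypothetical 2D facies grids (0 = floodplain, 1 = channel sand), for illustration only.
rng = np.random.default_rng(42)
prototype = (rng.random((100, 100)) < 0.3).astype(int)

ratios = []
for _ in range(10):  # ten realizations averaged per modeling condition, as described above
    noise = rng.random(prototype.shape) < 0.15          # 15% of cells mis-simulated (assumed)
    realization = np.where(noise, 1 - prototype, prototype)
    ratios.append(accuracy_ratio(realization, prototype))

print(f"mean accuracy ratio over 10 realizations: {np.mean(ratios):.2f}")
```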

Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram

Procedia PDF Downloads 264
562 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic

Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova

Abstract:

Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV-2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All the samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. We conducted a training process using a five-fold (outer) nested repeated (five times) ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as the case group and the non-Covid ARVI dataset as the control group to train and test various machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93; 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively. When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the support vector machine with a radial basis function kernel. Conclusion: ML models trained on MALDI-MS spectra of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as from symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, the ML models exhibited lower classification accuracy.
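For readers more familiar with Python than with caret, the sketch below reproduces the general shape of the training protocol (a five-fold outer loop wrapped around a five-times-repeated ten-fold inner tuning loop) for one of the learners, an RBF-kernel SVM, using scikit-learn and synthetic data standing in for the binned MALDI-MS spectra; the feature count, hyperparameter grid and generated data are assumptions for illustration, not the study’s configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     StratifiedKFold, cross_val_score)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for binned spectra of the Covid 2020 (108) vs non-Covid ARVI (71) groups.
X, y = make_classification(n_samples=179, n_features=300, n_informative=30,
                           weights=[0.4, 0.6], random_state=0)

inner = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)   # 5x repeated 10-fold
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)           # 5-fold outer loop

svm_r = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
tuned = GridSearchCV(svm_r,
                     param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
                     cv=inner, scoring="accuracy", n_jobs=-1)

scores = cross_val_score(tuned, X, y, cv=outer, scoring="accuracy")
print(f"nested-CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```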

Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification

Procedia PDF Downloads 108
561 Online-Scaffolding-Learning Tools to Improve First-Year Undergraduate Engineering Students’ Self-Regulated Learning Abilities

Authors: Chen Wang, Gerard Rowe

Abstract:

The number of undergraduate engineering students enrolled in university has been increasing rapidly in recent years, leading to challenges associated with increased student-instructor ratios and increased diversity in the academic preparedness of the entrants. An increased student-instructor ratio makes interaction between teachers and students more difficult, with the resulting student ‘anonymity’ known to be a risk to academic success. With increasing student numbers, there is also increasing diversity in the academic preparedness of students at entry to university. Conceptual understanding of the entrants has been quantified via diagnostic testing, with the results for the first-year course in electrical engineering showing significant conceptual misunderstandings among the entry cohort. The solution is clearly multi-faceted, but part of it likely involves placing greater demands on students to be masters of their own learning. In consequence, it is highly desirable that instructors help students to develop better self-regulated learning skills. A self-regulated learner is one who is capable of setting their own learning goals, monitoring their study processes, adopting and adjusting learning strategies, and reflecting on their own study achievements. The methods by which instructors might cultivate students’ self-regulated learning abilities are receiving increasing attention from instructors and researchers. The aim of this study was to help students fully understand their self-regulated learning skill levels and to provide targeted instruction to help them improve particular learning abilities in order to meet the curriculum requirements. As a survey tool, this research applied the Motivated Strategies for Learning Questionnaire (MSLQ) to collect first-year engineering students’ self-reported data on their cognitive abilities, motivational orientations and learning strategies. The MSLQ is a widely used questionnaire for assessing university students’ self-regulated learning skills. The questionnaire was offered online as part of the online-scaffolding-learning tools to develop students’ understanding of self-regulated learning theories and learning strategies. The online tools, which have been under development since 2015, are designed to help first-year students understand their self-regulated learning skill levels by providing prompt feedback after they complete the questionnaire. In addition, the online tool also supplies corresponding learning strategies to students if they want to improve specific learning skills. A total of 866 first-year engineering students enrolled in the first-year electrical engineering course were invited to participate in this research project. By the end of the course, 857 students had responded, and 738 of their questionnaires were considered valid. Analysis of these surveys showed that 66% of the students thought the online-scaffolding-learning tools helped significantly to improve their self-regulated learning abilities. It was particularly pleasing that 16.4% of the respondents thought the online-scaffolding-learning tools were extremely effective. A current thrust of our research is to investigate the relationships between students’ self-regulated learning abilities and their academic performance. Our results are being used by the course instructors as they revise the curriculum and pedagogy for this fundamental first-year engineering course, but the general principles we have identified are applicable to most first-year STEM courses.
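A minimal sketch of the kind of prompt feedback such a tool can return after the questionnaire is shown below; the subscale groupings, item indices and advice threshold are hypothetical placeholders, not the MSLQ’s official scoring key or the tool’s actual implementation.

```python
from statistics import mean

# Hypothetical item indices per subscale on a 7-point Likert scale (illustrative only).
SUBSCALES = {
    "self-efficacy": [0, 1, 2],
    "metacognitive self-regulation": [3, 4, 5],
    "time/study environment": [6, 7, 8],
}

def feedback(responses):
    """Return a per-subscale mean (1-7) and a simple suggestion if the score is low."""
    report = {}
    for name, items in SUBSCALES.items():
        score = mean(responses[i] for i in items)
        advice = "see the linked strategies for this skill" if score < 4.0 else "on track"
        report[name] = (round(score, 2), advice)
    return report

student = [6, 5, 6, 3, 4, 3, 2, 3, 4]   # one student's hypothetical answers
for subscale, (score, advice) in feedback(student).items():
    print(f"{subscale:32s} {score:4.2f}  {advice}")
```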

Keywords: academic preparedness, online-scaffolding-learning tool, self-regulated learning, STEM education

Procedia PDF Downloads 110
560 A Topology-Based Dynamic Repair Strategy for Enhancing Urban Road Network Resilience under Flooding

Authors: Xuhui Lin, Qiuchen Lu, Yi An, Tao Yang

Abstract:

As global climate change intensifies, extreme weather events such as floods increasingly threaten urban infrastructure, making the vulnerability of urban road networks a pressing issue. Existing static repair strategies fail to adapt to the rapid changes in road network conditions during flood events, leading to inefficient resource allocation and suboptimal recovery. The main research gap lies in the lack of repair strategies that consider both the dynamic characteristics of networks and the progression of flood propagation. This paper proposes a topology-based dynamic repair strategy that adjusts repair priorities based on real-time changes in flood propagation and traffic demand. Specifically, a novel method is developed to assess and enhance the resilience of urban road networks during flood events. The method combines road network topological analysis, flood propagation modelling, and traffic flow simulation, introducing a local importance metric to dynamically evaluate the significance of road segments across different spatial and temporal scales. Using London's road network and rainfall data as a case study, the effectiveness of this dynamic strategy is compared to traditional and Transport for London (TFL) strategies. The most significant highlight of the research is that the dynamic strategy substantially reduced the number of stranded vehicles across different traffic demand periods, improving efficiency by up to 35.2%. The advantage of this method lies in its ability to adapt in real-time to changes in network conditions, enabling more precise resource allocation and more efficient repair processes. This dynamic strategy offers significant value to urban planners, traffic management departments, and emergency response teams, helping them better respond to extreme weather events like floods, enhance overall urban resilience, and reduce economic losses and social impacts.
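The abstract does not give the formal definition of the local importance metric, so the following Python sketch is only one plausible reading: it ranks flooded segments for repair by their betweenness centrality within a small neighbourhood of the currently passable network, recomputed as the flood state changes. The toy grid network, flooded set and neighbourhood radius are assumptions for illustration.

```python
import networkx as nx

G = nx.grid_2d_graph(8, 8)                     # toy road network (nodes = intersections)
flooded = {(3, 3), (3, 4), (4, 4), (6, 1)}     # segments currently under water (assumed)

def local_importance(G, node, flooded, radius=2):
    """Betweenness of a flooded node within a small neighbourhood of the passable network."""
    passable = G.subgraph(n for n in G if n not in flooded or n == node)
    ego = nx.ego_graph(passable, node, radius=radius)
    return nx.betweenness_centrality(ego).get(node, 0.0)

# Re-rank repair priorities as the flood state changes (the dynamic element of the strategy).
priorities = sorted(flooded, key=lambda n: local_importance(G, n, flooded), reverse=True)
print("repair order:", priorities)
```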

Keywords: urban resilience, road networks, flood response, dynamic repair strategy, topological analysis

Procedia PDF Downloads 35
559 Increased Energy Efficiency and Improved Product Quality in Processing of Lithium Bearing Ores by Applying Fluidized-Bed Calcination Systems

Authors: Edgar Gasafi, Robert Pardemann, Linus Perander

Abstract:

For the production of lithium carbonate or hydroxide from lithium-bearing ores, a thermal activation (calcination/decrepitation) is required for the phase transition in the mineral to enable acid or soda leaching, respectively, in the downstream hydrometallurgical section. In this paper, traditional processing in the lithium industry is reviewed, and opportunities to reduce energy consumption and improve product quality and recovery rate are discussed. The conventional process approach is still based on rotary kiln calcination, a technology in use since the early days of lithium ore processing, albeit not significantly further developed since. A newer technology, at least for the lithium industry, is fluidized bed calcination. Decrepitation of lithium ore was investigated at Outotec’s Frankfurt Research Centre. Focusing on fluidized bed technology, a study of the major process parameters (temperature and residence time) was performed at laboratory and larger bench scale, aiming for optimal product quality for subsequent processing. The technical feasibility was confirmed for optimal process conditions at pilot scale (400 kg/h feed input), providing the basis for industrial process design. Based on the experimental results, a comprehensive Aspen Plus flowsheet simulation was developed to quantify mass and energy flows for the rotary kiln and fluidized bed systems. Results show a significant reduction in energy consumption and improved process performance in terms of temperature profile, product quality and plant footprint. The major conclusion is that a substantial reduction in energy consumption can be achieved in processing lithium-bearing ores by using fluidized-bed-based systems. At the same time, and unlike the rotary kiln process, accurate temperature and residence time control is ensured in fluidized-bed systems, leading to a homogeneous temperature profile in the reactor which prevents overheating and sintering of the solids and results in uniform product quality.
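As a back-of-the-envelope companion to the flow sheet results, the Python snippet below compares the specific heat demand of the two routes using a simple heat balance (sensible heat plus transition enthalpy, inflated by an assumed heat-loss fraction). Every number in it is an illustrative assumption, not a value from the Aspen Plus study.

```python
# Assumed (illustrative) property values for spodumene decrepitation -- not study figures.
CP_SPODUMENE = 1.1            # kJ/(kg*K), assumed mean specific heat of the ore
DH_TRANSITION = 650.0         # kJ/kg, assumed alpha->beta transition enthalpy
T_IN, T_CALC = 25.0, 1050.0   # feed and calcination temperatures, deg C (assumed)

def specific_heat_demand(loss_fraction):
    """Heat demand per kg of ore, inflated by an assumed plant heat-loss fraction."""
    q_useful = CP_SPODUMENE * (T_CALC - T_IN) + DH_TRANSITION   # kJ/kg
    return q_useful / (1.0 - loss_fraction)

for name, loss in (("rotary kiln (assumed 35% losses)", 0.35),
                   ("fluidized bed (assumed 15% losses)", 0.15)):
    print(f"{name:38s} {specific_heat_demand(loss) / 1000:.2f} MJ/kg ore")
```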

Keywords: calcination, decrepitation, fluidized bed, lithium, spodumene

Procedia PDF Downloads 230
558 Effect of Discharge Pressure Conditions on Flow Characteristics in Axial Piston Pump

Authors: Jonghyuk Yoon, Jongil Yoon, Seong-Gyo Chung

Abstract:

In many kinds of industries that usually need a large amount of power, an axial piston pump has been widely used as the main power source of a hydraulic system. The axial piston pump is a type of positive displacement pump that has several pistons in a circular array within a cylinder block. As the cylinder block and pistons rotate, since the exposed ends of the pistons are constrained to follow the surface of the swash plate, the pistons are driven to reciprocate axially and hydraulic power is produced. In the present study, a numerical simulation of a three-dimensional full model of the axial piston pump was carried out using a commercial CFD code (Ansys CFX 14.5). In order to take into account the compression and extension motion of the reciprocating pistons, moving boundary conditions were applied to that region as a function of the rotation angle. In addition, this pump, which uses hydraulic oil as the working fluid, is intentionally designed so that a small amount of oil leaks out in order to lubricate the moving parts. Since leakage could directly affect the pump efficiency, evaluating the effect of oil leakage is very important. In order to predict the effect of the oil leakage on the pump efficiency, we considered the leakage between the piston shoe and the swash plate by modeling a cylindrical feature at the end of the cylinder. In order to validate the numerical method used in this study, the numerical results for the flow rate at the discharge port were compared with experimental data, and good agreement between them was shown. Using the validated numerical method, the effect of the discharge pressure was also investigated. The results of the present study can provide useful information for small axial piston pumps used in many different manufacturing industries. Acknowledgement: This research was financially supported by the “Next-generation construction machinery component specialization complex development program” through the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT).
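The rotation-angle-dependent piston motion that drives the moving boundary condition follows directly from the swash plate geometry; a hedged Python sketch is given below with assumed geometry and shaft speed that are not those of the simulated pump.

```python
import math

# Illustrative, assumed geometry -- not the simulated pump's actual dimensions.
PITCH_RADIUS = 0.04                 # m, piston pitch circle radius (assumed)
SWASH_ANGLE = math.radians(15.0)    # swash plate angle (assumed)
OMEGA = 2 * math.pi * 1800 / 60     # shaft speed: 1800 rpm -> rad/s (assumed)

def piston_position(theta):
    """Axial displacement of one piston as a function of shaft rotation angle."""
    return PITCH_RADIUS * math.tan(SWASH_ANGLE) * (1.0 - math.cos(theta))

def piston_velocity(theta):
    """Axial velocity, the quantity prescribed on the moving boundary."""
    return PITCH_RADIUS * math.tan(SWASH_ANGLE) * math.sin(theta) * OMEGA

for deg in range(0, 361, 60):
    th = math.radians(deg)
    print(f"theta = {deg:3d} deg  x = {piston_position(th) * 1000:6.2f} mm  "
          f"v = {piston_velocity(th):7.3f} m/s")
```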

Keywords: axial piston pump, CFD, discharge pressure, hydraulic system, moving boundary condition, oil leaks

Procedia PDF Downloads 248
557 Design of Large Parallel Underground Openings in Himalayas: A Case Study of Desilting Chambers for Punatsangchhu-I, Bhutan

Authors: Kanupreiya, Rajani Sharma

Abstract:

Construction of a single underground structure is itself a challenging task, and it becomes more critical in tectonically active young mountains such as the Himalayas, which are highly anisotropic. The Himalayan geology mostly comprises incompetent and sheared rock mass, in addition to folds/faults, rock burst, and water ingress. Underground tunnels form the most essential and important structures in run-of-river hydroelectric projects. The Punatsangchhu I hydroelectric project (PHEP-I), Bhutan (1200 MW) is a run-of-river scheme which has four parallel underground desilting chambers. The Punatsangchhu River carries a large quantity of silt load during the monsoon season. Desilting chambers were provided to remove silt particles of size greater than or equal to 0.2 mm with 90% efficiency, thereby minimizing the rate of damage to turbines. These chambers are 330 m long, 18 m wide at the center and 23.87 m high, with a 5.87 m hopper portion. The geology of the desilting chambers was known from an exploratory drift, which exposed a low-dipping foliation joint and six joint sets. The RMR and Q values in this reach varied from 40 to 60 and 1 to 6, respectively. This paper describes the different rock engineering principles adopted for safe excavation and rock support of the moderately jointed, blocky and thinly foliated biotite gneiss. For the design of the rock support system of the desilting chambers, empirical and numerical analyses were adopted. Finite element analysis was carried out for cavern design and finalization of the pillar width using Phase2, a powerful tool for simulation of stage-wise excavation with simultaneous provision of the support system. As the geology of the region had seven joint sets in total, in addition to the FEM-based approach, safety factors for potentially unstable wedges were checked using UnWedge. The final support recommendations were based on continuous face mapping, numerical modelling, empirical calculations, and practical experience.
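Alongside the numerical models, per-wedge safety factors of the kind checked in UnWedge can be illustrated with a single planar sliding block and limit equilibrium. The Python sketch below is such an illustration only: the wedge geometry, joint strength and bolt capacity are assumed values, and the bolts are assumed to act normal to the sliding plane.

```python
import math

# All input values are assumed for illustration; they are not project design values.
GAMMA_ROCK = 27.0        # kN/m^3, unit weight of biotite gneiss (assumed)
VOLUME = 12.0            # m^3, wedge volume (assumed)
AREA = 9.0               # m^2, sliding plane area (assumed)
DIP = math.radians(35)   # sliding plane dip (assumed)
COHESION = 50.0          # kPa, joint cohesion (assumed)
PHI = math.radians(30)   # joint friction angle (assumed)
BOLT_FORCE = 150.0       # kN, capacity of installed bolts, assumed normal to the plane

W = GAMMA_ROCK * VOLUME                       # wedge weight, kN
driving = W * math.sin(DIP)                   # force pulling the wedge down-dip
resisting = (COHESION * AREA                  # cohesive resistance on the sliding plane
             + (W * math.cos(DIP) + BOLT_FORCE) * math.tan(PHI))  # frictional resistance

print(f"factor of safety = {resisting / driving:.2f}")
```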

Keywords: dam siltation, Himalayan geology, hydropower, rock support, numerical modelling

Procedia PDF Downloads 92
556 Research of Stalled Operational Modes of Axial-Flow Compressor for Diagnostics of Pre-Surge State

Authors: F. Mohammadsadeghi

Abstract:

Relevance of the research: Axial compressors are used in both aircraft engines and ground-based gas turbine engines. The compressor is considered to be one of the main gas turbine engine units, defining the absolute and relative performance indicators of the engine as a whole. Failure of the compressor often leads to drastic consequences; therefore, safe (stable) operation must be maintained when using an axial compressor. Currently, there is a tendency towards increasing the power, productivity, circumferential velocity and compression ratio of axial compressors in gas turbine engines for aircraft and ground-based applications, whereas the metal consumption of their structures tends to fall. This increases dynamic loads as well as the danger of damage to highly loaded compressor or engine structural elements due to transient processes. In the operating practice of aeronautical engineering and ground units with gas turbine drives, loss of operational stability of gas turbine engines is one of the relatively frequent failure causes and can lead to emergency situations. Surge is considered to be an absolute loss of stability and is one of the most dangerous and most frequently occurring types of instability. However detailed the research into this phenomenon has been, the development of measures for preventing surge before it occurs is still relevant. This is why the study of transient processes in axial compressors is necessary in order to provide efficient, stable and secure operation. The paper addresses the problem of improving the automatic control system by integrating anti-surge algorithms for the axial compressor of an aircraft gas turbine engine. The paper considers the dynamic exhaustion of the gas-dynamic stability of a compressor stage, results of numerical simulation of the airflow over the airfoil at design and stalling modes, and experimental research to form the criteria that identify the compressor state for pre-surge mode detection. The authors formulate basic approaches for developing surge prevention systems, i.e., algorithms that allow detecting surge onset and systems that implement the proposed algorithms.
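One widely used family of pre-surge criteria watches the growth of unsteady pressure fluctuations; the sketch below (synthetic signal, assumed sample rate, window and threshold, not the authors’ algorithm) raises an alarm when the sliding-window RMS exceeds a multiple of its baseline value.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 5000                                    # Hz, sample rate (assumed)
t = np.arange(0, 2.0, 1 / fs)
# Synthetic casing-pressure signal whose fluctuation amplitude grows towards surge.
amplitude = 0.02 + 0.3 * np.clip(t - 1.2, 0, None)
signal = 1.0 + amplitude * rng.standard_normal(t.size)

def sliding_rms(x, window):
    """RMS of the mean-removed signal over a sliding window."""
    x = x - x.mean()
    return np.sqrt(np.convolve(x**2, np.ones(window) / window, mode="same"))

rms = sliding_rms(signal, window=250)        # 50 ms window (assumed)
threshold = 3.0 * rms[: fs // 2].mean()      # baseline taken from the first 0.5 s (assumed)
alarm_idx = np.argmax(rms > threshold) if np.any(rms > threshold) else None
if alarm_idx is None:
    print("no pre-surge alarm")
else:
    print(f"pre-surge alarm raised at t = {alarm_idx / fs:.3f} s")
```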

Keywords: axial compressor, rotating stall, surge, unstable operation of gas turbine engine

Procedia PDF Downloads 410
555 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction

Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach

Abstract:

X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation-based X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnosis for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible using a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less demanding experimental setup. However, previous studies were done using a particular X-ray source (a liquid-metal-jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, by introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, this setup needs suitable algorithms for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural-network-based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require a large amount of training data to achieve high-quality reconstructions.
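A minimal illustration of window-based speckle tracking is given below: the local shift between a reference speckle image and a distorted image is recovered from the peak of their FFT-based cross-correlation. The synthetic images and rigid shift are stand-ins for real sandpaper speckle data, and this is not the tracking algorithm developed in the project.

```python
import numpy as np

rng = np.random.default_rng(3)
reference = rng.random((64, 64))                       # synthetic speckle pattern
true_shift = (2, -3)                                   # (rows, cols) displacement
sample = np.roll(reference, true_shift, axis=(0, 1))   # treat refraction as a rigid shift

def track_window(ref, img):
    """Return the (dy, dx) shift maximising the FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap peaks past the half-width into negative shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

print("recovered shift:", track_window(reference, sample))   # expect (2, -3)
```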

Keywords: micro-CT, neural networks, reconstruction, speckle-based X-ray phase contrast

Procedia PDF Downloads 257
554 A Player's Perspective of University Elite Netball Programmes in South Africa

Authors: Wim Hollander, Petrus Louis Nolte

Abstract:

University sport in South Africa is not isolated from the complexity of the globalization and professionalization of sport, as it forms an integral part of the sports development environment in South Africa. In order to align their sports programs with global and professional requirements, several universities opted to develop elite sports programs; recruit specialized personnel such as coaches, administrators, and athletes; provide expert coaching; scientific and medical services; sports testing; fitness, technical and tactical expertise; sport psychological and rehabilitation support; academic guidance and career assistance; and student-athlete accommodation. In addition, universities provide administrative support and high-quality physical resources (training facilities) for the benefit of the overall South African sport system. Although it is not compulsory for universities to develop elite sports programs to prepare their teams for competitions, elite competitions such as the annual Varsity Sport and University Sport South Africa (USSA) tournaments, local club competitions and leagues, and international university competitions provide platforms where universities not only compete but also deliver players for representative national netball teams. The aim of this study is, therefore, to describe the perceptions of players of the university elite netball programs they were participating in. This study adopted a descriptive design with a quantitative approach, utilizing a self-structured questionnaire as the research technique. As this research formed part of a national research project for NSA with a population of 172 national and provincial netball players, a sample of 92 university netball players was selected from the population. Content validity of the self-structured questionnaire was secured through a test-retest process, and construct validity through a member of the Statistical Consultation Services (STATCON) of the University of Johannesburg, who provided feedback on the structural format of the questionnaire. Reliability was measured utilizing Cronbach’s alpha at the p < 0.005 level of significance; a reliability score of 0.87 was obtained. The research was approved by the Board of Netball South Africa, and ethical conduct was implemented according to the processes and procedures approved by the Ethics Committees of the Faculty of Health Sciences, University of Johannesburg, with clearance number REC-01-30-2019. From the results, it is evident that university elite netball programs are professional, especially with regard to the employment of knowledgeable and competent coaches and technical officials such as team managers and sport science staff. These professionals have access to elite training facilities, support staff, and relatively large groups of elite players, all elements of an elite program that could enhance the national federation’s (Netball South Africa) system. Universities could thus serve the dual purpose of acting as university netball clubs and providing elite training services and facilities as performance hubs for national players.

Keywords: elite sport programmes, university netball, player experiences, varsity sport netball

Procedia PDF Downloads 166