Search results for: parent sensitivity
266 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit.
A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods to obtain markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability
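The sampling-based PF calculation the abstract refers to can be sketched in miniature. The snippet below is a hypothetical Monte-Carlo PF estimate, not the authors' model: the toy factor-of-safety expression and all distribution parameters are invented for illustration, and PF is simply the fraction of sampled slopes whose FS falls below 1.

```python
import random

def factor_of_safety(cohesion, friction, driving_stress):
    """Toy limit-equilibrium FS: resisting strength over driving stress."""
    return (cohesion + friction) / driving_stress

def monte_carlo_pf(n_trials=20000, seed=42):
    """Estimate PF as the fraction of sampled slopes with FS < 1.0."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        # Hypothetical input distributions; means/stdevs are illustrative only.
        cohesion = rng.gauss(50.0, 10.0)   # kPa
        friction = rng.gauss(30.0, 5.0)    # kPa-equivalent frictional resistance
        driving = rng.gauss(70.0, 8.0)     # kPa
        if factor_of_safety(cohesion, friction, driving) < 1.0:
            failures += 1
    return failures / n_trials

pf = monte_carlo_pf()
```

With these illustrative distributions, the analytic PF is about 0.23; the same machinery, fed site-specific distributions, is what a Latin-Hypercube or Monte-Carlo analysis automates.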
Procedia PDF Downloads 207
265 Body Fluids Identification by Raman Spectroscopy and Matrix-Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry
Authors: Huixia Shi, Can Hu, Jun Zhu, Hongling Guo, Haiyan Li, Hongyan Du
Abstract:
The identification of human body fluids during forensic investigations is a critical step in determining key details and presenting strong evidence in a criminal case. With the popularity of DNA profiling and improved detection technology, several questions must be resolved: whether a suspect’s DNA derived from saliva or semen, from menstrual or peripheral blood; whether a red substance or aged trace found at the scene is blood; and how to determine the individual contributors to mixed stains. In recent years, molecular approaches based on mRNA, miRNA, DNA methylation and microbial markers have been developing rapidly, but they are expensive, time-consuming, and destructive. Physicochemical methods such as scanning electron microscopy/energy spectroscopy and X-ray fluorescence are frequently utilized, but their results reveal only one or two characteristics of the fluid itself and fail for unknown or mixed body fluid stains. This paper focuses on using two chemical methods, Raman spectroscopy and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, to discriminate peripheral blood, menstrual blood, semen, saliva, vaginal secretions, urine and sweat. Firstly, the non-destructive, confirmatory, convenient and fast Raman spectroscopy method, combined with the more accurate matrix-assisted laser desorption/ionization time-of-flight mass spectrometry method, can fully distinguish the body fluids from one another. Secondly, 11 spectral signatures and specific metabolic molecules were obtained from the analysis of 70 samples. Thirdly, the Raman results showed that peripheral blood and menstrual blood, as well as saliva and vaginal secretions, have highly similar spectroscopic features. Advanced statistical analysis of the multiple Raman spectra is required to classify one from another.
On the other hand, lactic acid appears to differentiate peripheral and menstrual blood when detected by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, but it is not a specific metabolic molecule; more sensitive ones will be analyzed in a future study. These results demonstrate the great potential of the developed chemical methods for forensic applications, although more work is needed for method validation.
Keywords: body fluids, identification, Raman spectroscopy, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry
264 User Experience in Relation to Eye Tracking Behaviour in VR Gallery
Authors: Veslava Osinska, Adam Szalach, Dominik Piotrowski
Abstract:
Contemporary VR technologies allow users to explore virtual 3D spaces where they can work, socialize, learn, and play. Users’ interaction with the GUI and the pictures displayed involves perceptual and cognitive processes that can be monitored using neuroadaptive technologies. These modalities provide valuable information about the users’ intentions, situational interpretations, and emotional states, allowing an application or interface to be adapted accordingly. Virtual galleries outfitted with specialized assets were designed using the Unity engine within the BITSCOPE project, in the frame of the CHIST-ERA IV program. Users’ interaction with gallery objects raises questions about their visual interest in artworks and styles. Moreover, attention, curiosity, and other emotional states can be monitored and analyzed. Natural gaze behavior data and eye position were recorded by the built-in eye-tracking module of the HTC Vive VR headset. Eye gaze results are grouped according to users’ behavior schemes, and the corresponding perceptual-cognitive styles are recognized. In parallel, usability tests and surveys were administered to identify the basic features of a user-centered interface for virtual environments across most of the project’s timeline. A total of sixty participants were selected from different university faculties and secondary schools. Participants’ prior knowledge of art was evaluated in a pretest, characterizing their level of art sensitivity. Data were collected over two months. Each participant gave written informed consent before participation. To reduce the high-dimensional data to a relatively low-dimensional subspace, nonlinear algorithms such as multidimensional scaling and the novel t-distributed Stochastic Neighbor Embedding (t-SNE) technique were used. In this way, digital art objects can be classified by the multimodal temporal characteristics of the eye-tracking measures, revealing signatures that describe selected artworks.
The current research establishes the optimal position on the aesthetics-utility scale, because contemporary application interfaces must be designed to be both functional and aesthetic. The study also analyses the visual experience of subsamples of visitors differentiated, e.g., by frequency of museum visits or cultural interests. Eye tracking data may also show how to better place artefacts and paintings or increase their visibility where possible.
Keywords: eye tracking, VR, UX, visual art, virtual gallery, visual communication
263 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using the Discrete Element Method
Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare
Abstract:
The Discrete Element Method is a promising approach to modeling the microscopic behaviors of granular materials. The quality of the simulations, however, depends on the model parameters utilized. The present study focuses on calibration and validation of the discrete element parameters for Cuxhaven sand based on experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted for the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters such as rolling resistance, inter-particle friction coefficient, confining pressure and effective modulus on the void ratio of the generated sample was investigated. During the shear stage, the effect of parameters such as inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient and normal-to-shear stiffness ratio was examined. The parameters were calibrated such that the simulations reproduce macro-mechanical characteristics like dilation angle, peak stress, and stiffness. The calibrated parameters were then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with the experiments, which proves the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters were applied to predict micromechanical behavior, including the evolution of contact force chains, buckling of columns of particles, non-coaxiality, and sample inhomogeneity, during a simple shear test. The evolution of contact force chains vividly shows the distribution and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, showing looser structures in the top and bottom layers.
Buckling of columns is not observed, owing to the small rolling resistance coefficient adopted for the simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus, the micromechanical behaviors are well described using the calibrated and validated material parameters.
Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test
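The one-at-a-time sensitivity sweeps described in the abstract can be sketched generically. The `simulate_void_ratio` response surface below is a made-up stand-in for a DEM run, not the authors' solver; only the sweep logic is the point: vary each parameter over its span while the others stay at base values, and compare the output ranges.

```python
def simulate_void_ratio(friction_coeff, rolling_resistance, effective_modulus):
    """Stand-in for a DEM sample-generation run: a hypothetical linear
    response surface, NOT a real DEM solver."""
    return (0.55 + 0.30 * friction_coeff + 0.10 * rolling_resistance
            - 0.02 * (effective_modulus / 100.0))

def oat_sensitivity(base, spans, model):
    """One-at-a-time sweep: for each parameter, evaluate the model at the
    low and high end of its span with all other parameters at base values,
    and report the resulting output range per parameter."""
    effects = {}
    for name, (lo, hi) in spans.items():
        outputs = []
        for value in (lo, hi):
            params = dict(base)
            params[name] = value
            outputs.append(model(**params))
        effects[name] = max(outputs) - min(outputs)
    return effects

base = {"friction_coeff": 0.5, "rolling_resistance": 0.1, "effective_modulus": 100.0}
spans = {"friction_coeff": (0.1, 0.9),
         "rolling_resistance": (0.0, 0.4),
         "effective_modulus": (50.0, 200.0)}
effects = oat_sensitivity(base, spans, simulate_void_ratio)
```

Ranking the `effects` values identifies which parameter the void ratio is most sensitive to, which is the information a calibration campaign uses to decide where to spend tuning effort.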
262 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps
Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo
Abstract:
With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on the CPU, issues GL commands to the GPU for rendering the images displayed by the currently running Web app, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already degraded by the delayed execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach incurs two technical challenges: identification of the bottleneck resource and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to determine which resource, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU. Thus, the Window Manager is on the critical path that determines the quality of the user experience, and it is executed purely by the CPU.
The proposed scheme uses the weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%, and consequently, the proposed scheme decreases the performance level of the CPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the GPU. Even for the processing unit that is not on the critical path, an excessive performance drop may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, while worsening FPS by only 1.07% on average.
Keywords: interactive applications, power management, QoS, Web apps, WebGL
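The decision rule described above can be sketched as follows. The 90%/60% thresholds and the CPU/GPU actions follow the abstract as written; the recency-weighted averaging scheme, the window size, and all class and method names are assumptions made for illustration only.

```python
from collections import deque

class BottleneckGovernor:
    """Sketch of the abstract's decision rule: a weighted moving average of
    Window Manager CPU utilization selects which resource to step down."""

    def __init__(self, window=4):
        self.samples = deque(maxlen=window)

    def decide(self, wm_utilization):
        """Feed one Window Manager utilization sample (0.0-1.0); return the
        action: 'step_cpu_down', 'step_gpu_down' or 'hold'."""
        self.samples.append(wm_utilization)
        # Recency-weighted average damps transient spikes: newer samples
        # weigh more, so a single outlier cannot flip the decision.
        weights = list(range(1, len(self.samples) + 1))
        avg = sum(w * s for w, s in zip(weights, self.samples)) / sum(weights)
        if avg > 0.90:
            return "step_cpu_down"
        if avg < 0.60:
            return "step_gpu_down"
        return "hold"

gov = BottleneckGovernor()
actions = [gov.decide(u) for u in (0.95, 0.97, 0.96, 0.55, 0.40, 0.35)]
```

The middle `hold` band (60-90%) mirrors the abstract's gradual-lowering behavior: the governor waits for the smoothed utilization to leave the band before acting again.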
261 Multi-Objective Optimization of the Thermal-Hydraulic Behavior for a Sodium Fast Reactor with a Gas Power Conversion System and a Loss of off-Site Power Simulation
Authors: Avent Grange, Frederic Bertrand, Jean-Baptiste Droin, Amandine Marrel, Jean-Henry Ferrasse, Olivier Boutin
Abstract:
CEA and its industrial partners are designing a gas Power Conversion System (PCS) based on a Brayton cycle for the ASTRID sodium-cooled fast reactor. Investigations of the control and regulation requirements to operate this PCS during operating, incidental and accidental transients are necessary to adapt core heat removal. To this aim, we developed a methodology to optimize the thermal-hydraulic behavior of the reactor during normal operations, incidents and accidents. This methodology consists of a multi-objective optimization for a specific sequence, whose aim is to increase component lifetime by simultaneously reducing several thermal stresses and to bring the reactor into a stable state. Furthermore, the multi-objective optimization complies with safety and operating constraints. Operating, incidental and accidental sequences use specific regulations to control the thermal-hydraulic reactor behavior; each regulation is defined by a setpoint, a controller and an actuator. In the multi-objective problem, the parameters used to solve the optimization are the setpoints and the settings of the controllers associated with the regulations included in the sequence. In this way, the methodology allows designers to define an optimized, sequence-specific control strategy of the plant and hence to adapt PCS piloting at its best. The multi-objective optimization is performed by evolutionary algorithms coupled to surrogate models built on variables computed by the thermal-hydraulic system code CATHARE2. The methodology is applied to a loss of off-site power sequence. Three variables are controlled: the sodium outlet temperature of the sodium-gas heat exchanger, the turbomachine rotational speed and the water flow through the heat sink. These regulations are chosen in order to minimize thermal stresses on the gas-gas heat exchanger, on the sodium-gas heat exchanger and on the vessel. The main results of this work are optimal setpoints for the three regulations.
Moreover, the Proportional-Integral-Derivative (PID) controller settings are considered, and the efficient actuators used in the controls are chosen based on sensitivity analysis results. Finally, the optimized regulation system and the reactor control procedure provided by the optimization process are verified through a direct CATHARE2 calculation.
Keywords: gas power conversion system, loss of off-site power, multi-objective optimization, regulation, sodium fast reactor, surrogate model
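A minimal discrete PID loop, of the kind whose setpoints and gains the optimization tunes, can be sketched as below. The gains and the first-order "plant" are illustrative only and stand in for the CATHARE2 thermal-hydraulic model; the temperature values are arbitrary.

```python
class PID:
    """Minimal discrete PID controller; gains here are illustrative."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement):
        """Return the actuator command for one time step."""
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: a temperature responding proportionally to the command
# (purely illustrative, not a reactor model).
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=350.0, dt=1.0)
temp = 300.0
for _ in range(200):
    command = pid.update(temp)
    temp += 0.05 * command  # plant response per step
```

In the paper's setting, the optimizer searches over `setpoint` and the `(kp, ki, kd)` settings of each regulation so that the closed-loop transient minimizes the thermal-stress objectives.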
260 Transport Mode Selection under Lead Time Variability and Emissions Constraint
Authors: Chiranjit Das, Sanjay Jharkharia
Abstract:
This study is focused on transport mode selection under lead time variability and an emissions constraint. In order to reduce the carbon emissions generated by transportation, organizations often face a dilemma in transport mode selection, since reducing logistics cost and reducing emissions are conflicting objectives. Another important aspect of the transportation decision is lead-time variability, which is rarely considered in the transport mode selection problem. Thus, in this study, we provide a comprehensive mathematical model to decide transport mode selection under an emissions constraint. We also extend our work by analysing the effect of lead time variability on transport mode selection through a sensitivity analysis. To account for lead time variability in the model, two normally distributed random variables are incorporated: unit lead time variability and lead time demand variability. Therefore, this study addresses the following questions: How will transport mode selection decisions be affected by lead time variability? How will lead time variability impact total supply chain cost under carbon emissions? To accomplish these objectives, a total transportation cost function is developed, including unit purchasing cost, unit transportation cost, emissions cost, holding cost during lead time, and a penalty cost for stock-outs due to lead time variability. A set of modes is available to transport between each pair of nodes; in this paper, we consider four transport modes: air, road, rail, and water. Transportation cost, distance, and emissions level for each transport mode are considered deterministic and static. Each mode has a different emissions level depending on the distance and product characteristics.
Emissions cost is indirectly affected by lead time variability if the transport mode is switched from a lower-emissions mode to a higher-emissions mode in order to reduce the penalty cost. We provide a numerical analysis in order to study the effectiveness of the mathematical model. We found that the chance of a stock-out during lead time is higher when the variability of lead time and lead time demand is higher. Numerical results show that the penalty cost of the air transport mode is negligible, meaning the chance of a stock-out is essentially zero, but it carries higher holding and emissions costs. Therefore, the air transport mode is selected only for emergency orders to reduce penalty cost; otherwise, rail and road transport are the most preferred modes of transportation. Thus, this paper contributes to the literature with a novel approach to deciding transport mode under emissions cost and lead time variability. The model can be extended by studying the effect of lead time variability under other strategic transportation issues such as the modal split option, the full truck load strategy, and the demand consolidation strategy.
Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection
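The cost comparison described above can be sketched numerically. All cost figures, distribution parameters and the reorder point below are invented for illustration; only the structure follows the abstract: transport plus emissions plus holding cost, plus a stock-out penalty driven by normally distributed lead-time demand.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def total_cost(mode, demand, reorder_point):
    """Hypothetical per-mode cost model; every constant is illustrative."""
    mu = demand * mode["lead_time_mean"]          # mean lead-time demand
    sigma = demand * mode["lead_time_std"]        # its variability
    p_stockout = 1.0 - normal_cdf(reorder_point, mu, sigma)
    transport = mode["unit_transport_cost"] * demand
    emissions = mode["emission_factor"] * mode["distance"] * demand * 0.01
    holding = 0.2 * mu                            # holding cost during lead time
    penalty = 500.0 * p_stockout                  # expected stock-out penalty
    return transport + emissions + holding + penalty

modes = {
    "air":   {"lead_time_mean": 1, "lead_time_std": 0.2,
              "unit_transport_cost": 5.0, "emission_factor": 8.0, "distance": 100},
    "road":  {"lead_time_mean": 3, "lead_time_std": 0.8,
              "unit_transport_cost": 1.5, "emission_factor": 2.0, "distance": 100},
    "rail":  {"lead_time_mean": 5, "lead_time_std": 1.0,
              "unit_transport_cost": 1.0, "emission_factor": 0.8, "distance": 100},
    "water": {"lead_time_mean": 9, "lead_time_std": 2.5,
              "unit_transport_cost": 0.5, "emission_factor": 0.4, "distance": 100},
}
demand, reorder_point = 10, 60
costs = {name: total_cost(m, demand, reorder_point) for name, m in modes.items()}
best = min(costs, key=costs.get)
```

Even with these made-up numbers the qualitative picture matches the abstract: air has near-zero stock-out penalty but high transport and emissions costs, while slower modes trade cheap transport for a growing expected penalty.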
259 New Derivatives 7-(diethylamino)quinolin-2-(1H)-one Based Chalcone Colorimetric Probes for Detection of Bisulfite Anion in Cationic Micellar Media
Authors: Guillermo E. Quintero, Edwin G. Perez, Oriel Sanchez, Christian Espinosa-Bustos, Denis Fuentealba, Margarita E. Aliaga
Abstract:
Bisulfite ion (HSO3-) has been used as a preservative in food, drinks, and medication. However, it is well known that HSO3- can cause health problems such as asthma and allergic reactions. For this reason, the development of analytical methods for detecting this ion has gained great interest. In line with the above, the current use of colorimetric and/or fluorescent probes as a detection technique has acquired great relevance due to their high sensitivity and accuracy. In this context, 2-quinolinone derivatives have been found to possess promising activity as antiviral agents, sensitizers in solar cells, antifungals, antioxidants, and sensors. In particular, 7-(diethylamino)-2-quinolinone derivatives have attracted attention in recent years since their suitable photophysical properties make them promising fluorescent probes. In addition, there is evidence that photophysical properties and reactivity can be affected by the study medium, such as micellar media. Based on this background, chalcone-based 7-(diethylamino)-2-quinolinone derivatives should be able to be incorporated into a cationic micellar environment (cetyltrimethylammonium bromide, CTAB). Furthermore, the supramolecular control induced by the micellar environment should increase the reactivity of these derivatives towards nucleophilic analytes such as HSO3- (Michael-type addition reaction), leading to new colorimetric and/or fluorescent probes. In the present study, two chalcone-based 7-(diethylamino)-2-quinolinone derivatives, DQD1-2, were synthesized according to methods reported in the literature. These derivatives were structurally characterized by 1H and 13C NMR and HRMS-ESI. In addition, UV-VIS and fluorescence studies determined absorption bands near 450 nm, emission bands near 600 nm, fluorescence quantum yields near 0.01, and fluorescence lifetimes of 5 ps.
These photophysical properties were improved in the presence of the cationic micellar medium (CTAB) thanks to the formation of adducts with association constants of the order of 2.5x10^5 M^-1, which increased the quantum yields to 0.12 and gave two fluorescence lifetimes near 120 and 400 ps for DQD1 and DQD2. Moreover, thanks to the micellar medium, the reactivity of these derivatives with nucleophilic analytes such as HSO3- was increased. This was demonstrated through kinetic studies, which showed an increase in the bimolecular rate constants in the presence of the micellar medium. Finally, probe DQD1 was chosen as the best sensor, since it detected HSO3- with excellent results.
Keywords: bisulfite detection, cationic micelle, colorimetric probes, quinolinone derivatives
258 Nondestructive Inspection of Reagents under High Attenuated Cardboard Box Using Injection-Seeded THz-Wave Parametric Generator
Authors: Shin Yoneda, Mikiya Kato, Kosuke Murate, Kodo Kawase
Abstract:
In recent years, there have been numerous attempts to smuggle narcotic drugs and chemicals by concealing them in international mail. Combatting this requires a non-destructive technique that can identify such illicit substances in mail. Terahertz (THz) waves can pass through a wide variety of materials, and many chemicals show specific frequency-dependent absorption, known as a spectral fingerprint, in the THz range. Therefore, it is reasonable to investigate non-destructive mail inspection techniques that use THz waves. For this reason, in this work, we tried to identify reagents under highly attenuating shielding materials using an injection-seeded THz-wave parametric generator (is-TPG). Our THz spectroscopic imaging system using the is-TPG consisted of two non-linear crystals for the emission and detection of THz waves. A micro-chip Nd:YAG laser and a continuous-wave tunable external cavity diode laser were used as the pump and seed source, respectively. The pump beam and seed beam were injected into the LiNbO₃ crystal, satisfying the noncollinear phase-matching condition, in order to generate a high-power THz wave. The emitted THz wave irradiated the sample, which was raster-scanned by an x-z stage while the frequency was changed, and we obtained multispectral images. The transmitted THz wave was then focused onto another crystal for detection and up-converted to a near-infrared detection beam based on nonlinear optical parametric effects, and the detection beam intensity was measured using an infrared pyroelectric detector. It was difficult to identify reagents in a cardboard box because of high noise levels. In this work, we introduce improvements for noise reduction and image clarification, so that the intensity of the near-infrared detection beam is converted correctly to the intensity of the THz wave. A Gaussian spatial filter is also introduced for a clearer THz image.
Through these improvements in the analysis methods, we succeeded in identifying reagents hidden in a 42-mm-thick cardboard box filled with several obstacles, which attenuate the signal by 56 dB at 1.3 THz. Using this system, THz spectroscopic imaging was possible for saccharides, and it may also be applied to cases where illicit drugs are hidden in the box and multiple reagents are mixed together. Moreover, THz spectroscopic imaging can be achieved through even thicker obstacles by introducing an NIR detector with higher sensitivity.
Keywords: nondestructive inspection, principal component analysis, terahertz parametric source, THz spectroscopic imaging
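The Gaussian spatial filter mentioned above is standard image smoothing; a minimal pure-Python version for a 2D intensity map might look like the following (the kernel radius and sigma are arbitrary choices, not the paper's).

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 2D Gaussian kernel of size (2*radius+1)^2."""
    kernel = [[math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
               for dx in range(-radius, radius + 1)]
              for dy in range(-radius, radius + 1)]
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]

def gaussian_filter(image, radius=1, sigma=1.0):
    """Smooth a 2D intensity map (list of lists); edges are clamped."""
    kernel = gaussian_kernel(radius, sigma)
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at borders
                    xx = min(max(x + dx, 0), w - 1)
                    acc += kernel[dy + radius][dx + radius] * image[yy][xx]
            out[y][x] = acc
    return out

# A single noisy spike in a flat intensity map gets spread out,
# which is how single-pixel noise is suppressed in the THz image.
image = [[0.0] * 5 for _ in range(5)]
image[2][2] = 1.0
smoothed = gaussian_filter(image)
```

Because the kernel is normalized, total intensity is preserved away from the borders; only the spatial noise is averaged down.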
257 Queer Social Realism and Architecture in British Cinema: Tenement Housing, Unions and the Affective Body
Authors: Christopher Pullen
Abstract:
This paper explores the significance of British cinema in the late 1950s and early 1960s as offering a renaissance of realist discourse in the representation of everyday social issues. Offering a rejection of Hollywood cinema and the superficiality of the middle classes, these ‘kitchen sink dramas’, often set within modest and sometimes squalid domestic and social environments, focused on the political struggle of the disenfranchised, examining poverty, the oppressed and the outsider. While films like Look Back in Anger and Room at the Top looked primarily at male heterosexual subjectivity, films like A Taste of Honey and Victim focused on female and queer male narratives. Framing the urban landscape as a discursive architectural arena, representing basic living conditions and threatening social worlds, these iconic films established new storytelling processes for the outsider. This paper examines this historical context while foregrounding the contemporary films Beautiful Thing (Hettie Macdonald, 1996), Weekend (Andrew Haigh, 2011) and Pride (Matthew Warchus, 2014), employing textual analysis in relation to theories of affect as defined by writers such as Laura U. Marks and Sara Ahmed. Considering both romance narratives and public demonstrations of unity, in which the queer ‘affective’ body is placed within architectural and social space, Beautiful Thing tells the story of gay male teenagers falling in love despite oppression from family and school; Weekend examines a one-night stand between young gay men and the unlikelihood of commitment alongside the drive for sensitivity; and Pride foregrounds the historical relationship between queer youth activists and the miners’ union, who were on strike between 1984 and 1985.
These films frame the queer ‘affective’ body within politicized public space, evident in working-class men’s clubs, tenement housing and brutal modernist tower blocks, focusing on architectural features such as windows, doorways and staircases and relating temporality, desire and change. Through such an examination, a hidden history of gay male performativity is revealed, framing the potential of contemporary cinema to focus on the context of the outsider in encouraging social change.
Keywords: queer, affect, cinema, architecture, life chances
256 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant
Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula
Abstract:
Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one pre-transplant AI-ECG screen for AF above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest.
When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084, as 20 patients were missing a postoperative ECG), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08, the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is low, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility in prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning
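The reported screening metrics are simple confusion-matrix arithmetic. The sketch below defines them generically; the counts passed in are invented to roughly reproduce the reported NPV (98%) and sensitivity (66%) and are not taken from the study.

```python
def screening_metrics(tp, fn, tn, fp):
    """Standard confusion-matrix summary for a binary screening test."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of true cases flagged
        "specificity": tn / (tn + fp),   # fraction of non-cases cleared
        "npv": tn / (tn + fn),           # P(no disease | negative screen)
        "ppv": tp / (tp + fp),           # P(disease | positive screen)
    }

# Illustrative counts only, chosen so sensitivity = 0.66 and NPV ~ 0.98.
m = screening_metrics(tp=33, fn=17, tn=850, fp=184)
```

A high NPV with modest sensitivity is exactly the profile that makes a screen useful for ruling patients out of intensive monitoring rather than ruling them in.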
255 Identifying Large-Scale Photovoltaic and Concentrated Solar Power Hot Spots: Multi-Criteria Decision-Making Framework
Authors: Ayat-Allah Bouramdane
Abstract:
Solar photovoltaic (PV) and concentrated solar power (CSP) plants do not burn fossil fuels and release no greenhouse gases while generating electricity; they could therefore meet the world's needs for low-carbon power generation. The power output of a solar PV module or CSP collector depends on the temperature and the amount of solar radiation received by its surface. Hence, determining the most suitable locations for PV and CSP systems is crucial to maximizing their output power. This study aims to provide a hands-on and plausible approach to the multi-criteria evaluation of the site suitability of PV and CSP plants using a combination of Geographic Referenced Information (GRI) and the Analytic Hierarchy Process (AHP). The GRI-based AHP approach is applied to specify the criteria and sub-criteria; to identify the unsuitable and the low-, moderate-, high- and very-high-suitability areas for each GRI layer; to perform the pairwise comparison at each level of the hierarchy structure based on experts' knowledge; and to calculate the weights using AHP, producing the final map of solar PV and CSP plant suitability in Morocco, with a particular focus on the city of Dakhla. The results recognize that solar irradiation is the main decision factor for integrating these technologies into Morocco's energy policy goals, but they explicitly account for other factors that can not only limit the potential of certain locations but even exclude them, as for Dakhla, which is classified as an unsuitable area. We discuss the sensitivity of PV and CSP site suitability to different aspects, such as the methodology, the climate conditions, and the technology used for each source, and provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's existing PV and CSP installations are located within areas deemed suitable and by discussing several cases that provide mutual benefits across the Food-Energy-Water nexus.
The adapted methodology and resulting suitability map could be used by researchers or engineers to provide helpful information for decision-makers on site selection, design, and planning of future solar plants, especially in areas suffering from energy shortages, such as Dakhla, which is now one of Africa's most promising investment hubs and is especially attractive to investors looking to root their operations in Africa and export to European markets.
Keywords: analytic hierarchy process, concentrated solar power, dakhla, geographic referenced information, Morocco, multi-criteria decision-making, photovoltaic, site suitability
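The AHP weighting step described above can be sketched in a few lines. This is an illustrative example only: the three criteria, the matrix values, and the weighting method (the geometric-mean approximation rather than the principal eigenvector) are assumptions for the sketch, not values taken from the study.

```python
from math import prod

def ahp_weights(matrix):
    """Approximate AHP priority weights with the geometric-mean (row) method."""
    n = len(matrix)
    geo_means = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical expert judgments on Saaty's 1-9 scale for three criteria
# (e.g. solar irradiation, terrain slope, distance to the grid): irradiation
# is judged 3x as important as slope and 5x as important as grid distance.
pairwise = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     2.0],
    [1 / 5.0, 1 / 2.0, 1.0],
]

weights = ahp_weights(pairwise)
# weights sum to 1; irradiation receives the largest weight (~0.65)
```

In a full GRI-based workflow, each criterion's weight would then multiply its reclassified raster layer before the layers are summed into the suitability map.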
Procedia PDF Downloads 172
254 Synthesis of MIPs towards Precursors and Intermediates of Illicit Drugs and Their following Application in Sensing Unit
Authors: K. Graniczkowska, N. Beloglazova, S. De Saeger
Abstract:
The threat of synthetic drugs is one of the most significant current drug problems worldwide. The use of drugs of abuse has increased dramatically during the past three decades. Among others, amphetamine-type stimulants (ATS) are globally the second most widely used drugs after cannabis, exceeding the use of cocaine and heroin. ATS are potent central nervous system (CNS) stimulants, capable of inducing euphoric states similar to cocaine. Recreational use of ATS is widespread, even though irreversible damage to the CNS has been reported. ATS also pose a major environmental problem: their production discharges large volumes of liquid waste into the sewage system. There is therefore a demand for robust and sensitive sensors that can detect ATS and their intermediates in environmental water samples; a rapid and simple test is required. Antibody-based tests cannot be applied to environmental water samples, which can be a harsh matrix. Therefore, molecularly imprinted polymers (MIPs), known as synthetic antibodies, were chosen for this approach. MIPs are characterized by high mechanical and thermal stability and show chemical resistance over a broad pH range and in various organic or aqueous solvents. These properties make them the preferred type of receptor for the harsh conditions imposed by environmental samples. To the best of our knowledge, there are no existing MIP-based sensors toward amphetamine and its intermediates, and few commercial MIPs for this application are available. Therefore, the aim of this study was to compare different techniques for obtaining MIPs with high specificity towards ATS and to characterize them for subsequent use in a sensing unit. MIPs against amphetamine and its intermediates were synthesized using several techniques, such as electro-, thermo- and UV-initiated polymerization.
Different monomers, cross-linkers and initiators, in various ratios, were tested to obtain the best sensitivity and polymer properties. Subsequently, specificity and selectivity were compared with those of commercially available MIPs against amphetamine. Different linkers, such as lipoic acid, 3-mercaptopropionic acid and tyramine, were examined in combination with several immobilization techniques to select the best procedure for attaching particles to the sensor surface. The experiments performed allowed an optimal method to be chosen for the intended sensor application. The stability of the MIPs under extreme conditions, such as highly acidic or basic media, was determined. The results obtained support the applicability of a MIP-based sensor for sewage-system testing.
Keywords: amphetamine type stimulants, environment, molecular imprinted polymers, MIPs, sensor
Procedia PDF Downloads 249
253 Auditory Perception of Frequency-Modulated Sweeps and Reading Difficulties in Chinese
Authors: Hsiao-Lan Wang, Chun-Han Chiang, I-Chen Chen
Abstract:
In Mandarin Chinese, lexical tones play an important role in providing contrasts in word meaning. They are pitch patterns and can be quantified by the fundamental frequency (F0), expressed in Hertz (Hz). In this study, we aimed to investigate the influence of frequency discrimination on Chinese children's reading abilities. Fifty participants from the 3rd and 4th grades, including 24 children with reading difficulties and 26 age-matched children, were examined. A series of cognitive, language, reading and psychoacoustic tests was administered. Magnetoencephalography (MEG) was also employed to study the children's auditory sensitivity. Auditory frequency perception was measured with slide-up pitch, slide-down pitch and frequency-modulated tones. The results showed that children with Chinese reading difficulties were significantly poorer at phonological awareness and at auditory discrimination in the identification of frequency-modulated tones. Chinese children's character-reading performance was significantly related to lexical tone awareness and to auditory perception of frequency-modulated tones. In the MEG measure, we compared the magnetic mismatch negativity (MMNm), from 100 to 200 ms, between the two groups. There were no significant group differences during the perceptual discrimination of standard sounds or of fast-up and fast-down frequency sweeps. However, the data revealed significant cluster differences between groups in the discrimination of slow-up and slow-down frequency sweeps. For the slow-up stimulus, the cluster demonstrated an upward field map at 106-151 ms (p < .001) with a strong peak at 127 ms. Source analyses with a two-dipole model and with CLARA source localization from 100 to 200 ms both indicated a strong source in the left temporal area with 45.845% residual variance.
Similar results were found for the slow-down stimulus, with a larger upward current at 110-142 ms (p < .05) and a peak at 117 ms in the left temporal area (47.857% residual variance). In short, we found a significant group difference in the MMNm while children processed frequency-modulated tones with slow temporal changes. The findings may imply that perception of sound-frequency signals with slower temporal modulations is related to reading and language development in Chinese. Our study may also support the recent hypothesis that underlying non-verbal auditory temporal deficits account for the difficulties in literacy development seen in developmental dyslexia.
Keywords: Chinese Mandarin, frequency modulation sweeps, magnetoencephalography, mismatch negativity, reading difficulties
Procedia PDF Downloads 575
252 Investigations on Pyrolysis Model for Radiatively Dominant Diesel Pool Fire Using Fire Dynamic Simulator
Authors: Siva K. Bathina, Sudheer Siddapureddy
Abstract:
Pool fires form when a flammable liquid accidentally spills on the ground or water and ignites. A pool fire is a buoyancy-driven diffusion flame. Many pool-fire accidents have occurred during the processing, handling and storage of liquid fuels in the chemical and oil industries, causing enormous damage to property as well as loss of lives. Pool fires are complex in nature due to the strong interaction among combustion, heat and mass transfer, and pyrolysis at the fuel surface. Moreover, the experimental study of such large, complex fires involves fire-safety issues and practical difficulties. In the present work, large eddy simulations of such complex fire scenarios are performed using the Fire Dynamics Simulator. A 1 m diesel pool fire is considered, diesel being the fuel most commonly involved in fire accidents. Fire simulations are performed with two different boundary conditions: in one, the fuel is in the liquid state and a pyrolysis model is invoked; in the other, the fuel is assumed to be initially in the vapor state, and the mass loss rate is prescribed. A domain of size 11.2 m × 11.2 m × 7.28 m with a uniform structured grid is chosen for the numerical simulations. A grid sensitivity analysis is performed, and a non-dimensional grid size of 12, corresponding to an 8 cm grid, is adopted. Flame properties such as mass burning rate, irradiance, and time-averaged axial flame temperature profile are predicted. The predicted steady-state mass burning rate is 40 g/s, within the uncertainty limits of previously reported experimental data (39.4 g/s). Although the profile of irradiance with height at a distance from the fire is broadly in line with the experimental data, the location of the maximum irradiance is shifted to a higher position.
This may be due to the lack of sophisticated models for species transport along with combustion and radiation in the continuous zone. Furthermore, the axial temperatures are not predicted well in any of the zones for either boundary condition. The present study shows that the existing models are not sufficient for modeling blended fuels like diesel. The predictions depend strongly on the experimental values of the soot yield. Future experiments are necessary to generalize the soot yield for different fires.
Keywords: burning rate, fire accidents, fire dynamic simulator, pyrolysis
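The non-dimensional grid size quoted above is conventionally the ratio D*/δx, where D* is the characteristic fire diameter used with FDS. A minimal sketch of that calculation, assuming a heat release rate of about 1 MW and standard ambient conditions (both illustrative assumptions, not figures from the study):

```python
from math import sqrt

def char_fire_diameter(q_kw, rho=1.204, cp=1.005, t_amb=293.0, g=9.81):
    """Characteristic fire diameter D* = (Q / (rho * cp * T * sqrt(g)))**(2/5).

    q_kw: heat release rate in kW; rho [kg/m^3], cp [kJ/(kg K)], t_amb [K].
    """
    return (q_kw / (rho * cp * t_amb * sqrt(g))) ** 0.4

q = 1000.0   # assumed heat release rate, kW (illustrative value)
dx = 0.08    # grid size, m (the 8 cm cell from the study)

d_star = char_fire_diameter(q)
resolution = d_star / dx   # non-dimensional grid size, ~12 for these inputs
```

A ratio of roughly 10-20 is commonly treated as a moderately to well-resolved fire plume, which is consistent with the value of 12 adopted in the grid sensitivity analysis.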
Procedia PDF Downloads 195
251 Status of Sensory Profile Score among Children with Autism in Selected Centers of Dhaka City
Authors: Nupur A. D., Miah M. S., Moniruzzaman S. K.
Abstract:
Autism is a neurobiological disorder that affects the physical, social, and language skills of a person. A child with autism has difficulty processing, integrating, and responding to sensory stimuli. Current estimates show that 45% to 96% of children with autism spectrum disorder demonstrate sensory difficulties. As autism is a burning issue worldwide, services for it have become a high priority in Bangladesh. Sensory deficits hamper not only a child's normal development but also the learning process and functional independence. The purpose of this study was to determine the prevalence of sensory dysfunction among children with autism and to recognize common patterns of sensory dysfunction. A cross-sectional study design was chosen to carry out this research. The study enrolled eighty children with autism and their parents, recruited by systematic sampling. Data were collected with the Short Sensory Profile (SSP), a 38-item questionnaire; qualified graduate occupational therapists interviewed parents and observed the children's responses to sensory-related activities in four selected autism centers in Dhaka, Bangladesh. Item analyses were conducted, using the SSP and the Statistical Package for the Social Sciences (SPSS) version 21.0, to identify the items yielding the highest reported sensory processing dysfunction among these children. The study revealed that 78.25% of the children with autism had significant sensory processing dysfunction based on their sensory responses to the relevant activities. Under-responsiveness/sensation seeking and auditory filtering were the least common problems among them.
On the other hand, most of them (95%) showed definite to probable differences in sensory processing, including under-responsiveness/sensation seeking, auditory filtering, and tactile sensitivity. Sixty-four of the children showed a definite difference in sensory processing; these children suffered from sensory difficulties that greatly affected their Activities of Daily Living (ADLs) as well as their social interaction with others. Almost 95% of the children with autism require intervention to overcome or normalize the problem. The results give insight into the types of sensory processing dysfunction to consider during diagnosis and when determining treatment. Early identification of sensory problems is therefore very important and will help in providing appropriate sensory input to minimize maladaptive behavior and help children reach the normal range of adaptive behavior.
Keywords: autism, sensory processing difficulties, sensory profile, occupational therapy
Procedia PDF Downloads 64
250 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection
Authors: Mahshid Arabi
Abstract:
With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but vulnerabilities and security threats have increased correspondingly. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. The study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews with 30 information security experts were conducted to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine-learning-based intrusion detection systems, and network security protocols. The AES and RSA encryption algorithms were used for data protection, and machine learning models such as random forests and neural networks were utilized for intrusion detection. Statistical analyses were performed using SPSS. To evaluate the effectiveness of the proposed model, t-tests and ANOVA were employed, and results were measured using the accuracy, sensitivity, and specificity of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the proposed comprehensive model reduced cyber-attacks by an average of 85%.
Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the results obtained, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as effective tools for reducing security risks. This research demonstrates that effective, up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and the continuous training of human resources.
Keywords: data protection, digital technologies, information security, modern management
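The accuracy, sensitivity, and specificity indicators used to evaluate the intrusion detection models follow directly from a confusion matrix. A minimal sketch with hypothetical counts (the numbers are invented for illustration, not results from the study):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical intrusion-detection run: 90 attacks flagged, 10 missed,
# 880 benign flows passed through, 20 false alarms raised.
acc, sens, spec = confusion_metrics(tp=90, fp=20, tn=880, fn=10)
# acc = 0.97, sens = 0.90, spec ≈ 0.978
```

In this framing, sensitivity measures the share of real attacks the detector catches, while specificity measures how rarely benign traffic triggers a false alarm; both matter for an operational intrusion detection system.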
Procedia PDF Downloads 28
249 Study of COVID-19 Intensity Correlated with Specific Biomarkers and Environmental Factors
Authors: Satendra Pal Singh, Dalip Kr. Kakru, Jyoti Mishra, Rajesh Thakur, Tarana Sarwat
Abstract:
COVID-19 remains an enigma as far as morbidity and mortality are concerned. The rate of recovery varies from person to person and depends on the accessibility of the healthcare system and on the roles played by physicians and caregivers. It is envisaged that, with the passage of time, people will become immune to this virus, and that those who are vulnerable will sustain themselves with the help of vaccines. The proposed study examines whether the severity of COVID-19 is associated with specific biomarkers and how these correlate with age and gender. We will assess the overall homeostasis of persons affected by coronavirus infection and of those who have recovered from it. Some people show severe effects, while others show very mild symptoms despite low Ct values. Thus far, it is unclear why the new strain has different effects on different people in terms of age, gender, and ABO blood type. According to the data, the fatality rate was 10.5 percent for patients with heart disease, 7.3 percent for diabetics, and 6 percent for those with other comorbidities. However, why some COVID-19 cases are worse than others is not fully explainable to date. Overall, the data show that ABO blood group affects susceptibility to SARS-CoV-2 infection, and another study also shows phenotypic effects of blood group related to COVID-19. It is an accepted fact that females have stronger immune systems than males, which may be related to females having two X chromosomes, possibly carrying a more effective immunity-boosting gene capable of protecting the female; specific sex hormones also induce a better immune response in a given gender. This calls for in-depth analysis to gain insight into this dilemma.
COVID-19 is still not fully characterized, and thus we are not very familiar with its biology, mode of infection, susceptibility, and overall viral load in the human body. How many virus particles are needed to infect a person? How do comorbidities contribute to coronavirus infection? Since the emergence of this virus in late 2019, a large number of papers have been published and vaccines have been developed, but a large number of questions remain unanswered. Human proneness to COVID-19 infection needs to be established in order to develop a better strategy to fight this virus. Our study will examine the impact of demography on the severity of COVID-19 infection, look into the gender-specific sensitivity to COVID-19, and assess the variation of different biochemical markers in COVID-19-positive patients. Besides, we will study the correlation, if any, between COVID-19 severity and ABO blood group type, and the occurrence of the most common blood group type among positive patients.
Keywords: coronavirus, ABO blood group, age, gender
Procedia PDF Downloads 97
248 Biosensor for Determination of Immunoglobulin A, E, G and M
Authors: Umut Kokbas, Mustafa Nisari
Abstract:
Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by activated B cells that have differentiated into plasma cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response, and knowledge of immunoglobulin structure and classes is also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together and can thereby provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This affects the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation, and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and selective IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics, but patients with severe infections require intravenous immunoglobulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections, or as a sign that the body is overreacting to allergens.
Also, since the immune response can vary with different antigens, measuring specific antibody levels aids in interpreting the immune response after immunization or vaccination. Immune deficiencies usually appear in childhood. In immunology and allergy clinics, a method that is fast and reliable, and that allows more convenient and uncomplicated sampling from children, would be more useful than the classical methods for the diagnosis and follow-up of diseases, especially childhood hypogammaglobulinemia. The antibodies were attached to the electrode surface via a poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peaks obtained in the electrochemical study were evaluated. According to the data obtained, immunoglobulin determination can be performed with a biosensor. In further studies, it will be useful to develop a medical diagnostic kit through biomedical engineering and to increase its sensitivity.
Keywords: biosensor, immunosensor, immunoglobulin, infection
Procedia PDF Downloads 101
247 The Importance of Developing Pedagogical Agency Capacities in Initial Teacher Formation: A Critical Approach to Advance in Social Justice
Authors: Priscilla Echeverria
Abstract:
This paper addresses initial teacher formation as a formative space in which pedagogy students develop the capacity for pedagogical agency to contribute to social justice, considering ethical, political, and epistemic dimensions. The paper first discusses the concepts of agency, pedagogical interaction, and social justice from a critical perspective, and then offers preliminary results on the capacity for pedagogical agency in novice teachers, drawn from the analysis of critical incidents as a research methodology. The study is motivated by the concern that, responding to the current neoliberal scenario, many initial teacher formation (ITF) programs have reduced the meaning of education to instruction, and of pedagogy to methodology, favouring the formation of a technical professional over a reflective or critical one. From this concern, the study proposes that the restitution of the subject is an urgent task in teacher formation: it is essential to enable future teachers' capacity for action and to advance in eliminating institutionalized oppression insofar as it affects that capacity. Given that oppression takes place in human interaction, I propose that initial teacher formation develop sensitivity and educate the gaze to identify oppression and take action against it, both in pedagogical interactions, which configure political, ethical, and epistemic subjectivities, and in the hidden and official curriculum. All this rests on the premise that modelling democratic and dialogical interactions is fundamental to any program that seeks to contribute to a more just and empowered society. The contribution of this study lies in opening a discussion in an area about which we know little: the impact of the types of interaction offered by university teaching in ITF on the capacity of future teachers to be pedagogical agents.
For this reason, this study seeks to gather evidence of the results of this formation by analysing the capacity for pedagogical agency of novice teachers, in other words, how capable graduates of secondary pedagogy programs are, in their first pedagogical experiences, of acting and making decisions that put the formative purposes they can define autonomously before the technical or bureaucratic demands imposed by the curriculum or the official culture. This discussion is part of my doctoral research, "The importance of developing the capacity for ethical-political-epistemic agency in novice teachers during initial teacher formation to contribute to social justice", which I am currently developing in the Educational Research programme of Lancaster University, United Kingdom, as a Conicyt fellow of the 2019 cohort.
Keywords: initial teacher formation, pedagogical agency, pedagogical interaction, social justice, hidden curriculum
Procedia PDF Downloads 95
246 The Influence of the Variety and Harvesting Date on Haskap Composition and Anti-Diabetic Properties
Authors: Aruma Baduge Kithma Hansanee De Silva
Abstract:
Haskap (Lonicera caerulea L.), also known as blue honeysuckle, is a recently commercialized berry crop in Canada. Haskap berries are rich in polyphenols, including anthocyanins, which are known for potential health-promoting effects. Cyanidin-3-O-glucoside (C3G) is the most prominent anthocyanin in haskap berries. Recent literature reveals the efficacy of C3G in reducing the risk of type 2 diabetes (T2D), an increasingly common health issue around the world, characterized as a metabolic disorder of hyperglycemia and insulin resistance. C3G has been demonstrated to exert anti-diabetic effects in various ways, including improvement in insulin sensitivity and inhibition of carbohydrate-hydrolyzing enzymes, including alpha-amylase and alpha-glucosidase. The goal of this study was to investigate the influence of variety and harvesting date on haskap composition, biological activity, and anti-diabetic properties. The polyphenols in four commercially grown haskap cultivars, Aurora, Rebecca, Larissa and Evie, across five harvesting stages (H1-H5), were extracted separately in 80% ethanol and analyzed to characterize their phenolic profiles. Haskap berries contain different types of polyphenols, including flavonoids and phenolic acids, anthocyanins being the major type of flavonoid. C3G is the most prominent anthocyanin, accounting for 79% of the total anthocyanin in all extracts. The ethanol extract of Larissa at H5 had the highest C3G content (1212.3±63.9 mg/100 g FW), while Evie at H1 had the lowest (96.9±40.4 mg/100 g FW). The average C3G content of Larissa from H1 to H5 varied from 208 to 1212 mg/100 g FW. Quercetin-3-rutinoside (Q3Rut) is the major flavonol, and the highest level was observed in Rebecca at H4 (47.81 mg/100 g FW).
The haskap berries also contained phenolic acids, but approximately 95% of the phenolic acid content consisted of chlorogenic acid. The cultivar Larissa had a higher level of anthocyanins than the other three cultivars. The highest total phenolic content was observed in Evie at H5 (2.97±1.03 mg/g DW) and the lowest in Rebecca at H1 (1.47±0.96 mg/g DW). The antioxidant capacity of Evie at H5 was the highest among the cultivars (14.40±2.21 µmol TE/g DW), and the lowest was observed in Aurora at H3 (5.69±0.34 µmol TE/g DW). Furthermore, Larissa at H5 showed the greatest inhibition of the carbohydrate-hydrolyzing enzymes alpha-glucosidase and alpha-amylase. In conclusion, Larissa at H5 demonstrated the highest polyphenol content and anti-diabetic properties.
Keywords: anthocyanin, cyanidin-3-O-glucoside, haskap, type 2 diabetes
Procedia PDF Downloads 454
245 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study
Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni
Abstract:
Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually under-assessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of the Scenario Test. The Scenario Test assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To establish the reliability and validity of the Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus. All measures were performed in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of the Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High internal consistency (Cronbach's α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99) were found. Interrater agreement on individual item scores fell between good and excellent levels. Correlations with a measure of language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05).
Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia than for those with aphasia. Conclusions: The psychometric qualities of the Scenario Test-GR support the reliability and validity of the tool for the assessment of the functional communication of Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, to orient aphasia rehabilitation goal-setting towards the activity and participation levels, and as an outcome measure of everyday communication. Future studies will focus on measuring sensitivity to change in PWA with severe non-fluent aphasia.
Keywords: the scenario test GR, functional communication assessment, people with aphasia (PWA), tool validation
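The internal-consistency figure reported above (Cronbach's α) can be computed directly from item-level scores. A small illustrative sketch with made-up ratings for 3 items and 5 respondents (the data are invented; only the formula reflects the statistic used in the study):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items is one list of respondent scores per test item."""
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Made-up scores: 3 items rated by 5 respondents, deliberately correlated
# so the items "hang together" and alpha comes out high.
scores = [
    [1, 2, 3, 4, 5],
    [2, 2, 3, 5, 5],
    [1, 3, 3, 4, 4],
]
alpha = cronbach_alpha(scores)   # ≈ 0.95, i.e. high internal consistency
```

Values above roughly 0.9 are conventionally read as excellent internal consistency, which is the range the Scenario Test-GR reaches.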
Procedia PDF Downloads 128
244 A Digital Health Approach: Using Electronic Health Records to Evaluate the Cost Benefit of Early Diagnosis of Alpha-1 Antitrypsin Deficiency in the UK
Authors: Sneha Shankar, Orlando Buendia, Will Evans
Abstract:
Alpha-1 antitrypsin deficiency (AATD) is a rare, genetic, multisystemic condition. Underdiagnosis is common, leading to chronic pulmonary and hepatic complications, increased resource utilization, and additional costs to the healthcare system. Currently, there is limited evidence on the direct medical costs of AATD diagnosis in the UK. This study explores the economic impact of AATD patients during the 3 years before diagnosis and identifies the major cost drivers, using primary and secondary care electronic health record (EHR) data. The 3-year pre-diagnosis window was chosen based on the ability of our tool to identify patients earlier. The AATD algorithm was created using published disease criteria and applied to the EHRs of 148 known AATD patients found in a primary care database of 936,148 patients (413,674 from Biobank and 501,188 from a single primary care locality). Among the 148 patients, 9 were flagged earlier by the tool, with an average potential gain of 3 (range 1-6) years per patient. We analysed the primary care journey of 101 of the 148 AATD patients, and the Hospital Episode Statistics (HES) data of 20 patients, all of whom had at least 3 years of clinical history in their records before diagnosis. Codes related to laboratory tests, clinical visits, referrals, hospitalization days, day cases, and inpatient admissions attributable to AATD were examined over this 3-year period. The average cost per patient was calculated, and direct medical costs were modelled based on a mean prevalence of 100 AATD patients in a population of 500,000. A deterministic sensitivity analysis (DSA) of ±20% was performed to determine the major cost drivers. Cost data were obtained from the NHS National Tariff 2020/21, the National Schedule of NHS Costs 2018/19, PSSRU 2018/19, and private care tariffs.
The total direct medical cost of one hundred AATD patients over the three years before diagnosis in primary and secondary care in the UK was £3,556,489, with an average direct cost per patient of £35,565. The vast majority of this total direct cost (95%) was associated with inpatient admissions (£3,378,229). The DSA determined that the costs associated with tier-2 laboratory tests and inpatient admissions were the greatest contributors to direct costs in primary and secondary care, respectively. This retrospective study shows the role of EHRs in calculating direct medical costs and the potential benefit of new technologies for the early identification of patients with AATD to reduce the economic burden in primary and secondary care in the UK.
Keywords: alpha-1 antitrypsin deficiency, costs, digital health, early diagnosis
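The one-way deterministic sensitivity analysis described in this abstract can be sketched as follows. The function varies one cost component at a time by ±20% while holding the others at base case and ranks components by the spread induced in the total cost; all figures except the reported inpatient-admission total are hypothetical placeholders, not the study's actual line items.

```python
# One-way deterministic sensitivity analysis (DSA): vary each cost component
# by +/-20% while holding the others at base case, then rank components by
# the spread induced in the total cost.

def one_way_dsa(base_costs, swing=0.20):
    """Return (component, total-cost spread) pairs, largest spread first."""
    total = sum(base_costs.values())
    spreads = {}
    for name, cost in base_costs.items():
        high = total - cost + cost * (1 + swing)
        low = total - cost + cost * (1 - swing)
        spreads[name] = high - low  # equals 2 * swing * cost
    return sorted(spreads.items(), key=lambda kv: kv[1], reverse=True)

costs = {
    "inpatient_admissions": 3_378_229,  # reported in the abstract
    "tier2_lab_tests": 90_000,          # hypothetical
    "clinical_visits": 50_000,          # hypothetical
    "referrals": 38_260,                # hypothetical
}
ranking = one_way_dsa(costs)
print(ranking[0][0])  # the component with the widest spread dominates
```

Under these placeholder values, inpatient admissions dominate the ranking, mirroring the study's finding for secondary care.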
243 Monitoring Key Biomarkers Related to the Risk of Low Breastmilk Production in Women, Leading to a Positive Impact in Infant’s Health
Authors: R. Sanchez-Salcedo, N. H. Voelcker
Abstract:
Currently, low breast milk production in women is one of the leading health complications in infants. It has recently been demonstrated that exclusive breastfeeding, especially up to a minimum of 6 months, significantly reduces respiratory and gastrointestinal infections, which are the main causes of death in infants. However, current data show that a high percentage of women stop breastfeeding their children because they perceive an inadequate supply of milk, and only 45% of children under 6 months are breastfed. There is, therefore, a clear need to design and develop a biosensor that is sensitive and selective enough to identify and validate a panel of milk biomarkers allowing the early diagnosis of this condition. In this context, electrochemical biosensors could be a powerful tool for meeting all the requirements in terms of reliability, selectivity, sensitivity, cost efficiency and potential for multiplex detection. Moreover, they are suitable for the development of POC devices and wearable sensors. In this work, we report the development of two types of sensing platforms targeting several biomarkers, including miRNAs and hormones present in breast milk and dysregulated in this pathological condition. The first type of sensing platform consists of an enzymatic sensor for the detection of lactose, one of the main components of milk. In this design, we used a gold surface as the electrochemical transducer due to its several advantages, such as the variety of strategies available for its rapid and efficient functionalization with bioreceptors or capture molecules. For the second type of sensing platform, a nanoporous silicon film (pSi) was chosen as the electrode material for the design of DNA sensors and aptasensors targeting miRNAs and hormones, respectively.
The pSi matrix offers a large surface area with an abundance of active sites for the immobilization of bioreceptors, as well as tunable characteristics that increase selectivity and specificity, making it an ideal alternative material. The analytical performance of the designed biosensors was not only characterized in buffer but also validated in minimally treated breastmilk samples. We have demonstrated the potential of electrochemical transducers on pSi and gold surfaces for monitoring clinically relevant biomarkers associated with a heightened risk of low milk production in women. This approach, in which the nanofabrication techniques and the functionalization methods were optimized to greatly increase the efficacy of the biosensor, provides a foundation for further research and development of targeted diagnosis strategies.
Keywords: biosensors, electrochemistry, early diagnosis, clinical markers, miRNAs
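As an illustration of how the analytical performance of such a biosensor might be characterized, the sketch below fits a linear calibration curve by least squares and derives a sensitivity (the slope) and a limit of detection via the common 3σ-of-blank rule. The data points and blank noise are invented for illustration and are not measurements from the study.

```python
# Hypothetical biosensor calibration: fit signal vs. concentration, then
# report sensitivity (slope) and limit of detection (3 * sigma_blank / slope).
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])      # analyte concentration, e.g. mM
current = np.array([0.05, 1.1, 2.0, 4.1, 8.0])  # sensor response, e.g. uA

slope, intercept = np.polyfit(conc, current, 1)  # linear calibration fit
sigma_blank = 0.03                               # assumed blank noise, uA
lod = 3 * sigma_blank / slope                    # limit of detection

print(slope > 0 and lod > 0)
```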
242 Printed Electronics for Enhanced Monitoring of Organ-on-Chip Culture Media Parameters
Authors: Alejandra Ben-Aissa, Martina Moreno, Luciano Sappia, Paul Lacharmoise, Ana Moya
Abstract:
Organ-on-Chip (OoC) stands out as a highly promising approach for drug testing, presenting a cost-effective and ethically superior alternative to conventional in vivo experiments. These cutting-edge devices emerge from the integration of tissue engineering and microfluidic technology, faithfully replicating the physiological conditions of targeted organs. Consequently, they offer a more precise understanding of drug responses without the ethical concerns associated with animal testing. In addressing the limitations that conventional, time-consuming analysis techniques impose on OoC, Lab-on-Chip (LoC) emerges as a disruptive technology capable of providing real-time monitoring without compromising sample integrity. This work develops LoC platforms that can be integrated within OoC platforms to monitor essential culture media parameters, including glucose, oxygen, and pH, facilitating the straightforward exchange of sensing units within a dynamic and controlled environment without disrupting cultures. This approach preserves the experimental setup, minimizes the impact on cells, and enables efficient, prolonged measurement. The LoC system is fabricated following the patented methodology protected by EU patent EP4317957A1. One of the key challenges, integrating sensors in a biocompatible, feasible, robust, and scalable manner, is addressed through fully printed sensors, ensuring a customized, cost-effective, and scalable solution. With this technique, sensor reliability is enhanced, providing high sensitivity and selectivity for accurate parameter monitoring. In the present study, the LoC is validated by measuring complete culture media. The oxygen sensor provided a measurement range from 0 mgO2/L to 6.3 mgO2/L. The pH sensor demonstrated a measurement range spanning 2 pH units to 9.5 pH units. Additionally, the glucose sensor achieved a measurement range from 0 mM to 11 mM. All measurements were performed with the sensors integrated in the LoC.
In conclusion, this study showcases the impactful synergy of OoC technology with LoC systems using fully printed sensors, marking a significant step forward in ethical and effective biomedical research, particularly in drug development. This innovation not only meets current demands but also lays the groundwork for future advancements in precision and customization within scientific exploration.
Keywords: organ on chip, lab on chip, real time monitoring, biosensors
241 Winter – Not Spring – Climate Drives Annual Adult Survival in Common Passerines: A Country-Wide, Multi-Species Modeling Exercise
Authors: Manon Ghislain, Timothée Bonnet, Olivier Gimenez, Olivier Dehorter, Pierre-Yves Henry
Abstract:
Climatic fluctuations affect the demography of animal populations, generating changes in population size, phenology, distribution and community assemblages. However, very few studies have identified the underlying demographic processes. For short-lived species like common passerine birds, are these changes generated by changes in adult survival or in fecundity and recruitment? This study tests for an effect of annual climatic conditions (spring and winter) on annual, local adult survival at very large spatial (a country, 252 sites), temporal (25 years) and biological (25 species) scales. The Constant Effort Sites ringing scheme has allowed the collection of capture-mark-recapture data for 100,000 adult individuals since 1989 across metropolitan France, thus documenting annual, local survival rates of the most common passerine birds. We specifically developed a set of multi-year, multi-species, multi-site Bayesian models describing variations in local survival and recapture probabilities. This method allows for a statistically powerful hierarchical assessment (global versus species-specific) of the effects of climate variables on survival. A major part of between-year variation in survival rate was common to all species (74% of between-year variance), whereas only 26% of temporal variation was species-specific. Although changing spring climate is commonly invoked as a cause of population size fluctuations, spring climatic anomalies (mean precipitation or temperature for March-August) do not impact adult survival: only 1% of between-year variation in species survival is explained by spring climatic anomalies. However, for sedentary birds, winter climatic anomalies (North Atlantic Oscillation) had a significant, quadratic effect on adult survival, birds surviving less during intermediate years than during more extreme years. For migratory birds, we do not detect an effect of winter climatic anomalies (Sahel rainfall).
We will analyze the life-history traits (migration, habitat, thermal range) that could explain species' differing sensitivity to winter climate anomalies. Overall, we conclude that changes in population sizes of passerine birds are unlikely to be the consequence of climate-driven mortality (or emigration) in spring but could be induced by other demographic parameters, like fecundity.
Keywords: Bayesian approach, capture-recapture, climate anomaly, constant effort sites scheme, passerine, seasons, survival
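The quadratic winter effect described in this abstract, where survival is lowest in intermediate years and higher in extreme years, can be illustrated with a toy model on the logit scale. The coefficients below are hypothetical, not the fitted values from the study.

```python
# Toy quadratic climate effect on annual adult survival: the logit of survival
# is convex in the winter anomaly (NAO), producing a U-shaped survival curve
# with a minimum in intermediate years. Coefficients are hypothetical.
import math

def survival(nao, a=1.5, b=0.0, c=0.4):
    logit = a + b * nao + c * nao ** 2  # convex in NAO
    return 1.0 / (1.0 + math.exp(-logit))

mid = survival(0.0)       # intermediate winter
extreme = survival(2.0)   # extreme winter
print(mid < extreme)      # survival is lower in intermediate years
```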
240 Transgenerational Impact of Intrauterine Hyperglycaemia to F2 Offspring without Pre-Diabetic Exposure on F1 Male Offspring
Authors: Jun Ren, Zhen-Hua Ming, He-Feng Huang, Jian-Zhong Sheng
Abstract:
Adverse intrauterine stimuli during critical or sensitive periods in early life may lead to health risks not only in the later life span, but also in further generations. Intrauterine hyperglycaemia, a major feature of gestational diabetes mellitus (GDM), is a typical adverse environment for the development of both the F1 fetus and F1 gamete cells. However, there is scarce information on phenotypic differences in metabolic memory between somatic cells and germ cells exposed to intrauterine hyperglycaemia. The direct transmission effect of intrauterine hyperglycaemia per se has not been assessed either. In this study, we built a GDM mouse model and selected male GDM offspring without a pre-diabetic phenotype as our founders, to exclude postnatal diabetic influence on gametes, and thereby investigated the direct transmission effect of intrauterine hyperglycaemia exposure on F2 offspring; we further compared the metabolic differences between affected F1-GDM male offspring and F2 offspring. A GDM mouse model of intrauterine hyperglycemia was established by intraperitoneal injection of streptozotocin after pregnancy. Pups of GDM mothers were fostered by normal control mothers. All the mice were fed standard food. Male GDM offspring without a metabolic dysfunction phenotype were crossed with normal female mice to obtain F2 offspring. Body weight, glucose tolerance test, insulin tolerance test and homeostasis model assessment of insulin resistance (HOMA-IR) index were measured in both generations at 8 weeks of age. Some of the F1-GDM male mice showed impaired glucose tolerance (p < 0.001); none of the F1-GDM male mice showed impaired insulin sensitivity. Body weight of F1-GDM mice showed no significant difference from control mice. Some of the F2-GDM offspring exhibited impaired glucose tolerance (p < 0.001), and all the F2-GDM offspring exhibited a higher HOMA-IR index (p < 0.01 for normal glucose tolerance individuals vs. control, p < 0.05 for glucose intolerance individuals vs. control).
All the F2-GDM offspring exhibited a higher ITT curve than controls (p < 0.001 for normal glucose tolerance individuals, p < 0.05 for glucose intolerance individuals, vs. control). F2-GDM offspring had higher body weight than control mice (p < 0.001 for normal glucose tolerance individuals, p < 0.001 for glucose intolerance individuals, vs. control). While glucose intolerance is the only phenotype that F1-GDM male mice may exhibit, the F2 male generation of healthy F1-GDM fathers showed insulin resistance, increased body weight and/or impaired glucose tolerance. These findings imply that intrauterine hyperglycaemia exposure affects germ cells and somatic cells differently; thus, F1 and F2 offspring demonstrated distinct metabolic dysfunction phenotypes. Moreover, intrauterine hyperglycaemia exposure per se has a strong influence on the F2 generation, independent of postnatal metabolic dysfunction exposure.
Keywords: inheritance, insulin resistance, intrauterine hyperglycaemia, offspring
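The HOMA-IR index reported in this abstract follows the standard homeostasis-model formula: fasting glucose (mmol/L) multiplied by fasting insulin (µU/mL), divided by 22.5. The sketch below applies it to invented illustrative values, not the study's measurements.

```python
# Homeostasis model assessment of insulin resistance (standard formula):
# HOMA-IR = fasting glucose (mmol/L) * fasting insulin (uU/mL) / 22.5
def homa_ir(glucose_mmol_per_l, insulin_uU_per_ml):
    return glucose_mmol_per_l * insulin_uU_per_ml / 22.5

# Illustrative values: higher fasting insulin at similar glucose raises HOMA-IR.
control = homa_ir(5.0, 10.0)
f2_gdm = homa_ir(5.5, 18.0)
print(round(control, 2), round(f2_gdm, 2))  # → 2.22 4.4
```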
239 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main aim of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling. They describe crop growth in interaction with the environment as dynamical systems. However, calibrating such dynamical systems is difficult, because it amounts to a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression or Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity.
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side perspectives. The mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
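The evaluation protocol described here, 5-fold cross-validation on 720 county-level records scored with RMSEP and MAEP, can be sketched minimally with NumPy alone. In a real reproduction the fold indices would feed a regressor such as scikit-learn's RandomForestRegressor; here only the metrics and the splitting are shown.

```python
# Minimal sketch of the two accuracy metrics and a 5-fold split.
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    d = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.sqrt(np.mean(d ** 2)))

def maep(y_true, y_pred):
    """Mean absolute error of prediction."""
    d = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.mean(np.abs(d)))

def k_fold_indices(n, k=5, seed=0):
    """Shuffle record indices and split them into k near-equal folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

folds = k_fold_indices(720, k=5)       # 720 county-level records
print(len(folds), [len(f) for f in folds])  # 5 folds of 144 records each
```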
238 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e. its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be extraordinary, especially considering that the applied multi-scale modelling approach does not involve parameter fitting to the data. This validates the suggested approach and proves its universality at the same time.
In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.
Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
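The hybrid Monte Carlo step described in this abstract, which samples a branched polymeric microstructure chain by chain, can be caricatured as follows. The geometric chain-growth probability and per-monomer branching rate are illustrative placeholders, not the paper's LDPE kinetics.

```python
# Caricature of Monte Carlo microstructure sampling: chain length follows a
# geometric (Flory-like) growth process, and long-chain branch counts are
# drawn per monomer unit. Rates are illustrative placeholders.
import random

def sample_chain(p_propagation=0.999, lcb_per_monomer=2e-4,
                 rng=random.Random(42)):
    """Return (chain length, long-chain branch count) for one sampled chain."""
    length = 1
    while rng.random() < p_propagation:  # keep propagating with prob p
        length += 1
    # expected branch count scales with chain length
    branches = sum(1 for _ in range(length) if rng.random() < lcb_per_monomer)
    return length, branches

chains = [sample_chain() for _ in range(1000)]
mean_len = sum(l for l, _ in chains) / len(chains)
print(mean_len > 0)  # an ensemble from which distributions can be built
```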
237 European Hinterland and Foreland: Impact of Accessibility, Connectivity, Inter-Port Competition on Containerization
Authors: Dial Tassadit Rania, Figueiredo De Oliveira Gabriel
Abstract:
In this paper, we investigate the relationship between ports and their hinterland and foreland environments, and the competitive relationships between the ports themselves. These two environments are changing and evolving, introducing new challenges for commercial and economic development at the regional, national and international levels. Because of the rise of containerization, shipping costs and port handling costs have decreased considerably due to economies of scale. The volume of maritime trade has increased substantially and the markets served by ports have expanded. On these bases, overlapping hinterlands can give rise to competition between ports. Our main contribution, compared to the existing literature on this issue, is to build a set of hinterland, foreland and competition indicators. Using these indicators, we investigate the effect of hinterland accessibility, foreland connectivity and inter-port competition on the containerized traffic of European ports. For this, we have a 10-year panel database covering 2004 to 2014. Our hinterland indicators are given by two measures of accessibility; they describe the market potential of a port and are calculated using information on population and wealth (GDP). We calculate population and wealth for different neighborhoods within a distance from a port ranging from 100 to 1,000 km. For the foreland, we produce two indicators: port connectivity and the number of partners for each port. Finally, we compute two indicators of inter-port competition and a market concentration indicator (Hirschman-Herfindahl) for different neighborhood distances around the port. We then apply a fixed-effects model to test the relationships above, and conduct a sensitivity analysis for each of these indicators to support the results obtained.
The econometric results of the general model, given by regressing the containerized traffic of European ports on the accessibility indicators, the LSCI for port i, and the inter-port competition indicator, show a positive and significant effect of accessibility to wealth, but not to population. The results are also positive and significant for the two indicators of connectivity and competition. One of the main results of this research is that port development, measured here by the growth of containerized traffic, is strongly related to the development of the port's hinterland and foreland environments. In addition, it is the market potential, given by the wealth of the hinterland, that has an impact on the containerized traffic of a port. However, accessibility to a large population pool is not important for understanding the dynamics of containerized port traffic. Furthermore, in order to continue to develop, a port must penetrate its hinterland to a depth exceeding 100 km around the port and seek markets beyond this perimeter. Port authorities could focus their marketing efforts on the immediate hinterland, which, as the results show, may not be captive, and thus engage in new approaches to port governance to make the port more attractive.
Keywords: accessibility, connectivity, European containerization, European hinterland and foreland, inter-port competition
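The Hirschman-Herfindahl concentration indicator used in this abstract is the sum of squared market shares of the ports sharing a hinterland neighborhood. A minimal sketch, with invented traffic figures:

```python
# Hirschman-Herfindahl index (HHI) over ports sharing a hinterland
# neighborhood: sum of squared traffic shares. Ranges from 1/n (n equal
# ports) up to 1.0 (a single port captures all traffic).
def hhi(traffic):
    total = sum(traffic)
    return sum((t / total) ** 2 for t in traffic)

print(round(hhi([100]), 2))             # one port only → 1.0
print(round(hhi([25, 25, 25, 25]), 2))  # four equal ports → 0.25
```

Higher values thus indicate a more concentrated, less competitive neighborhood around the port.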