Search results for: antimicrobial sensitivity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2665

265 Distinguishing between Bacterial and Viral Infections Based on Peripheral Human Blood Tests Using Infrared Microscopy and Multivariate Analysis

Authors: H. Agbaria, A. Salman, M. Huleihel, G. Beck, D. H. Rich, S. Mordechai, J. Kapelushnik

Abstract:

Viral and bacterial infections are responsible for a variety of diseases. These infections share similar symptoms, such as fever, sneezing, inflammation, vomiting, diarrhea and fatigue, so physicians may find it difficult to distinguish between viral and bacterial infections on the basis of symptoms alone. Bacterial infections differ from viral infections in many other important respects, including the structure of the organisms and their response to various medications. In many cases it is difficult to know the origin of the infection, and when necessary the physician orders a blood test, a urine test or a tissue culture to diagnose the infection type. Using these methods, the time that elapses between receipt of the patient material and presentation of the test results to the clinician is typically too long (> 24 hours). In many cases this time is crucial for saving the life of the patient and for planning the right medical treatment. Rapid laboratory identification of bacterial and viral infections is therefore of great importance for effective treatment, especially in emergencies. Blood was collected from 50 patients with a confirmed viral infection and 50 with a confirmed bacterial infection. White blood cells (WBCs) and plasma were isolated, deposited on a zinc selenide slide, dried and measured under a Fourier transform infrared (FTIR) microscope to obtain their infrared absorption spectra. The acquired spectra of WBCs and plasma were analyzed in order to differentiate between the two types of infection. In this study, the potential of FTIR microscopy in tandem with multivariate analysis was evaluated for identifying the agent causing a human infection as either bacterial or viral, based on an analysis of the infrared vibrational spectra of the blood components [i.e., white blood cells (WBCs) and plasma].
The time required for analysis and evaluation after obtaining the blood sample was less than one hour. In the analysis, minute spectral differences in several bands of the FTIR spectra of WBCs were observed between the viral and bacterial infection groups. By employing feature extraction with linear discriminant analysis (LDA), a sensitivity of ~92% and a specificity of ~86% for diagnosing the infection type were achieved. This preliminary study suggests that FTIR spectroscopy of WBCs is a potentially feasible and efficient tool for diagnosing the infection type.
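The band-selection-plus-LDA step described above can be sketched as follows. This is a minimal illustration with synthetic stand-in "spectra", not the authors' pipeline or data; the band positions, sample counts and noise model are all assumptions.

```python
# Hypothetical sketch: LDA classification of FTIR-like spectra.
# Data are synthetic stand-ins, not the study's measurements.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Synthetic "spectra": 100 samples x 300 wavenumber bins, with a small
# class-dependent shift in a few bands (the "minute spectral differences").
X = rng.normal(size=(100, 300))
y = np.array([0] * 50 + [1] * 50)          # 0 = viral, 1 = bacterial
X[y == 1, 100:110] += 0.8                  # assumed informative bands

# Feature extraction: keep only the informative region, then apply LDA.
bands = slice(95, 115)
clf = LinearDiscriminantAnalysis()
pred = cross_val_predict(clf, X[:, bands], y, cv=5)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
sensitivity = tp / (tp + fn)               # fraction of bacterial cases found
specificity = tn / (tn + fp)               # fraction of viral cases found
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

Cross-validated prediction is used so that sensitivity and specificity are estimated on held-out samples, mirroring the kind of validation an abstract like this implies.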

Keywords: viral infection, bacterial infection, linear discriminant analysis, plasma, white blood cells, infrared spectroscopy

Procedia PDF Downloads 224
264 Leadership in the Emergence Paradigm: A Literature Review on the Medusa Principles

Authors: Everard van Kemenade

Abstract:

Many quality improvement activities are planned. Leaders are strongly involved in missions, visions and strategic planning. They use, consciously or unconsciously, the PDCA cycle, also known as the Deming cycle. After the planning, the plans are carried out and the results or effects are measured. If the results show that the goals in the plan have not been achieved, adjustments are made in the next plan or in the execution of the processes. Then the cycle is run through again. Traditionally, the PDCA cycle is advocated as a means to an end. However, PDCA is especially fit for planned, ordered, certain contexts; it fits the empirical and referential quality paradigms. For uncertain, unordered, unplanned processes, something else might be needed instead of Plan-Do-Check-Act. Due to the complexity of our society, the influence of context and the uncertainty of today's world, not every activity can be planned anymore. At the same time, organisations need to be more innovative than ever. That confronts leaders with 'wicked tendencies' and raises the question: how can one innovate without being able to plan? Complexity science studies the interactions of a diverse group of agents that bring about change in times of uncertainty, e.g. when radical innovation is co-created. This process is called emergence. This research study explores the role of leadership in the emergence paradigm. The aim of the article is to study how leadership can support the emergence of innovation in a complex context. First, clarity is given on the concepts used in the research question: complexity, emergence, innovation and leadership. Thereafter, a literature search is conducted to answer the research question. The topics 'emergent leadership' and 'complexity leadership' were chosen for an exploratory search in Google and Google Scholar using the berry-picking method.
The exclusion criterion was emergence in disciplines other than organizational development, or emergence in the mere sense of 'arising'. The literature search gave 45 hits. Twenty-seven articles were excluded after reading the title and abstract because they did not address the topic of emergent leadership and complexity. After reading the remaining articles in full, one more was excluded because it used emergent in the limited meaning of 'arising', and eight more were excluded because their topic did not match the research question of this article. That brings the total of the search to 17 articles. The useful conclusions from the articles were merged and grouped under overarching topics using thematic analysis. The findings are that six topics prevail when looking at possibilities for leadership to facilitate innovation: enabling, sharing values, dreaming, interacting, context sensitivity and adaptivity. Together, in Dutch, they form the acronym Medusa.

Keywords: complexity science, emergence, leadership in the emergence paradigm, innovation, the Medusa principles

Procedia PDF Downloads 28
263 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging

Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa

Abstract:

Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists, recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), yielding 35 benign and 12 malignant nodules. All MR images were acquired at 1.5 T: a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After manual segmentation of the lesions by a radiologist and the extraction of 150 radiomic features (30 features at each of 5 subsequent time points), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to better learn from the data. Results: A naive Bayes algorithm working on 79 features selected by the TWIST system proved to be the best-performing ML system, with a sensitivity of 96%, a specificity of 78% and a global accuracy of 87% (average values of the two training-testing procedures, ab-ba). The results showed that, in the subset of 47 non-specific nodules, the algorithm correctly predicted the outcome of 45 nodules that the expert radiologists could not classify.
Conclusion: In this pilot study, we identified a radiomic approach allowing ML systems to perform well in the diagnosis of non-specific nodules at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors when the radiologist is not able to identify the kind of lesion, and could reduce the need for long follow-up. Clinical Relevance: This machine learning algorithm could be essential in supporting the radiologist in the early diagnosis of non-specific nodules, avoiding strenuous follow-up and painful biopsy for the patient.
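The "ab-ba" training-testing crossover mentioned in the results can be sketched as below. The TWIST feature-selection step is proprietary, so this sketch assumes the 79 features are already selected and uses synthetic placeholder data; the classifier is scikit-learn's Gaussian naive Bayes, a plausible stand-in for the "NaiveBayes algorithm" named above.

```python
# Illustrative sketch of an "ab-ba" crossover with a naive Bayes classifier.
# Features and labels are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 79))              # 79 selected radiomic features
y = rng.integers(0, 2, size=80)            # 0 = benign, 1 = malignant
X[y == 1] += 0.7                           # assumed class-dependent signal

half = len(X) // 2
a, b = slice(None, half), slice(half, None)

def fit_score(train, test):
    clf = GaussianNB().fit(X[train], y[train])
    return accuracy_score(y[test], clf.predict(X[test]))

# Train on half A / test on half B, then swap, and average the two runs.
acc = (fit_score(a, b) + fit_score(b, a)) / 2
print(f"ab-ba mean accuracy: {acc:.2f}")
```

The two-way swap gives each sample one turn as held-out data, which is what makes the averaged accuracy an honest estimate.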

Keywords: breast, machine learning, MRI, radiomics

Procedia PDF Downloads 267
262 Psychological Sense of School Membership and Coping Ability as Predictors of Multidimensional Life Satisfaction among School Children

Authors: Mary Banke Iyabo Omoniyi

Abstract:

Children in developing countries face complex social, economic, political and environmental contexts that create a wide range of challenges to surmount as they journey through school from childhood to adolescence. Many of these children have little or no personal resources and social support to confront these challenges. This study employed a descriptive survey research design to investigate psychological sense of school membership and coping skills as they relate to the multidimensional life satisfaction of school children. The sample consists of 835 school children aged 7-11 years who were randomly selected from twenty schools in Ondo State, Nigeria. The instrument for data collection was a questionnaire consisting of four sections, A, B, C and D. Section A contained items on the children’s bio-data (age, school, father’s and mother’s educational qualifications). Section B is the Multidimensional Children Life Satisfaction Questionnaire (MCLSQ), a 20-item Likert-type scale with a response format ranging from Never = 1 to Almost always = 4; the MCLSQ was designed to profile children’s satisfaction with important domains (school, family and friends). Section C is the Psychological Sense of School Membership Questionnaire (PSSMQ), with 18 items and a response format ranging from Not at all true = 1 to Completely true = 5. Section D is the Self-Report Coping Questionnaire (SRCQ), which has 16 items with responses ranging from Never = 1 to Always = 5. The instrument has a test-retest reliability coefficient of r = 0.87, while the sectional reliabilities for the MCLSQ, PSSMQ and SRCQ are 0.86, 0.92 and 0.89 respectively. The results indicated that self-reported coping skill was significantly correlated with multidimensional life satisfaction (r = 0.592; p < 0.05). However, the correlation between multidimensional life satisfaction and psychological sense of school membership was not significant (r = 0.038; p > 0.05).
The regression analysis indicated that the contributions of mother’s education and father’s education to the children’s psychological sense of school membership were R = 0.923 (adjusted R² = 0.440) and R = 0.730 (adjusted R² = 0.446), respectively. The results also indicate that the contribution of gender to psychological sense of school membership was R = 0.782 (adjusted R² = 0.478) for males and R = 0.998 (adjusted R² = 0.932) for females. In conclusion, the mother’s educational qualification was found to contribute more to children’s psychological sense of membership and multidimensional life satisfaction than the father’s. The girl child was also found to have a greater sense of belonging to the school setting than the boy child. The counselling implications and recommendations, among others, were geared towards positive emotional gender sensitivity with regard to the male folk. Education stakeholders are also encouraged to make the school environment more conducive and gender-friendly.
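The significance test behind a correlation like r = 0.592 (p < 0.05) can be reproduced in a few lines. This is a generic sketch on synthetic scores, with the effect size chosen only for illustration, not the study's data.

```python
# Minimal sketch of a Pearson correlation test between coping-skill
# scores and life-satisfaction scores. Data are synthetic stand-ins.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 200                                    # assumed sample size
coping = rng.normal(size=n)

# Life satisfaction built to correlate moderately with coping skill.
satisfaction = 0.6 * coping + rng.normal(scale=0.8, size=n)

r, p = pearsonr(coping, satisfaction)
print(f"r = {r:.3f}, p = {p:.2e}")
```

With a moderate true correlation and n = 200, the p-value falls far below 0.05, which is the pattern the abstract reports for coping skill but not for school membership.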

Keywords: multidimensional life satisfaction, psychological sense of school, coping skills, counselling implications

Procedia PDF Downloads 310
261 Study of Open Spaces in Urban Residential Clusters in India

Authors: Renuka G. Oka

Abstract:

From chowks to streets to verandahs to courtyards, residential open spaces hold a very significant place in the traditional urban neighborhoods of India. At various levels of intersection, the open spaces, with attributes like juxtaposition with the built fabric, scale, climate sensitivity and response, and multi-functionality, reflect and respond to patterns of human interaction; these spaces also tend to be quite well utilized. By contrast, it is common to see an imbalanced utilization of open spaces in recently planned residential clusters. This may be due to a lack of activity generators around them, wrong locations, excess provision, or improper incorporation of the aforementioned design attributes. These casual observations suggest the necessity for a systematic study of current residential open spaces. This exploratory study thus attempts to draw lessons through a structured inspection of residential open spaces, to understand the effective environment as revealed through their use patterns. Here, residential open spaces are considered in a wide sense, incorporating all the un-built fabric around; they thus include both use spaces and access spaces. For the study, open spaces in ten exemplary housing clusters/societies built during the last ten years across India are studied. A threefold inquiry is attempted. The first part identifies and determines the effects of various physical functions, such as space organization, size, hierarchy, and thermal and optical comfort, on the performance of residential open spaces. The second part sets out to understand socio-cultural variations in values, lifestyle and beliefs which determine the activity choices and behavioral preferences of users of the respective residential open spaces. The third part further examines the application of these research findings to the design process, to derive meaningful and qualitative design advice.
The study also emphasizes developing a suitable framework of analysis and carving out appropriate methods and approaches for probing these aspects of the inquiry. Given this emphasis, a considerable portion of the research details the conceptual framework for the study, supported by an in-depth search of the available literature. The findings are worked into design guidance that integrates open-space systems with the overall design process for residential clusters. Open spaces in residential areas present great complexities, both in their use patterns and in the determinants of their functional responses. The broad aim of the study is, therefore, to arrive at a reconsideration of the standards and qualitative parameters used by designers, on the basis of a more substantial inquiry into the use patterns of open spaces in residential areas.

Keywords: open spaces, physical and social determinants, residential clusters, use patterns

Procedia PDF Downloads 148
260 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction is much more accurate for a single component than for an assembly. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs run nonlinear analysis in the time domain and treat the whole structure as nonlinear, even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications, and obtains the Nonlinear Frequency Response Functions (NLFRFs) through an ‘updating’ process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and the frequency domain. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems.
The first is a two-DOF spring-mass-damper system, and the second example takes into account a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
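The idea of "updating" a linear FRF with an amplitude-dependent modification can be illustrated on an even simpler case than the paper's two-DOF example: a single-DOF oscillator with a cubic spring, where a first-order harmonic-balance equivalent stiffness is iterated at each frequency. This is a generic sketch of the updating concept, not the NL SDMM formulation itself; all parameter values are assumptions.

```python
# Hedged sketch: nonlinear FRF of a 1-DOF oscillator with a cubic spring,
# obtained by iteratively updating an amplitude-dependent equivalent
# stiffness (first-order harmonic balance). Not the paper's NL SDMM.
import numpy as np

m, c, k = 1.0, 0.05, 1.0                   # mass, damping, linear stiffness
k3 = 0.5                                   # cubic (hardening) coefficient
F = 0.1                                    # harmonic force amplitude

def nl_frf(omega, iters=200):
    """Steady-state amplitude at omega, iterating on equivalent stiffness."""
    A = F / abs(k - m * omega**2 + 1j * c * omega)   # linear first guess
    for _ in range(iters):
        k_eq = k + 0.75 * k3 * A**2        # harmonic-balance update
        A_new = F / abs(k_eq - m * omega**2 + 1j * c * omega)
        A = 0.5 * A + 0.5 * A_new          # relaxed update for stability
    return A

freqs = np.linspace(0.5, 1.5, 201)
amps = np.array([nl_frf(w) for w in freqs])
print(f"peak amplitude {amps.max():.3f} at omega = {freqs[amps.argmax()]:.3f}")
```

The loop mirrors the paper's scheme in spirit: the underlying linear response is computed once per guess and then corrected by the localized nonlinearity, rather than time-integrating the full nonlinear system.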

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 266
259 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s, until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software packages offer statistical parameter inputs along with Monte-Carlo or Latin-hypercube sampling methods to calculate PF automatically. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure all influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’, or with allowances for some variability, rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others receive probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit.
A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF calculated using different methods can yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
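A Monte-Carlo PF calculation of the kind discussed above can be sketched in a few lines: sample the uncertain strength parameters, compute a factor of safety for each realization, and count the fraction below 1. The infinite-slope model, geometry and distributions below are illustrative assumptions, not values from the case study.

```python
# Illustrative Monte-Carlo probability-of-failure (PF) estimate for a
# simplified infinite-slope model. All inputs are assumed, not the
# Western Australia case-study values.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Uncertain inputs: cohesion (kPa) and friction angle (degrees).
cohesion = rng.normal(25.0, 5.0, n).clip(min=0.0)
phi = np.radians(rng.normal(35.0, 3.0, n))

# Deterministic geometry: unit weight (kN/m3), depth (m), slope angle.
gamma, depth, beta = 26.0, 10.0, np.radians(30.0)
sigma_n = gamma * depth * np.cos(beta) ** 2     # normal stress on plane
tau = gamma * depth * np.sin(beta) * np.cos(beta)   # driving shear stress

# Mohr-Coulomb factor of safety: resisting / driving.
fs = (cohesion + sigma_n * np.tan(phi)) / tau

pf = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.4f}")
```

The same loop with a different sampler (e.g. Latin hypercube) or a different FS model illustrates the paper's point: PF is only as meaningful as the choices of which inputs are treated probabilistically and how.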

Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability

Procedia PDF Downloads 208
258 Body Fluids Identification by Raman Spectroscopy and Matrix-Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry

Authors: Huixia Shi, Can Hu, Jun Zhu, Hongling Guo, Haiyan Li, Hongyan Du

Abstract:

The identification of human body fluids during forensic investigations is a critical step in determining key details and presenting strong evidence against the perpetrator in a case. With the popularity of DNA profiling and improved detection technology, several questions must still be resolved: whether a suspect’s DNA derives from saliva or semen, from menstrual or peripheral blood; how to establish that a red substance or an aged trace found at the scene is blood; and how to determine who contributed which fluid in a mixed stain. In recent years, molecular approaches based on mRNA, miRNA, DNA methylation and microbial markers have been developing rapidly, but they are expensive, time-consuming and destructive. Physicochemical methods such as scanning electron microscopy/energy-dispersive spectroscopy and X-ray fluorescence are used frequently, but their results reveal only one or two characteristics of the body fluid itself and fail for unknown or mixed body fluid stains. This paper focuses on using two chemical methods, Raman spectroscopy and matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry, to discriminate among peripheral blood, menstrual blood, semen, saliva, vaginal secretions, urine and sweat. Firstly, the non-destructive, confirmatory, convenient and fast Raman spectroscopy method, combined with the more accurate MALDI-TOF mass spectrometry method, can fully distinguish one body fluid from the others. Secondly, 11 spectral signatures and specific metabolic molecules were obtained from the analysis of 70 samples. Thirdly, the Raman results showed that peripheral and menstrual blood, and likewise saliva and vaginal secretions, have highly similar spectroscopic features; advanced statistical analysis of the multiple Raman spectra is required to classify one against the other.
On the other hand, lactic acid appears to differentiate peripheral from menstrual blood when detected by MALDI-TOF mass spectrometry, but it is not a specific metabolic molecule; more sensitive markers will be analyzed in a follow-up study. These results demonstrate the great potential of the developed chemical methods for forensic applications, although more work is needed for method validation.
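The "advanced statistical analysis" needed to separate spectroscopically similar fluids is typically a multivariate pipeline such as PCA followed by a discriminant classifier. The sketch below uses synthetic spectra with a deliberately subtle band difference; the band positions, noise level and sample counts are assumptions, not the study's data.

```python
# Sketch of a multivariate pipeline for near-identical spectra:
# PCA compression, then LDA. Synthetic stand-in data throughout.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Shared spectral backbone, so the two classes look alike by eye.
base = np.sin(np.linspace(0, 20, 100))
X = base + rng.normal(scale=0.3, size=(70, 100))
y = np.array([0] * 35 + [1] * 35)          # e.g. peripheral vs. menstrual
X[y == 1, 40:50] += 0.8                    # assumed subtle band difference

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
score = cross_val_score(model, X, y, cv=5).mean()
print(f"cross-validated accuracy: {score:.2f}")
```

PCA removes the shared backbone (it is a constant offset after centering) and concentrates the class-dependent variance into a few components that LDA can then separate.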

Keywords: body fluids, identification, Raman spectroscopy, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry

Procedia PDF Downloads 137
257 User Experience in Relation to Eye Tracking Behaviour in VR Gallery

Authors: Veslava Osinska, Adam Szalach, Dominik Piotrowski

Abstract:

Contemporary VR technologies allow users to explore virtual 3D spaces where they can work, socialize, learn, and play. A user's interaction with the GUI and the displayed pictures involves perceptual and also cognitive processes, which can be monitored thanks to neuroadaptive technologies. These modalities provide valuable information about the user's intentions, situational interpretations, and emotional states, so that an application or interface can be adapted accordingly. Virtual galleries outfitted with specialized assets were designed in the Unity engine for the BITSCOPE project, within the framework of the CHIST-ERA IV program. Users' interactions with gallery objects raise questions about their visual interest in artworks and styles; moreover, attention, curiosity, and other emotional states can be monitored and analyzed. Natural gaze behavior and eye position were recorded by the built-in eye-tracking module of an HTC Vive VR headset. The gaze results are grouped according to users' behavior patterns, and the corresponding perceptual-cognitive styles are recognized. In parallel, usability tests and surveys were used to identify the basic features of a user-centered interface for the virtual environments across most of the project timeline. A total of sixty participants were selected from distinct faculties of the University and from secondary schools. Participants' prior knowledge of art was evaluated in a pretest, characterizing their level of art sensitivity. Data were collected over two months. Each participant gave written informed consent before participation. In the data analysis, nonlinear algorithms such as multidimensional scaling and t-distributed Stochastic Neighbor Embedding (t-SNE) were used to reduce the high-dimensional data to a relatively low-dimensional subspace. In this way, digital art objects can be classified by the multimodal temporal characteristics of the eye-tracking measures, revealing signatures that describe selected artworks.
Current research establishes the optimal place on the aesthetic-utility scale, because the interfaces of most contemporary applications must be designed to be both functional and aesthetic. The study also analyzes the visual experience of subsamples of visitors differentiated, e.g., by frequency of museum visits and cultural interests. Eye-tracking data may also show how to better place artefacts and paintings, or increase their visibility where possible.
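The t-SNE reduction step mentioned above can be sketched as follows. The feature matrix here is a synthetic stand-in (the real one would hold per-participant gaze measures such as fixation counts and dwell times); the group structure and dimensions are assumptions.

```python
# Hedged sketch: embedding high-dimensional eye-tracking feature vectors
# with t-SNE (t-distributed Stochastic Neighbor Embedding).
# Synthetic features standing in for real gaze measures.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(5)

# 60 participants x 50 gaze features (fixation counts, dwell times, ...).
features = rng.normal(size=(60, 50))
features[:30] += 2.0                       # two hypothetical gaze-style groups

# Perplexity must stay below the number of samples; 15 suits n = 60.
embedding = TSNE(n_components=2, perplexity=15,
                 random_state=0).fit_transform(features)
print(embedding.shape)                     # one 2-D point per participant
```

The resulting 2-D points can then be scatter-plotted or clustered, which is where grouping by perceptual-cognitive style would happen.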

Keywords: eye tracking, VR, UX, visual art, virtual gallery, visual communication

Procedia PDF Downloads 42
256 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using the Discrete Element Method

Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare

Abstract:

The Discrete Element Method is a promising approach for modeling the microscopic behaviors of granular materials. The quality of the simulations, however, depends on the model parameters utilized. The present study focuses on the calibration and validation of discrete element parameters for Cuxhaven sand, based on experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted during the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters like rolling resistance, inter-particle friction coefficient, confining pressure and effective modulus on the void ratio of the generated sample was investigated. During the shear stage, the effects of parameters like inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient and normal-to-shear stiffness ratio were examined. The parameters were calibrated such that the simulations reproduce macro-mechanical characteristics like dilation angle, peak stress, and stiffness. The calibrated parameters were then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with the experiments, which proves the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters were applied to forecast micromechanical behavior, including the evolution of contact force chains, buckling of columns of particles, observation of non-coaxiality, and sample inhomogeneity, during a simple shear test. The evolution of contact force chains vividly shows the distribution and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, showing looser structures in the top and bottom layers.
Buckling of particle columns is not observed, owing to the small rolling resistance coefficient adopted in the simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus, the micromechanical behaviors are well described using the calibrated and validated material parameters.
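The calibration workflow described above amounts to a loop: run the simulation for candidate parameter sets, compare the macroscopic response to the laboratory targets, and keep the best fit. The sketch below makes that loop concrete; `simulate_triaxial` is a toy analytic stand-in for a real DEM run (e.g. in PFC or YADE), and all parameter ranges and target values are hypothetical.

```python
# Schematic DEM parameter-calibration loop. The `simulate_triaxial`
# function is a hypothetical analytic stand-in for a real DEM simulation,
# used only to make the loop runnable.
import itertools
import math

def simulate_triaxial(friction, rolling_res, eff_modulus):
    """Toy stand-in returning (peak_stress_kPa, dilation_angle_deg)."""
    peak = 300.0 * friction + 50.0 * rolling_res + 0.4 * eff_modulus
    dilation = 20.0 * friction + 30.0 * rolling_res
    return peak, dilation

# Assumed laboratory targets from the triaxial test.
target_peak, target_dilation = 260.0, 14.0

best, best_err = None, math.inf
for mu, rr, e in itertools.product([0.3, 0.5, 0.7],      # friction coeff.
                                   [0.05, 0.1, 0.2],     # rolling resistance
                                   [100.0, 200.0]):      # effective modulus
    peak, dil = simulate_triaxial(mu, rr, e)
    # Relative squared error against both macro-mechanical targets.
    err = ((peak - target_peak) / target_peak) ** 2 \
        + ((dil - target_dilation) / target_dilation) ** 2
    if err < best_err:
        best, best_err = (mu, rr, e), err

print(f"best parameters: friction={best[0]}, rolling={best[1]}, "
      f"modulus={best[2]}, error={best_err:.4f}")
```

In practice each call would launch a full DEM triaxial simulation, so the grid is usually replaced by a sensitivity-guided or adaptive search, but the compare-against-macro-targets structure is the same.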

Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test

Procedia PDF Downloads 120
255 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps

Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo

Abstract:

With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on a CPU, issues GL commands, which render the images displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of the issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already deteriorated by the retarded execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach raises two technical challenges: identification of the bottleneck resource, and determination of the appropriate performance level for the unsaturated resource. The proposed scheme uses the CPU utilization level of the Window Manager to tell which resource, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU; it is thus on the critical path that determines the quality of the user experience, and it is executed purely by the CPU.
The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measured frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%, and consequently the proposed scheme decreases the performance level of the CPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the GPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average compared to the conventional Linux kernel, while worsening FPS by only 1.07% on average.
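The decision rule described above can be sketched as a small state-free function: smooth the Window Manager's CPU utilization with a weighted average, then pick the frequency-step action for each utilization region. The thresholds and actions mirror the abstract; the exponential weighting, its coefficient, and the simulated utilization feed are assumptions.

```python
# Runnable sketch of the reported decision rule. Thresholds (90%, 60%)
# and actions follow the abstract; the smoothing coefficient and the
# utilization samples are illustrative assumptions.
def weighted_avg(samples, alpha=0.3):
    """Exponentially weighted average to damp excessive fluctuation."""
    avg = samples[0]
    for s in samples[1:]:
        avg = alpha * s + (1 - alpha) * avg
    return avg

def next_action(wm_utilization_history):
    """Choose a frequency step from smoothed Window Manager utilization."""
    u = weighted_avg(wm_utilization_history)
    if u > 90.0:                 # CPU capability decides user experience
        return "step CPU down"
    if u < 60.0:                 # bottleneck usually lies in the GPU
        return "step GPU down"
    return "hold levels"         # mixed region: keep current levels

print(next_action([95, 96, 94, 97]))
print(next_action([45, 50, 42, 47]))
```

A real governor would call this periodically and step one level at a time, matching the gradual frequency lowering the abstract describes.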

Keywords: interactive applications, power management, QoS, Web apps, WebGL

Procedia PDF Downloads 192
254 Multi-Objective Optimization of the Thermal-Hydraulic Behavior for a Sodium Fast Reactor with a Gas Power Conversion System and a Loss of off-Site Power Simulation

Authors: Avent Grange, Frederic Bertrand, Jean-Baptiste Droin, Amandine Marrel, Jean-Henry Ferrasse, Olivier Boutin

Abstract:

CEA and its industrial partners are designing a gas Power Conversion System (PCS) based on a Brayton cycle for the ASTRID Sodium-cooled Fast Reactor. Investigations of the control and regulation requirements to operate this PCS during operating, incidental and accidental transients are necessary to adapt core heat removal. To this aim, we developed a methodology to optimize the thermal-hydraulic behavior of the reactor during normal operations, incidents and accidents. This methodology consists of a multi-objective optimization for a specific sequence, whose aim is to increase component lifetime by simultaneously reducing several thermal stresses, and to bring the reactor into a stable state. Furthermore, the multi-objective optimization complies with safety and operating constraints. Operating, incidental and accidental sequences use specific regulations to control the thermal-hydraulic behavior of the reactor, each defined by a setpoint, a controller and an actuator. In the multi-objective problem, the parameters used to solve the optimization are the setpoints and the settings of the controllers associated with the regulations included in the sequence. In this way, the methodology allows designers to define an optimized, sequence-specific control strategy for the plant and hence to tune PCS piloting at its best. The multi-objective optimization is performed by evolutionary algorithms coupled with surrogate models built on variables computed by the thermal-hydraulic system code CATHARE2. The methodology is applied to a loss of off-site power sequence. Three variables are controlled: the sodium outlet temperature of the sodium-gas heat exchanger, the turbomachine rotational speed and the water flow through the heat sink. These regulations are chosen in order to minimize thermal stresses on the gas-gas heat exchanger, on the sodium-gas heat exchanger and on the vessel. The main results of this work are optimal setpoints for the three regulations.
Moreover, Proportional-Integral-Derivative (PID) controller settings are considered, and efficient actuators for the controls are chosen based on sensitivity analysis results. Finally, the optimized regulation system and the reactor control procedure provided by the optimization process are verified through a direct CATHARE2 calculation.
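The coupling of an evolutionary algorithm with cheap surrogates can be sketched as follows. The two quadratic "stress" functions stand in for the CATHARE2-based surrogate models, and every parameter (population size, mutation width, setpoint bounds) is an illustrative assumption, not the study's configuration.

```python
import random

# Stand-ins for the surrogate models: two hypothetical thermal-stress
# objectives over a normalized setpoint vector x in [0, 1]^2.
def stress_exchanger(x):
    return (x[0] - 0.3) ** 2 + 0.1 * x[1]

def stress_vessel(x):
    return (x[1] - 0.7) ** 2 + 0.1 * x[0]

def dominates(a, b):
    # Pareto dominance: a is no worse in every objective and better in one.
    return all(p <= q for p, q in zip(a, b)) and any(p < q for p, q in zip(a, b))

def evolve(n_gen=50, pop_size=20, sigma=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random(), rng.random()] for _ in range(pop_size)]
    for _ in range(n_gen):
        # Gaussian mutation, then elitist survival of the non-dominated front.
        children = [[min(1.0, max(0.0, xi + rng.gauss(0, sigma))) for xi in x]
                    for x in pop]
        union = pop + children
        objs = [(stress_exchanger(x), stress_vessel(x)) for x in union]
        front = [x for x, fx in zip(union, objs)
                 if not any(dominates(fy, fx) for fy in objs)]
        pop = (front * pop_size)[:pop_size]   # refill population from the front
    return pop

pareto = evolve()   # candidate trade-offs between the two stress objectives
```

Each returned setpoint vector is a non-dominated compromise between the two stress objectives, which is the role the optimal setpoints play in the study.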

Keywords: gas power conversion system, loss of off-site power, multi-objective optimization, regulation, sodium fast reactor, surrogate model

Procedia PDF Downloads 308
253 Transport Mode Selection under Lead Time Variability and Emissions Constraint

Authors: Chiranjit Das, Sanjay Jharkharia

Abstract:

This study is focused on transport mode selection under lead time variability and an emissions constraint. To reduce the carbon emissions generated by transportation, organizations often face a dilemma in transport mode selection, since logistics cost and emissions reduction conflict with each other. Another important aspect of transportation decisions is lead time variability, which is rarely considered in the transport mode selection problem. Thus, in this study, we provide a comprehensive mathematical analytical model to decide transport mode selection under an emissions constraint. We also extend our work by analysing the effect of lead time variability on transport mode selection through a sensitivity analysis. To account for lead time variability in the model, two identically normally distributed random variables are incorporated: unit lead time variability and lead time demand variability. Therefore, in this study, we address the following questions: How will transport mode selection decisions be affected by lead time variability? How will lead time variability impact total supply chain cost under carbon emissions? To accomplish these objectives, a total transportation cost function is developed, including unit purchasing cost, unit transportation cost, emissions cost, holding cost during lead time, and penalty cost for stockouts due to lead time variability. A set of modes is available for transport between nodes; in this paper, we consider only four transport modes: air, road, rail, and water. The transportation cost, distance, and emissions level of each transport mode are considered deterministic and static. Each mode has a different emissions level depending on the distance and product characteristics.
Emissions cost is indirectly affected by lead time variability if there is any switching from a lower-emissions transport mode to a higher-emissions one in order to reduce penalty cost. We provide a numerical analysis to study the effectiveness of the mathematical model. We found that the chance of a stockout during lead time is higher due to higher variability of lead time and lead time demand. Numerical results show that the penalty cost of the air transport mode is negative, meaning that the chance of a stockout is zero, but it has higher holding and emissions costs. Therefore, the air transport mode is selected only when there is an emergency order to reduce penalty cost; otherwise, rail and road transport are the most preferred modes of transportation. Thus, this paper contributes to the literature with a novel approach to deciding transport mode under emissions cost and lead time variability. This model can be extended by studying the effect of lead time variability under other strategic transportation issues such as the modal split option, the full truck load strategy, and the demand consolidation strategy.
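The cost structure described above, with normally distributed lead-time demand driving the stockout penalty, can be sketched numerically. The function and parameter names, and all numbers in the comparison, are illustrative assumptions rather than the authors' notation or data.

```python
import math

def phi(z):   # standard normal pdf
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def unit_normal_loss(z):
    # E[(Z - z)+] for standard normal Z: expected shortage per unit sigma.
    return phi(z) - z * (1 - Phi(z))

def mode_cost(demand, price, freight, co2_per_unit, co2_price,
              hold_rate, penalty, mu_l, sigma_l, safety):
    # mu_l, sigma_l: mean and std dev of normally distributed lead-time demand.
    # safety: safety stock held above mu_l.
    expected_short = sigma_l * unit_normal_loss(safety / sigma_l)
    return (demand * price                        # purchasing
            + demand * freight                    # transportation
            + demand * co2_per_unit * co2_price   # emissions cost
            + hold_rate * (mu_l / 2 + safety)     # holding during lead time
            + penalty * expected_short)           # stockout penalty

# Fast/high-emissions mode vs. slow/cheap mode (made-up numbers):
air = mode_cost(1000, 10, 1.2, 1.1, 0.03, 2.0, 50, mu_l=100, sigma_l=10, safety=20)
rail = mode_cost(1000, 10, 0.4, 0.3, 0.03, 2.0, 50, mu_l=400, sigma_l=80, safety=20)
print(air, rail)
```

Because the expected shortage grows with sigma, raising lead-time variability inflates the stockout penalty term and can flip the preferred mode, which is exactly the sensitivity the study examines.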

Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection

Procedia PDF Downloads 434
252 New Derivatives 7-(diethylamino)quinolin-2-(1H)-one Based Chalcone Colorimetric Probes for Detection of Bisulfite Anion in Cationic Micellar Media

Authors: Guillermo E. Quintero, Edwin G. Perez, Oriel Sanchez, Christian Espinosa-Bustos, Denis Fuentealba, Margarita E. Aliaga

Abstract:

The bisulfite ion (HSO3-) is used as a preservative in food, drinks, and medication. However, it is well known that HSO3- can cause health problems such as asthma and allergic reactions. For this reason, the development of analytical methods for detecting this ion has gained great interest. In line with the above, the use of colorimetric and/or fluorescent probes as a detection technique has acquired great relevance due to their high sensitivity and accuracy. In this context, 2-quinolinone derivatives have been found to possess promising activity as antiviral agents, sensitizers in solar cells, antifungals, antioxidants, and sensors. In particular, 7-(diethylamino)-2-quinolinone derivatives have attracted attention in recent years, since their suitable photophysical properties make them promising fluorescent probes. In addition, there is evidence that photophysical properties and reactivity can be affected by the study medium, such as micellar media. Based on this background, chalcone-based 7-(diethylamino)-2-quinolinone derivatives should be able to incorporate into a cationic micellar environment (cetyltrimethylammonium bromide, CTAB). Furthermore, the supramolecular control induced by the micellar environment should increase the reactivity of these derivatives towards nucleophilic analytes such as HSO3- (Michael-type addition reaction), leading to new colorimetric and/or fluorescent probes. In the present study, two chalcone-based 7-(diethylamino)-2-quinolinone derivatives, DQD1-2, were synthesized according to methods reported in the literature. These derivatives were structurally characterized by 1H and 13C NMR and HRMS-ESI. In addition, UV-VIS and fluorescence studies determined absorption bands near 450 nm, emission bands near 600 nm, fluorescence quantum yields near 0.01, and fluorescence lifetimes of 5 ps.
These photophysical properties were improved in the presence of a cationic micellar medium (CTAB) thanks to the formation of adducts with association constants of the order of 2.5×10⁵ M⁻¹, which increased the quantum yields to 0.12 and lengthened the fluorescence lifetimes to two components near 120 and 400 ps for DQD1 and DQD2. Moreover, the micellar medium increased the reactivity of these derivatives towards nucleophilic analytes such as HSO3-: kinetic studies demonstrated an increase in the bimolecular rate constants in the presence of the micellar medium. Finally, probe DQD1 was chosen as the best sensor, since it detected HSO3- with excellent results.

Keywords: bisulfite detection, cationic micelle, colorimetric probes, quinolinone derivatives

Procedia PDF Downloads 94
251 Nondestructive Inspection of Reagents under High Attenuated Cardboard Box Using Injection-Seeded THz-Wave Parametric Generator

Authors: Shin Yoneda, Mikiya Kato, Kosuke Murate, Kodo Kawase

Abstract:

In recent years, there have been numerous attempts to smuggle narcotic drugs and chemicals by concealing them in international mail. Combatting this requires a non-destructive technique that can identify such illicit substances in mail. Terahertz (THz) waves can pass through a wide variety of materials, and many chemicals show specific frequency-dependent absorption, known as a spectral fingerprint, in the THz range. Therefore, it is reasonable to investigate non-destructive mail inspection techniques that use THz waves. For this reason, in this work, we tried to identify reagents under highly attenuating shielding materials using an injection-seeded THz-wave parametric generator (is-TPG). Our THz spectroscopic imaging system using the is-TPG consisted of two nonlinear crystals for emission and detection of THz waves. A micro-chip Nd:YAG laser and a continuous-wave tunable external cavity diode laser were used as the pump and seed sources, respectively. The pump and seed beams were injected into the LiNbO₃ crystal, satisfying the noncollinear phase-matching condition, in order to generate a high-power THz wave. The emitted THz wave irradiated the sample, which was raster-scanned by the x-z stage while the frequency was changed, and we obtained multispectral images. The transmitted THz wave was then focused onto another crystal for detection and up-converted to a near-infrared detection beam based on nonlinear optical parametric effects, and the detection beam intensity was measured using an infrared pyroelectric detector. It was difficult to identify reagents in a cardboard box because of high noise levels. In this work, we introduce improvements for noise reduction and image clarification, so that the intensity of the near-infrared detection beam is converted correctly to the intensity of the THz wave. A Gaussian spatial filter is also introduced for a clearer THz image.
Through these improvements to the analysis methods, we succeeded in identifying reagents hidden in a 42-mm-thick cardboard box filled with several obstacles, which attenuate the signal by 56 dB at 1.3 THz. Using this system, THz spectroscopic imaging was possible for saccharides and may also be applied to cases where illicit drugs are hidden in the box and multiple reagents are mixed together. Moreover, THz spectroscopic imaging could be achieved through even thicker obstacles by introducing an NIR detector with higher sensitivity.
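The Gaussian spatial filtering step can be illustrated on a synthetic raster-scan frame. The array size, noise level, region coordinates, and sigma are all assumed values for demonstration, not the system's actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic raster-scan frame: a spectral-fingerprint absorption feature on a
# noisy background. For scale, the 56 dB attenuation quoted above transmits
# only 10**(-5.6) ~ 2.5e-6 of the THz power, hence the poor raw SNR.
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 0.2, size=(64, 64))
frame[20:40, 25:45] += 1.0                    # concealed reagent region

smoothed = gaussian_filter(frame, sigma=2.0)  # Gaussian spatial filter

def snr(img):
    # Feature mean (inside the reagent region) over background noise.
    return img[25:35, 30:40].mean() / img[:10, :10].std()

print(snr(frame), snr(smoothed))              # smoothing raises the SNR
```

Smoothing trades spatial resolution for noise suppression, which is why it clarifies the THz image without changing the underlying spectral information.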

Keywords: nondestructive inspection, principal component analysis, terahertz parametric source, THz spectroscopic imaging

Procedia PDF Downloads 177
250 Queer Social Realism and Architecture in British Cinema: Tenement Housing, Unions and the Affective Body

Authors: Christopher Pullen

Abstract:

This paper explores the significance of British cinema in the late 1950s and early 1960s as offering a renaissance of realist discourse in the representation of everyday social issues. Offering a rejection of Hollywood cinema and the superficiality of the middle classes, these 'kitchen sink dramas', often set within modest and sometimes squalid domestic and social environments, focused on the political struggle of the disenfranchised, examining poverty, the oppressed and the outsider. While films like Look Back in Anger and Room at the Top looked primarily at male heterosexual subjectivity, films like A Taste of Honey and Victim focused on female and queer male narratives. Framing the urban landscape as a discursive architectural arena, representing basic living conditions and threatening social worlds, these iconic films established new storytelling processes for the outsider. This paper examines this historical context, foregrounding the contemporary films Beautiful Thing (Hettie Macdonald, 1996), Weekend (Andrew Haigh, 2011) and Pride (Matthew Warchus, 2014), while employing textual analysis in relation to theories of affect defined by writers such as Laura U. Marks and Sara Ahmed. Considering both romance narratives and public demonstrations of unity, where the queer 'affective' body is placed within architectural and social space: Beautiful Thing tells the story of gay male teenagers falling in love despite oppression from family and school; Weekend examines a one-night stand between young gay men and the unlikeliness of commitment, but the drive for sensitivity; and Pride foregrounds the historical relationship between queer youth activists and the miners' union, which was on strike between 1984 and 1985.
These films frame the queer 'affective' body within politicized public space, evident in working-class men's clubs, tenement housing and brutal modernist tower blocks, focusing on architectural features such as windows, doorways and staircases, relating temporality, desire and change. Through such an examination, a hidden history of gay male performativity is revealed, framing the potential of contemporary cinema to focus on the outsider and encourage social change.

Keywords: queer, affect, cinema, architecture, life chances

Procedia PDF Downloads 358
249 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction of silent AF for the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF had a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and p=0.035, respectively). The incidence in white transplant patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one pre-transplant AI-ECG screen for AF above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest.
When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated pre-transplant screen (n=1084, on account of n=20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
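The threshold-based screening metrics quoted above (sensitivity and negative predictive value at a probability cutoff) can be computed as follows; the probabilities and labels below are synthetic, not the cohort's data.

```python
# Screening metrics at a fixed probability threshold (0.08, as in the study).
def screen_metrics(probs, labels, threshold=0.08):
    tp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y)
    fn = sum(1 for p, y in zip(probs, labels) if p < threshold and y)
    tn = sum(1 for p, y in zip(probs, labels) if p < threshold and not y)
    fp = sum(1 for p, y in zip(probs, labels) if p >= threshold and not y)
    sensitivity = tp / (tp + fn)   # fraction of true POAF cases flagged
    npv = tn / (tn + fn)           # reliability of a negative screen
    return sensitivity, npv

probs = [0.02, 0.05, 0.09, 0.30, 0.01, 0.12, 0.04, 0.07]
labels = [0, 1, 1, 1, 0, 1, 0, 0]
print(screen_metrics(probs, labels))   # -> (0.75, 0.8)
```

A high NPV with modest sensitivity, as reported, means a negative screen is trustworthy even though some true cases slip below the threshold.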

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 133
248 Identifying Large-Scale Photovoltaic and Concentrated Solar Power Hot Spots: Multi-Criteria Decision-Making Framework

Authors: Ayat-Allah Bouramdane

Abstract:

Solar Photovoltaic (PV) and Concentrated Solar Power (CSP) do not burn fossil fuels and release no greenhouse gases while generating electricity, and could therefore meet the world's needs for low-carbon power generation. The power output of a solar PV module or CSP collector depends on its temperature and the amount of solar radiation received by its surface. Hence, determining the most convenient locations for PV and CSP systems is crucial to maximizing their output power. This study aims to provide a hands-on and plausible approach to multi-criteria evaluation of the site suitability of PV and CSP plants using a combination of Geographic Referenced Information (GRI) and the Analytic Hierarchy Process (AHP). Applying the GRI-based AHP approach means specifying the criteria and sub-criteria; identifying the unsuitable, low-, moderately, highly and very highly suitable areas for each GRI layer; performing the pairwise comparison matrix at each level of the hierarchy based on expert knowledge; and calculating the weights using AHP to create the final suitability map of solar PV and CSP plants in Morocco, with a particular focus on the city of Dakhla. The results confirm that solar irradiation is the main decision factor for integrating these technologies into Morocco's energy policy goals, but they explicitly account for other factors that can not only limit the potential of certain locations but even exclude Dakhla, classified as an unsuitable area. We discuss the sensitivity of PV and CSP site suitability to different aspects, such as the methodology, the climate conditions, and the technology used for each source, and provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's actual PV and CSP installations are located within areas deemed suitable and by discussing several cases that provide mutual benefits across the Food-Energy-Water nexus.
The adopted methodology and resulting suitability map could be used by researchers or engineers to provide helpful information for decision-makers in terms of site selection, design, and planning of future solar plants, especially in areas suffering from energy shortages such as Dakhla, which is now one of Africa's most promising investment hubs, especially attractive to investors looking to root their operations in Africa and export to European markets.
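The AHP weighting step described above can be sketched for a small example: weights are the normalized principal eigenvector of the pairwise comparison matrix, checked with a consistency ratio. The three criteria and the 1-9 scale judgments below are illustrative assumptions, not the study's actual matrix.

```python
import numpy as np

# Assumed 3-criterion pairwise comparison matrix (e.g., irradiation vs.
# slope vs. distance to grid), using Saaty's 1-9 judgment scale.
A = np.array([[1.0, 5.0, 7.0],
              [1/5, 1.0, 3.0],
              [1/7, 1/3, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)            # principal eigenvalue lambda_max
w = vecs[:, k].real
w = w / w.sum()                     # normalized criterion weights

n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.58                      # Saaty's random index for n = 3
print(w, cr)                        # judgments are acceptable if CR < 0.1
```

The resulting weights multiply the per-layer suitability scores of each GRI layer before they are combined into the final map.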

Keywords: analytic hierarchy process, concentrated solar power, dakhla, geographic referenced information, Morocco, multi-criteria decision-making, photovoltaic, site suitability

Procedia PDF Downloads 173
247 Synthesis of MIPs towards Precursors and Intermediates of Illicit Drugs and Their following Application in Sensing Unit

Authors: K. Graniczkowska, N. Beloglazova, S. De Saeger

Abstract:

The threat of synthetic drugs is one of the most significant current drug problems worldwide. The use of drugs of abuse has increased dramatically during the past three decades. Among others, amphetamine-type stimulants (ATS) are globally the second most widely used drugs after cannabis, exceeding the use of cocaine and heroin. ATS are potent central nervous system (CNS) stimulants, capable of inducing a euphoric state similar to cocaine. Recreational use of ATS is widespread, even though warnings of irreversible damage to the CNS have been reported. ATS pose a big problem, and their production contributes to pollution of the environment by discharging large volumes of liquid waste into the sewage system. Therefore, there is a demand for robust and sensitive sensors that can detect ATS and their intermediates in environmental water samples. A rapid and simple test is required. Antibody-based tests cannot be applied to environmental water samples, which can sometimes be a harsh environment. Therefore, molecularly imprinted polymers (MIPs), known as synthetic antibodies, were chosen for this approach. MIPs are characterized by high mechanical and thermal stability and show chemical resistance in a broad pH range and in various organic or aqueous solvents. These properties make them the preferred type of receptor for application in the harsh conditions imposed by environmental samples. To the best of our knowledge, there are no existing MIP-based sensors for amphetamine and its intermediates, and not many commercial MIPs for this application are available. Therefore, the aim of this study was to compare different techniques to obtain MIPs with high specificity towards ATS and to characterize them for subsequent use in a sensing unit. MIPs against amphetamine and its intermediates were synthesized using several techniques, such as electro-, thermo- and UV-initiated polymerization.
Different monomers, cross-linkers and initiators, in various ratios, were tested to obtain the best sensitivity and polymer properties. Subsequently, specificity and selectivity were compared with commercially available MIPs against amphetamine. Different linkers, such as lipoic acid, 3-mercaptopropionic acid and tyramine, were examined in combination with several immobilization techniques to select the best procedure for attaching particles to the sensor surface. The performed experiments allowed an optimal method to be chosen for the intended sensor application. The stability of the MIPs in extreme conditions, such as highly acidic or basic media, was determined. The obtained results support the applicability of a MIP-based sensor for sewage system testing.

Keywords: amphetamine type stimulants, environment, molecular imprinted polymers, MIPs, sensor

Procedia PDF Downloads 250
246 Chronic Impact of Silver Nanoparticle on Aerobic Wastewater Biofilm

Authors: Sanaz Alizadeh, Yves Comeau, Arshath Abdul Rahim, Sunhasis Ghoshal

Abstract:

The application of silver nanoparticles (AgNPs) in personal care products and various household and industrial products has resulted in an inevitable environmental exposure to such engineered nanoparticles (ENPs). Ag ENPs, released via household and industrial wastes, reach water resource recovery facilities (WRRFs), yet the fate and transport of ENPs in WRRFs and their potential risk to biological wastewater processes are poorly understood. Accordingly, our main objective was to elucidate the impact of long-term continuous exposure to AgNPs on the biological activity of aerobic wastewater biofilm. The fate, transport and toxicity of 10 μg/L and 100 μg/L PVP-stabilized AgNPs (50 nm) were evaluated in an attached-growth biological treatment process using lab-scale moving bed bioreactors (MBBRs). Two MBBR systems for organic matter removal were fed with a synthetic influent and operated at a hydraulic retention time (HRT) of 180 min and a 60% volumetric filling ratio of Anox-K5 carriers with a specific surface area of 800 m²/m³. Both reactors were operated for 85 days after reaching steady-state conditions to develop a mature biofilm. The impact of AgNPs on the biological performance of the MBBRs was characterized over a period of 64 days in terms of the filtered biodegradable COD (SCOD) removal efficiency, biofilm viability and key enzymatic activities (α-glucosidase and protease). The AgNPs were quantitatively characterized using single-particle inductively coupled plasma mass spectrometry (spICP-MS), determining simultaneously the particle size distribution, particle concentration and dissolved silver content in influent, bioreactor and effluent samples. The generation of reactive oxygen species and the resulting oxidative stress were assessed as the proposed toxicity mechanism of AgNPs.
Results indicated that a low concentration of AgNPs (10 μg/L) did not significantly affect the SCOD removal efficiency, whereas a significant reduction in treatment efficiency (37%) was observed at 100 μg/L AgNPs. Neither the viability nor the enzymatic activities of the biofilm were affected at 10 μg/L AgNPs, but the higher concentration of AgNPs induced cell membrane integrity damage, resulting in a 31% loss of viability and reduced α-glucosidase and protease enzymatic activities, by 31% and 29% respectively, over the 64-day exposure period. The elevated intracellular ROS in the biofilm at the higher AgNPs concentration over time was consistent with reduced biological biofilm performance, confirming the occurrence of nanoparticle-induced oxidative stress in the heterotrophic biofilm. The spICP-MS analysis demonstrated a decrease in nanoparticle concentration over the first 25 days, indicating significant partitioning of AgNPs into the biofilm matrix in both reactors. However, the nanoparticle concentration in the effluent of both reactors increased after 25 days, indicating a decreased retention capacity of AgNPs in the biofilm. The observed significant detachment of biofilm also contributed to a higher release of nanoparticles, due to the cell-wall-destabilizing properties of AgNPs as an antimicrobial agent. The removal efficiency of PVP-AgNPs and the biological responses of the biofilm were a function of nanoparticle concentration and exposure time. This study contributes to a better understanding of the fate and behavior of AgNPs in biological wastewater processes, providing key information that can be used to predict the environmental risks of ENPs in aquatic ecosystems.
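The spICP-MS sizing step relies on a simple relation: each pulse is converted to a particle mass via a calibration sensitivity, and the mass to a diameter assuming a spherical particle of bulk silver density. The detector sensitivity below is an assumed calibration value, not one measured in the study.

```python
import math

# Convert one spICP-MS pulse to an equivalent spherical diameter.
def pulse_to_diameter_nm(pulse_counts, sens_counts_per_fg, density_g_cm3=10.49):
    mass_g = (pulse_counts / sens_counts_per_fg) * 1e-15   # fg of Ag -> g
    volume_cm3 = mass_g / density_g_cm3                    # bulk silver density
    d_cm = (6 * volume_cm3 / math.pi) ** (1 / 3)           # sphere: V = pi d^3 / 6
    return d_cm * 1e7                                      # cm -> nm

# With the assumed sensitivity, ~1373 counts corresponds to a 50 nm particle,
# the nominal size of the PVP-AgNPs studied here:
print(round(pulse_to_diameter_nm(1373, 2000), 1))          # -> 50.0
```

Histogramming the diameters of many pulses yields the particle size distribution reported alongside particle number concentration and dissolved silver.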

Keywords: biofilm, silver nanoparticle, single particle ICP-MS, toxicity, wastewater

Procedia PDF Downloads 268
245 Auditory Perception of Frequency-Modulated Sweeps and Reading Difficulties in Chinese

Authors: Hsiao-Lan Wang, Chun-Han Chiang, I-Chen Chen

Abstract:

In Mandarin Chinese, lexical tones play an important role in providing contrasts in word meaning. They are pitch patterns and can be quantified by the fundamental frequency (F0), expressed in Hertz (Hz). In this study, we aim to investigate the influence of frequency discrimination on Chinese children's reading abilities. Fifty participants from the 3rd and 4th grades, including 24 children with reading difficulties and 26 age-matched children, were examined. A series of cognitive, language, reading and psychoacoustic tests was administered. Magnetoencephalography (MEG) was also employed to study the children's auditory sensitivity. In the present study, auditory frequency perception was measured through slide-up pitch, slide-down pitch and frequency-modulated tones. The results showed that children with Chinese reading difficulties were significantly poorer at phonological awareness and at auditory discrimination in the identification of frequency-modulated tones. Chinese children's character reading performance was significantly related to lexical tone awareness and auditory perception of frequency-modulated tones. In our MEG measure, we compared the magnetic mismatch negativity (MMNm), from 100 to 200 ms, between the two groups. There were no significant differences between groups during the perceptual discrimination of standard sounds and fast-up and fast-down frequencies. However, the data revealed significant cluster differences between groups in the discrimination of slow-up and slow-down frequencies. For the slow-up stimulus, the cluster demonstrated an upward field map at 106-151 ms (p < .001) with a strong peak at 127 ms. Source analyses using a two-dipole model and the CLARA localization model from 100 to 200 ms both indicated a strong source in the left temporal area with 45.845% residual variance.
Similar results were found for the slow-down stimulus, with a larger upward current at 110-142 ms (p < 0.05) and a peak at 117 ms in the left temporal area (47.857% residual variance). In short, we found a significant group difference in the MMNm while children processed frequency-modulated tones with slow temporal changes. The findings may imply that perception of sound frequency signals with slower temporal modulations is related to reading and language development in Chinese. Our study may also support the recent hypothesis that underlying non-verbal auditory temporal deficits account for the difficulties in literacy development seen in developmental dyslexia.

Keywords: Chinese Mandarin, frequency modulation sweeps, magnetoencephalography, mismatch negativity, reading difficulties

Procedia PDF Downloads 576
244 Investigations on Pyrolysis Model for Radiatively Dominant Diesel Pool Fire Using Fire Dynamic Simulator

Authors: Siva K. Bathina, Sudheer Siddapureddy

Abstract:

Pool fires form when a flammable liquid accidentally spills on the ground or water and ignites. A pool fire is a buoyancy-driven diffusion flame. Many pool fire accidents have occurred during the processing, handling and storage of liquid fuels in the chemical and oil industries. Such accidents cause enormous damage to property as well as loss of lives. Pool fires are complex in nature due to the strong interaction among combustion, heat and mass transfer, and pyrolysis at the fuel surface. Moreover, the experimental study of such large complex fires involves fire safety issues and difficulties in performing experiments. In the present work, large eddy simulations are performed to study such complex fire scenarios using the Fire Dynamics Simulator. A 1 m diesel pool fire is considered for the studied cases; diesel is chosen as it is the fuel most commonly involved in fire accidents. Fire simulations are performed with two different boundary conditions: in one, the fuel is in the liquid state and a pyrolysis model is invoked; in the other, the fuel is assumed to be initially in the vapor state and the mass loss rate is prescribed. A domain of size 11.2 m × 11.2 m × 7.28 m with a uniform structured grid is chosen for the numerical simulations. A grid sensitivity analysis is performed, and a non-dimensional grid size of 12, corresponding to an 8 cm grid size, is selected. Flame properties like mass burning rate, irradiance, and the time-averaged axial flame temperature profile are predicted. The predicted steady-state mass burning rate is 40 g/s and is within the uncertainty limits of previously reported experimental data (39.4 g/s). The profile of the irradiance along the height at a distance from the fire is somewhat in line with the experimental data, though the location of the maximum irradiance is shifted upward.
This may be due to the lack of sophisticated models for species transport along with combustion and radiation in the continuous zone. Furthermore, the axial temperatures are not predicted well in any of the zones, for either boundary condition. The present study shows that the existing models are not sufficient for modeling blended fuels like diesel. The predictions depend strongly on the experimental values of the soot yield, and future experiments are necessary for generalizing the soot yield for different fires.
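The non-dimensional grid size used in the grid sensitivity analysis is conventionally the ratio D*/δx, where D* = (Q̇ / (ρ∞ cₚ T∞ √g))^(2/5) is the characteristic fire diameter. A minimal sketch of this calculation, assuming ambient air properties and an effective heat of combustion of 44.4 MJ/kg for diesel (neither value is given in the abstract, so the resulting ratio is only indicative):

```python
# Characteristic fire diameter D* and non-dimensional grid size D*/dx,
# as commonly used for grid sensitivity checks in fire simulations.
# Assumed inputs (not from the abstract): ambient air at ~20 C,
# diesel effective heat of combustion ~44.4 MJ/kg.

def characteristic_diameter(q_dot_kw, rho=1.204, cp=1.005, t_inf=293.0, g=9.81):
    """D* = (Q / (rho * cp * T_inf * sqrt(g)))**(2/5), with Q in kW."""
    return (q_dot_kw / (rho * cp * t_inf * g ** 0.5)) ** 0.4

m_dot = 0.040            # kg/s, steady-state burning rate reported in the abstract
delta_hc = 44_400.0      # kJ/kg, assumed effective heat of combustion of diesel
q_dot = m_dot * delta_hc # heat release rate, ~1776 kW

d_star = characteristic_diameter(q_dot)
dx = 0.08                # 8 cm grid size from the abstract
print(f"D* = {d_star:.2f} m, D*/dx = {d_star / dx:.1f}")
```

With these assumed properties the ratio comes out somewhat above the value of 12 quoted in the abstract; the exact figure depends on the effective heat of combustion and ambient conditions used.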

Keywords: burning rate, fire accidents, fire dynamic simulator, pyrolysis

Procedia PDF Downloads 196
243 Status of Sensory Profile Score among Children with Autism in Selected Centers of Dhaka City

Authors: Nupur A. D., Miah M. S., Moniruzzaman S. K.

Abstract:

Autism is a neurobiological disorder that affects the physical, social, and language skills of a person. A child with autism has difficulty processing, integrating, and responding to sensory stimuli. Current estimates show that 45% to 96% of children with Autism Spectrum Disorder demonstrate sensory difficulties. As autism is a pressing issue worldwide, related services have become a high priority in Bangladesh. Sensory deficits hamper not only a child's normal development but also the learning process and functional independence. The purpose of this study was to determine the prevalence of sensory dysfunction among children with autism and to recognize common patterns of sensory dysfunction. A cross-sectional design was chosen for this research. The study enrolled eighty children with autism and their parents using systematic sampling. Data were collected with the Short Sensory Profile (SSP), a 38-item questionnaire; qualified graduate occupational therapists interviewed parents and observed the children's responses to sensory-related activities at four selected autism centers in Dhaka, Bangladesh. Item analyses were conducted to identify the items yielding the highest reported sensory processing dysfunction, using the SSP and the Statistical Package for the Social Sciences (SPSS) version 21.0 for data analysis. This study revealed that almost 78.25% of children with autism had significant sensory processing dysfunction based on their sensory responses to the relevant activities. Under-responsiveness/sensory seeking and auditory filtering were the least common problems among them.
On the other hand, most of them (95%) showed definite-to-probable differences in sensory processing, including under-responsiveness/sensory seeking, auditory filtering, and tactile sensitivity. The results also show that 64 of the children had a definite difference in sensory processing; these children suffered from sensory difficulties that had a great impact on their Activities of Daily Living (ADLs) as well as their social interaction with others. Almost 95% of the children with autism require intervention to overcome or normalize the problem. The results give insight into the types of sensory processing dysfunction to consider during diagnosis and when determining treatment. Early identification of sensory problems is therefore very important, as it helps provide appropriate sensory input to minimize maladaptive behavior and move children toward the normal range of adaptive behavior.
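For prevalence figures like those above, it helps to report a confidence interval alongside the point estimate. A minimal sketch using the Wilson score interval, taking 64 of 80 children with a definite difference from the counts quoted above (the interval method itself is not mentioned in the abstract and is added here only for illustration):

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin) / denom, (center + margin) / denom

# 64 of 80 children with a definite difference in sensory processing
lo, hi = wilson_interval(64, 80)
print(f"prevalence 80.0%, 95% CI ({lo:.1%}, {hi:.1%})")
```

The interval is wide with only 80 children, which is worth keeping in mind when comparing such prevalence estimates across studies.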

Keywords: autism, sensory processing difficulties, sensory profile, occupational therapy

Procedia PDF Downloads 65
242 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection

Authors: Mahshid Arabi

Abstract:

With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but vulnerabilities and security threats have increased significantly at the same time. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model for counteracting threats and information misuse. The study employs a mixed-methods approach, including both qualitative and quantitative analyses. First, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, 30 information security experts were interviewed to gather their insights on security challenges and solutions. Based on the interview results, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forests and neural networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, t-tests and ANOVA were employed, and results were measured using the accuracy, sensitivity, and specificity of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings indicate that the proposed comprehensive model reduced cyber-attacks by an average of 85%.
Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on these results, it is recommended that organizations continuously update their information security systems and combine multiple security methods to protect their data. Educating employees and raising public awareness about information security can also serve as effective tools for reducing security risks. This research demonstrates that effective, up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and the continuous training of human resources.
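The evaluation indicators named above (accuracy, sensitivity, specificity) are simple functions of a binary confusion matrix. A minimal sketch with hypothetical counts, since the abstract does not report the underlying confusion matrix of the intrusion detection models:

```python
# Accuracy, sensitivity, and specificity from a binary confusion matrix.
# The counts below are hypothetical, for illustration only.
tp, fn = 85, 15   # intrusions correctly / incorrectly classified
tn, fp = 90, 10   # benign traffic correctly / incorrectly classified

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true positive rate (recall)
specificity = tn / (tn + fp)   # true negative rate

print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```

Reporting all three together matters for intrusion detection, where class imbalance can make accuracy alone misleading.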

Keywords: data protection, digital technologies, information security, modern management

Procedia PDF Downloads 29
241 Study of COVID-19 Intensity Correlated with Specific Biomarkers and Environmental Factors

Authors: Satendra Pal Singh, Dalip Kr. Kakru, Jyoti Mishra, Rajesh Thakur, Tarana Sarwat

Abstract:

COVID-19 remains puzzling as far as morbidity and mortality are concerned. The rate of recovery varies from person to person and depends on the accessibility of the healthcare system and the roles played by physicians and caregivers. It is envisaged that, with the passage of time, people will become immune to this virus, and that those who are vulnerable will be sustained with the help of vaccines. The proposed study examines how the severity of COVID-19 is associated with specific biomarkers and how these relate to age and gender. We will assess the overall homeostasis of persons affected by coronavirus infection and of those who have recovered from it. Some people show severe effects, while others show very mild symptoms despite low CT values. Thus far, it is unclear why the new strain has different effects on different people in terms of age, gender, and ABO blood type. According to the data, the fatality rate was 10.5 percent among patients with heart disease, 7.3 percent among diabetics, and 6 percent among those with other comorbidities. Why some COVID-19 cases are worse than others is still not fully explainable. Overall, the data show that ABO blood group affects the risk of SARS-CoV-2 infection, and another study also shows phenotypic effects of blood group related to COVID-19. It is an accepted fact that females have stronger immune systems than males, which may be related to their two 'X' chromosomes, possibly carrying a more effective immunity-boosting gene capable of protecting the female. Specific sex hormones may also induce a better immune response in one sex. This calls for in-depth analysis to gain insight into this dilemma.
COVID-19 is still not fully characterized, and thus we are not very familiar with its biology, mode of infection, susceptibility, and overall viral load in the human body. How many virus particles are needed to infect a person? How do comorbidities contribute to coronavirus infection? Since the emergence of this virus in 2020, a large number of papers have been published and vaccines have been developed, but a large number of questions remain unanswered. Human susceptibility to COVID-19 infection needs to be established in order to develop a better strategy to fight this virus. Our study will address the impact of demography on the severity of COVID-19 infection and, at the same time, will look into the gender-specific sensitivity of COVID-19 and the variation of different biochemical markers in COVID-19-positive patients. We will also study the correlation, if any, between COVID-19 severity and ABO blood group type, and the occurrence of the most common blood group among positive patients.

Keywords: coronavirus, ABO blood group, age, gender

Procedia PDF Downloads 98
240 Biosensor for Determination of Immunoglobulin A, E, G and M

Authors: Umut Kokbas, Mustafa Nisari

Abstract:

Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by activated B cells that have differentiated into plasma cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response. Knowledge of immunoglobulin structure and classes is also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood; IgA, IgG, and IgM are usually measured together. In this way, the results can provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This affects the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation, and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and selective IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics, but patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections, or as a sign that the body is overreacting to allergens.
Also, since the immune response can vary with different antigens, measuring specific antibody levels aids in interpreting the immune response after immunization or vaccination. Immune deficiencies usually appear in childhood. In immunology and allergy clinics, a method that is fast, reliable, and, especially in childhood hypogammaglobulinemia, convenient and uncomplicated for sampling from children would be more useful than the classical methods for the diagnosis and follow-up of these diseases. In this work, the antibodies were attached to the electrode surface via a poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peaks obtained in the electrochemical study were evaluated. According to the data obtained, immunoglobulin determination can be made with a biosensor. In further studies, it will be useful to develop a medical diagnostic kit through biomedical engineering and to increase its sensitivity.

Keywords: biosensor, immunosensor, immunoglobulin, infection

Procedia PDF Downloads 104
239 The Importance of Developing Pedagogical Agency Capacities in Initial Teacher Formation: A Critical Approach to Advance in Social Justice

Authors: Priscilla Echeverria

Abstract:

This paper addresses initial teacher formation as a formative space in which pedagogy students develop the capacity for pedagogical agency to contribute to social justice, considering ethical, political, and epistemic dimensions. The paper first discusses the concepts of agency, pedagogical interaction, and social justice from a critical perspective, and then offers preliminary results on the capacity for pedagogical agency in novice teachers, drawing on the analysis of critical incidents as a research methodology. The study is motivated by the concern that, in response to the current neoliberal scenario, many initial teacher formation (ITF) programs have reduced the meaning of education to instruction and pedagogy to methodology, favouring the formation of a technical professional over a reflective or critical one. From this concern, the study proposes that the restitution of the subject is an urgent task in teacher formation, so it is essential to enable the subject's capacity for action and to advance in eliminating institutionalized oppression insofar as it affects that capacity. Given that oppression takes place in human interaction, I propose that initial teacher formation should develop sensitivity and educate the gaze to identify oppression and take action against it, both in pedagogical interactions (which configure political, ethical, and epistemic subjectivities) and in the hidden and official curriculum. All this rests on the premise that modelling democratic and dialogical interactions is fundamental for any program that seeks to contribute to a more just and empowered society. The contribution of this study lies in opening a discussion in an area about which we know little: the impact of the types of interaction offered by university teaching in ITF on the capacity of future teachers to be pedagogical agents.
For this reason, this study gathers evidence of the results of this formation by analysing the capacity for pedagogical agency of novice teachers: in other words, how capable graduates of secondary pedagogy programs are, in their first pedagogical experiences, of acting and making decisions that put the formative purposes they autonomously define before the technical or bureaucratic demands imposed by the curriculum or the official culture. This discussion is part of my doctoral research, "The importance of developing the capacity for ethical-political-epistemic agency in novice teachers during initial teacher formation to contribute to social justice", which I am currently developing in the Educational Research program of Lancaster University, United Kingdom, as a Conicyt fellow of the 2019 cohort.

Keywords: initial teacher formation, pedagogical agency, pedagogical interaction, social justice, hidden curriculum

Procedia PDF Downloads 97
238 The Influence of the Variety and Harvesting Date on Haskap Composition and Anti-Diabetic Properties

Authors: Aruma Baduge Kithma Hansanee De Silva

Abstract:

Haskap (Lonicera caerulea L.), also known as blue honeysuckle, is a recently commercialized berry crop in Canada. Haskap berries are rich in polyphenols, including anthocyanins, which are known for potential health-promoting effects. Cyanidin-3-O-glucoside (C3G) is the most prominent anthocyanin of haskap berries. Recent literature reveals the efficacy of C3G in reducing the risk of type 2 diabetes (T2D), an increasingly common health issue around the world characterized as a metabolic disorder of hyperglycemia and insulin resistance. C3G has been demonstrated to have anti-diabetic effects in various ways, including improved insulin sensitivity and inhibition of carbohydrate-hydrolyzing enzymes such as alpha-amylase and alpha-glucosidase. The goal of this study was to investigate the influence of variety and harvesting date on haskap composition and anti-diabetic properties. The polyphenolic compounds in four commercially grown haskap cultivars (Aurora, Rebecca, Larissa, and Evie) across five harvesting stages (H1-H5) were extracted separately in 80% ethanol and analyzed to characterize their phenolic profiles. Haskap berries contain different types of polyphenols, including flavonoids and phenolic acids. Anthocyanins are the major flavonoids, and C3G is the most prominent anthocyanin, accounting for 79% of total anthocyanins in all extracts. The variety Larissa at H5 had the highest average C3G content (1212.3±63.9 mg/100g FW), while Evie at H1 had the lowest (96.9±40.4 mg/100g FW). The average C3G content of Larissa from H1 to H5 varied from 208 to 1212 mg/100g FW. Quercetin-3-rutinoside (Q3Rut) is the major flavonol, with the highest level observed in Rebecca at H4 (47.81 mg/100g FW).
The haskap berries also contained phenolic acids, of which approximately 95% was chlorogenic acid. The cultivar Larissa had a higher level of anthocyanins than the other three cultivars. The highest total phenolic content was observed in Evie at H5 (2.97±1.03 mg/g DW) and the lowest in Rebecca at H1 (1.47±0.96 mg/g DW). The antioxidant capacity of Evie at H5 was the highest among the cultivars (14.40±2.21 µmol TE/g DW), and the lowest was observed in Aurora at H3 (5.69±0.34 µmol TE/g DW). Furthermore, Larissa at H5 showed the greatest inhibition of the carbohydrate-hydrolyzing enzymes alpha-glucosidase and alpha-amylase. In conclusion, Larissa at H5 demonstrated the highest polyphenol content and anti-diabetic properties.

Keywords: anthocyanin, cyanidin-3-O-glucoside, haskap, type 2 diabetes

Procedia PDF Downloads 456
237 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study

Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni

Abstract:

Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually underassessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of The Scenario Test, which assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To establish the reliability and validity of The Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus, and all measures were performed in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of The Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High internal consistency (Cronbach’s α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99) were found, and interrater agreement on individual items fell between good and excellent. Correlations with a measure of language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05).
Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia than for those with aphasia. Conclusions: The psychometric qualities of The Scenario Test-GR support the reliability and validity of the tool for assessing the functional communication of Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, to orient aphasia rehabilitation goal-setting toward the activity and participation levels, and as an outcome measure of everyday communication. Future studies will focus on measuring sensitivity to change in PWA with severe non-fluent aphasia.
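The internal-consistency statistic reported above, Cronbach's α, is computed from the item variances and the variance of the total score: α = k/(k-1) · (1 - Σσ²ᵢ/σ²ₜ). A minimal sketch with made-up item scores (not the study's data), using only the Python standard library:

```python
from statistics import variance  # sample variance

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item score lists
    (one inner list per item, same respondent order in each)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    item_var_sum = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Hypothetical scores: 4 items rated by 5 respondents (columns = respondents).
items = [
    [1, 2, 3, 4, 5],
    [1, 2, 3, 4, 4],
    [1, 1, 3, 4, 5],
    [1, 2, 2, 4, 5],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Values above roughly .90, like the .95 reported for The Scenario Test-GR, indicate that the items measure a single construct very consistently.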

Keywords: the scenario test GR, functional communication assessment, people with aphasia (PWA), tool validation

Procedia PDF Downloads 128
236 A Digital Health Approach: Using Electronic Health Records to Evaluate the Cost Benefit of Early Diagnosis of Alpha-1 Antitrypsin Deficiency in the UK

Authors: Sneha Shankar, Orlando Buendia, Will Evans

Abstract:

Alpha-1 antitrypsin deficiency (AATD) is a rare, genetic, multisystemic condition. Underdiagnosis is common, leading to chronic pulmonary and hepatic complications, increased resource utilization, and additional costs to the healthcare system. Currently, there is limited evidence on the direct medical costs of AATD diagnosis in the UK. This study explores the economic impact of AATD patients during the 3 years before diagnosis and identifies the major cost drivers using primary and secondary care electronic health record (EHR) data. The 3-year pre-diagnosis window was chosen based on the ability of our tool to identify patients earlier. The AATD algorithm was created using published disease criteria and applied to the EHRs of 148 known AATD patients found in a primary care database of 936,148 patients (413,674 from Biobank and 501,188 from a single primary care locality). Among the 148 patients, 9 were flagged earlier by the tool, which could save, on average, 3 (1-6) years per patient. We analysed the primary care journeys of 101 of the 148 AATD patients and the Hospital Episode Statistics (HES) data of 20 patients, all of whom had at least 3 years of clinical history in their records before diagnosis. The codes related to laboratory tests, clinical visits, referrals, hospitalization days, and day-case and inpatient admissions attributable to AATD were examined in this 3-year period before diagnosis. The average cost per patient was calculated, and the direct medical costs were modelled based on a mean prevalence of 100 AATD patients in a population of 500,000. A deterministic sensitivity analysis (DSA) of ±20% was performed to determine the major cost drivers. Cost data were obtained from the NHS National Tariff 2020/21, the National Schedule of NHS Costs 2018/19, PSSRU 2018/19, and private care tariffs.
The total direct medical cost of one hundred AATD patients over the three years before diagnosis in primary and secondary care in the UK was £3,556,489, with an average direct cost per patient of £35,565. The vast majority of this total (95%) was associated with inpatient admissions (£3,378,229). The DSA determined that the costs associated with tier-2 laboratory tests and inpatient admissions were the greatest contributors to direct costs in primary and secondary care, respectively. This retrospective study shows the role of EHRs in calculating direct medical costs and the potential benefit of new technologies for the early identification of patients with AATD to reduce the economic burden on primary and secondary care in the UK.
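A ±20% deterministic sensitivity analysis of the kind described above can be sketched as a one-way "tornado" analysis over the cost components: each component is varied by ±20% while the others are held at base case, and the component producing the widest range in the total is the main driver. In this sketch the only figure taken from the abstract is the inpatient-admission cost; the "other_costs" residual is derived by subtraction from the reported total, and the two-way breakdown is otherwise a simplification:

```python
# One-way deterministic sensitivity analysis (+/-20%) over cost components.
# Inpatient cost is from the abstract; "other_costs" is simply the residual
# of the reported £3,556,489 total (the real model has finer components).
base_costs = {
    "inpatient_admissions": 3_378_229,  # GBP
    "other_costs": 178_260,             # GBP, residual
}
total = sum(base_costs.values())

swing = 0.20
ranges = {
    name: (total - swing * cost, total + swing * cost)
    for name, cost in base_costs.items()
}
# The component whose +/-20% variation yields the widest total range
# is the dominant cost driver.
driver = max(ranges, key=lambda n: ranges[n][1] - ranges[n][0])
print(f"total = £{total:,}; main driver = {driver}")
```

With inpatient admissions making up 95% of the base-case total, the analysis unsurprisingly identifies them as the dominant driver of secondary care costs.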

Keywords: alpha-1 antitrypsin deficiency, costs, digital health, early diagnosis

Procedia PDF Downloads 167