Search results for: absorptive capability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1386

96 Ensuring Safety in Fire Evacuation by Facilitating Way-Finding in Complex Buildings

Authors: Atefeh Omidkhah, Mohammadreza Bemanian

Abstract:

Way-finding occupies a wide range of the architectural literature, yet despite roughly fifty years of way-finding studies, the field still lacks a comprehensive theory for indoor settings. Way-finding also plays a notable role in emergency evacuation: people in the panic of a fire emergency need to find the safe egress route correctly and in the shortest possible time. The parameters of appropriate way-finding are mentioned in evacuation-related research, albeit in scattered form. This study reviews the fire-safety literature to extract a way-finding framework for the architectural design of a safe evacuation route. To this end, research trends and applied methodological approaches are reviewed. Then, by analyzing eight original studies on way-finding parameters in fire evacuation, the main parameters that affect way-finding during a fire incident are extracted and a framework is developed from them. Results show that issues related to exit routes and emergency evacuation can be traced in task-oriented way-finding studies. This research trend aims at a high-level framework and, at best, a theory with the explanatory capability to account for differences in way-finding between indoor and outdoor settings, complex and simple buildings, and different building types or transitional spaces. Methodologically, evacuation way-finding research follows three approaches, of which the last is the most up-to-date and precise: real actors and hypothetical stimuli, as in evacuation experiments; hypothetical actors and stimuli, as in agent-based simulations; and real actors and semi-real stimuli, as in virtual reality environments augmented with multi-sensory simulation. Findings from the analysis of the eight sampled studies indicate that emergency way-finding design should consider two levels: space-cognition problems arising at the time of emergency and their performance consequences in the built environment. Four major classes of way-finding problems, namely visual information deficiency, confusing layout configuration, improper navigational signage and demographic issues, are defined and discussed as the main parameters that should be addressed in the design and interior of a building. In the design phase of complex buildings, for which more way-finding problems are reported, it is important to consider the interior components with regard to the building's occupancy type and occupant behavior, to determine which components tend to become landmarks, and to set the architectural features of the egress route in line with the directions in which they guide people. Research on topological cognition of the environment and its effect on the way-finding task in emergency evacuation is proposed for future work.

Keywords: architectural design, egress route, way-finding, fire safety, evacuation

Procedia PDF Downloads 174
95 Development of Three-Dimensional Bio-Reactor Using Magnetic Field Stimulation to Enhance PC12 Cell Axonal Extension

Authors: Eiji Nakamachi, Ryota Sakiyama, Koji Yamamoto, Yusuke Morita, Hidetoshi Sakamoto

Abstract:

Regeneration of a central nerve network injured by cerebrovascular accidents is difficult because of the poor regenerative capability of the central nervous system, composed of the brain and the spinal cord. Recently, new regeneration methods such as transplantation of nerve cells and supply of nerve nutritional factors have been proposed and examined. However, many problems remain, such as canceration of the engrafted cells, and an efficacious treatment method for the central nervous system is strongly required. Blackman proposed an electromagnetic stimulation method to enhance axonal nerve extension. In this study, we design and fabricate a new three-dimensional (3D) bio-reactor that can apply uniform AC magnetic field stimulation to PC12 cells in the extracellular environment to enhance axonal nerve extension and 3D nerve network generation. Simultaneously, we measure the morphology of PC12 cell bodies, axons, and dendrites with a multiphoton excitation fluorescence microscope (MPM) and evaluate the effectiveness of the uniform AC magnetic stimulation in enhancing axonal extension. Firstly, we designed and fabricated the uniform AC magnetic field stimulation bio-reactor. For the AC magnetic stimulation system, we used laminated silicon steel sheets, which have high magnetic permeability, for the yoke structure of the 3D chamber. Next, we adopted a pole-piece structure and installed coils of similar specification on both sides of the yoke. We searched for an optimum pole-piece structure using magnetic field finite element (FE) analyses and response surface methodology. FE analysis confirmed that the optimum 3D chamber structure produced a uniform magnetic flux density in the PC12 cell culture area. Then, we fabricated the bio-reactor by adopting the analytically determined specifications, such as the chamber size and electromagnetic conditions, and confirmed that measurements of the magnetic field in the chamber agreed well with the FE results. Secondly, we fabricated a dish that was set inside the bio-reactor. PC12 cells were seeded in collagen gel and could be 3D cultured in the dish. The collagen gel, a disk 6 mm in diameter and 3 mm in height, was poured onto a membrane filter located 4 mm above the bottom of the dish, and the disk was fully immersed in the culture medium inside the dish. Finally, we evaluated the effectiveness of the uniform AC magnetic field stimulation in enhancing nerve axonal extension. After 7 days of culture in our bio-reactor under uniform AC magnetic field stimulation, we confirmed a 6.8% increase in the average axonal extension length of PC12 cells and a 24.7% increase in the maximum axonal extension length, as well as a 60% increase in the number of dendrites. These results confirm the suitability of our uniform AC magnetic stimulation bio-reactor for nerve axonal extension and nerve network generation.
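
Below is a minimal Python sketch of how field uniformity over the culture area could be checked from FE-exported flux-density values; the coefficient-of-variation metric, grid size and numbers are illustrative assumptions, not the authors' actual criterion or data.

```python
import numpy as np

def flux_uniformity(b_field, region_mask):
    """Coefficient of variation (std/mean) of |B| inside the culture region.

    b_field     : array of flux-density magnitudes exported from the FE solver
    region_mask : boolean array of the same shape marking the PC12 culture area
    """
    b = np.asarray(b_field)[np.asarray(region_mask, dtype=bool)]
    return float(np.std(b) / np.mean(b))

# Illustrative 50 x 50 FE output grid (tesla) with a weak gradient, and a central culture area.
b = 1.0e-3 + 1.0e-5 * np.linspace(0.0, 1.0, 50)[None, :] * np.ones((50, 50))
mask = np.zeros((50, 50), dtype=bool)
mask[15:35, 15:35] = True
print(f"coefficient of variation over culture area: {flux_uniformity(b, mask):.4f}")
```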

Keywords: nerve regeneration, axonal extension, PC12 cell, magnetic field, three-dimensional bio-reactor

Procedia PDF Downloads 168
94 Flexural Performance of the Sandwich Structures Having Aluminum Foam Core with Different Thicknesses

Authors: Emre Kara, Ahmet Fatih Geylan, Kadir Koç, Şura Karakuzu, Metehan Demir, Halil Aykul

Abstract:

Structures obtained with sandwich technologies combine low weight with high energy-absorbing and load-carrying capacity. Hence, there is a growing and marked interest in sandwiches with an aluminium foam core because of very good properties such as flexural rigidity and energy absorption capability. Static (bending and penetration) and dynamic (dynamic bending and low-velocity impact) tests were previously performed by some of the authors on aluminum foam cored sandwiches with different types of outer skins. In the current investigation, static three-point bending tests were carried out on sandwiches with an aluminum foam core and glass fiber reinforced polymer (GFRP) skins at different support span distances (L = 55, 70, 80, 125 mm) in order to analyze their flexural performance. The influence of the core thickness and the GFRP skin type is reported in terms of peak load, energy absorption capacity and energy efficiency. For this purpose, skins with two different fabric types ([0°/90°] cross-ply E-glass woven and [0°/90°] cross-ply S-glass woven, both 1.5 mm thick) and aluminum foam cores with two different thicknesses (h = 10 and 15 mm) were bonded with a commercial polyurethane-based flexible adhesive to assemble the composite sandwich panels. The GFRP skins used in the study, fabricated via the Vacuum Assisted Resin Transfer Molding (VARTM) technique, can be easily bonded to the aluminum foam core, and the base materials (skin, adhesive and core), fiber orientation and number of layers can be configured for a specific application. The main results of the bending tests are force-displacement curves, peak force values, absorbed energy, energy efficiency, collapse mechanisms, and the effect of the support span length and core thickness. The experimental results showed that the sandwiches with S-glass woven skins and the thicker foam core presented higher mechanical values, such as load-carrying and energy absorption capacities. Increasing the support span distance decreased the mechanical values for each panel type, as expected, because of the inverse proportionality between force and span length. The most common failure types of the sandwiches were debonding of the upper or lower skin and core shear. The results are of particular importance for applications that require lightweight structures with a high capacity for energy dissipation, such as the transport industry (automotive, aerospace, shipbuilding and marine), where collision and crash problems have increased in recent years.
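
As a rough illustration of the bending metrics mentioned above, the following Python sketch computes peak load, absorbed energy as the area under a force-displacement curve, and an energy-efficiency index; the efficiency definition and the synthetic curve are assumptions, since the abstract does not give the authors' exact formulas or data.

```python
import numpy as np

def bending_metrics(displacement_mm, force_n):
    """Peak load, absorbed energy (area under the force-displacement curve, trapezoidal
    rule) and an energy-efficiency index defined here as absorbed energy divided by
    (peak force x final displacement); an assumed definition, not the authors' formula."""
    d = np.asarray(displacement_mm, dtype=float) / 1000.0   # convert mm -> m
    f = np.asarray(force_n, dtype=float)                    # N
    peak = f.max()
    energy = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(d)))   # joules
    efficiency = energy / (peak * d[-1])
    return peak, energy, efficiency

# Synthetic three-point bending trace for illustration only (not measured data).
disp = np.linspace(0.0, 12.0, 200)                  # mm
force = 1800.0 * np.sin(np.pi * disp / 24.0) ** 2   # N
peak, energy, eta = bending_metrics(disp, force)
print(f"peak load {peak:.0f} N, absorbed energy {energy:.2f} J, efficiency {eta:.2f}")
```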

Keywords: aluminum foam, composite panel, flexure, transport application

Procedia PDF Downloads 339
93 A Study of the Carbon Footprint from a Liquid Silicone Rubber Compounding Facility in Malaysia

Authors: Q. R. Cheah, Y. F. Tan

Abstract:

The push for a low carbon footprint entails achieving carbon neutrality as a goal for future generations. One possible step towards carbon footprint reduction is the use of more durable materials with longer lifespans, for example, silicone data cables, which show at least double the lifespan of similar plastic products. Through greater durability and longer lifespans, silicone data cables can reduce the amount of waste produced compared to plastics. Furthermore, silicone products do not shed micro-contamination harmful to the ocean. Every year the electronics industry produces an estimated 5 billion USB Type-C and Lightning data cables for tablets and mobile phones. Material usage for the outer jacketing is 6 to 12 grams per meter. Tests show that the product lifespan of a silicone data cable can be double that of a plastic one owing to greater durability, which can save at least 40,000 tonnes of material a year on data cable outer jacketing alone. The facility in this study specialises in compounding liquid silicone rubber (LSR) for the extrusion of silicone data cable jacketing. This study analyses the carbon emissions of the facility, which is presently capable of producing more than 1,000 tonnes of LSR annually. The study uses guidelines from the World Business Council for Sustainable Development (WBCSD) and the World Resources Institute (WRI) to define the scope boundaries. The emission scopes are defined as: 1. emissions from operations owned or controlled by the reporting company; 2. emissions from the generation of purchased or acquired energy such as electricity, steam, heating, or cooling consumed by the reporting company; and 3. all other indirect emissions occurring in the value chain of the reporting company, both upstream and downstream. As the study is limited to the compounding facility, the system boundary according to the GHG Protocol is cradle-to-gate rather than cradle-to-grave. Malaysia's present electricity generation mix, in which natural gas and coal account for the bulk of emissions, was also used. Calculations show that the LSR produced for highly fire-retardant silicone data cable has scope 1 emissions of 0.82 kg CO2/kg, scope 2 emissions of 0.87 kg CO2/kg, and scope 3 emissions of 2.76 kg CO2/kg, giving a total product carbon footprint of 4.45 kg CO2/kg. This cradle-to-gate product carbon footprint is comparable, per tonne of material, to the industry at large and to plastic materials. Although per-tonne emissions are comparable to plastics, the greater durability and longer lifespan can significantly reduce the amount of LSR material used. Suggestions to reduce the calculated product carbon footprint across the emission scopes are: 1. incorporating recycling of factory silicone waste into operations; 2. using renewable energy for external electricity supply; and 3. sourcing eco-friendly raw materials with low GHG emissions.
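
The scope totals reported above can be checked with a trivial sum; the figures below are the ones quoted in the abstract.

```python
# Scope emission factors reported in the abstract, in kg CO2 per kg of LSR produced.
scopes = {"scope 1": 0.82, "scope 2": 0.87, "scope 3": 2.76}
total = sum(scopes.values())
print(f"cradle-to-gate product carbon footprint: {total:.2f} kg CO2/kg")  # 4.45
```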

Keywords: carbon footprint, liquid silicone rubber, silicone data cable, Malaysia facility

Procedia PDF Downloads 97
92 Indoor Air Pollution and Reduced Lung Function in Biomass Exposed Women: A Cross Sectional Study in Pune District, India

Authors: Rasmila Kawan, Sanjay Juvekar, Sandeep Salvi, Gufran Beig, Rainer Sauerborn

Abstract:

Background: Indoor air pollution, especially from the use of biomass fuels, remains a potentially large global health threat. The inefficient use of such fuels in poorly ventilated conditions results in high levels of indoor air pollution, most seriously affecting women and young children. Objectives: The main aim of this study was to measure and compare the lung function of women exposed to biomass fuels and to LPG and to relate it to indoor emissions, using a structured questionnaire, a spirometer, and filter-based low-volume samplers, respectively. Methodology: This cross-sectional comparative study was conducted among women (aged > 18 years) living in rural villages of Pune district who had not been diagnosed with chronic pulmonary or other respiratory diseases and who had used biomass fuels or LPG for cooking for at least 5 years. Data were collected from April to June 2017, in the dry season. Spirometry was performed using the portable, battery-operated ultrasound EasyOne spirometer (Spirobank II, NDD Medical Technologies, Zurich, Switzerland) to determine lung function based on forced expiratory volume. The primary outcome variable was forced expiratory volume in 1 second (FEV1). The secondary outcome was chronic obstructive pulmonary disease (post-bronchodilator FEV1/forced vital capacity (FVC) < 70%), as defined by the Global Initiative for Chronic Obstructive Lung Disease. Potential confounders such as age, height, weight, smoking history, occupation, and educational status were considered. Results: Preliminary results showed that women using biomass fuels had comparatively reduced lung function (FEV1/FVC = 85% ± 5.13) compared to LPG users (FEV1/FVC = 86.40% ± 5.32). The mean PM2.5 mass concentration was 274.34 ± 314.90 in the biomass users' kitchens and 85.04 ± 97.82 in the LPG users' kitchens. The black carbon level was higher for biomass users (46.71 ± 46.59 µg/m³) than for LPG users (11.08 ± 22.97 µg/m³). Most of the houses had a separate kitchen. Almost all houses using a clean fuel such as LPG still showed a minimum level of PM2.5, which might be due to background pollution and cross-ventilation from houses using biomass fuels. Conclusions: There is an urgent need to adopt strategies to improve indoor air quality. Knowledge of the current state of climate-active pollutant emissions from different stove designs is lacking, and major deficiencies that need to be tackled remain to be identified. Moreover, advances in research tools, measurement techniques in particular, are critical for researchers in developing countries to improve their capability to study these emissions and address growing climate change and public health concerns.
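
A minimal Python sketch of the obstruction criterion used in the study (post-bronchodilator FEV1/FVC below 70%); the numeric values are illustrative, not participant data.

```python
def copd_obstruction(post_bd_fev1_l, post_bd_fvc_l, threshold=0.70):
    """Airflow obstruction per the fixed-ratio criterion cited in the abstract:
    post-bronchodilator FEV1/FVC < 70% indicates obstruction."""
    ratio = post_bd_fev1_l / post_bd_fvc_l
    return ratio, ratio < threshold

# Illustrative spirometry values in litres (not study data).
ratio, obstructed = copd_obstruction(2.1, 2.5)
print(f"FEV1/FVC = {ratio:.0%}, obstruction: {obstructed}")
```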

Keywords: black carbon, biomass fuels, indoor air pollution, lung function, particulate matter

Procedia PDF Downloads 174
91 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild traumatic brain injury (TBI) and a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem; an estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study. Testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the lower to the upper limit of quantitation (LoQ) and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1 when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting its utility in assisting the determination of the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
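
The headline diagnostic metrics can be reproduced from the counts quoted in the abstract; the short Python sketch below recomputes sensitivity, specificity and NPV (point estimates only, without the confidence intervals).

```python
# Counts reported in the abstract: 1899 mild-TBI subjects, 120 CT-positive, 1779 CT-negative.
tp, fn = 116, 4          # CT-positive subjects with positive / negative TBI test result
tn, fp = 713, 1066       # CT-negative subjects with negative / positive TBI test result

sensitivity = tp / (tp + fn)   # 116/120  -> ~96.7 %
specificity = tn / (tn + fp)   # 713/1779 -> ~40.1 %
npv         = tn / (tn + fn)   # 713/717  -> ~99.4 %
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, NPV {npv:.1%}")
```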

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 68
90 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach

Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh

Abstract:

Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly in the context of climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60 years. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for input and output data specific to each catchment. It incorporates essential routines for snow accumulation and melting, soil moisture storage, and streamflow generation. Discharge data from multiple catchments within the Cuneo district form the basis for thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial metric for reliable drought analysis is one that can accurately represent low-flow events during drought periods; this ensures that the model provides a realistic picture of water availability during these critical times. Subsequent validation of monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI), which allows precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with particular focus on the low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The study's findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency. Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods. This agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This newfound understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts. This research deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining hydrological modelling techniques and exploring future drought projections under various climate change scenarios.
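
For reference, a minimal Python sketch of the standard Kling-Gupta Efficiency calculation used for calibration; the discharge series shown are invented placeholders, not Cuneo data, and the study's implementation may use a KGE variant.

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """Standard KGE (Gupta et al., 2009): 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
    with r the linear correlation, alpha the variability ratio and beta the bias ratio."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

# Illustrative daily discharge values (m^3/s), not study data.
obs = np.array([5.2, 4.8, 6.1, 9.4, 7.0, 5.5, 4.9])
sim = np.array([5.0, 5.1, 5.9, 8.8, 7.4, 5.2, 4.6])
print(f"KGE = {kling_gupta_efficiency(sim, obs):.3f}")
```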

Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling

Procedia PDF Downloads 43
89 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting

Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey

Abstract:

Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment so that effective countermeasures can be taken. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain, and they often struggle to detect stealthy and camouflaged intruders. The objective of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems and improves intrusion detection, assessment, and characterization. Most similar systems can only distinguish two types of intrusion, human or vehicle. In our work we categorize further, identifying types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging and vehicular movement. A 60-meter virtual fence was created at GCNEP, Bahadurgarh, Haryana, India, by installing four underground geophones spaced 15 meters apart. The signals received from these geophones are processed to extract unique seismic signatures, called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, logistic regression, Recursive Feature Elimination, chi-squared and Pearson correlation, were used to identify the best features for training the machine learning models. The models were trained using algorithms such as the supervised support vector machine (SVM) classifier, kNN, Decision Tree, Logistic Regression, Naïve Bayes, and Artificial Neural Networks. These models were then used to predict the category of events, with weighted ensemble voting employed to combine their results. The models were trained on 1940 training events and evaluated on 831 test events. Weighted ensemble voting was observed to increase the efficiency of the predictions. In this study, we successfully developed and deployed the geophone-based virtual fence. Since these sensors are passive, do not radiate any energy and are installed underground, it is extremely difficult for intruders to locate and neutralize them. Their flexibility, quick and easy installation, low cost, hidden deployment and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of seismic sensors, combined with multiple machine learning models in weighted ensemble voting, for building better perimeter guarding and protection systems. In this study the virtual fence achieved an intruder detection efficiency of over 97%.
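
A minimal Python sketch of weighted soft voting over several of the classifiers named above, assuming feature vectors have already been extracted from the geophone signals; the synthetic data, model settings and weights are illustrative, not the study's trained values.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for extracted seismic features (3 event classes, 1940 events).
X, y = make_classification(n_samples=1940, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier()),
                ("tree", DecisionTreeClassifier()),
                ("logreg", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB())],
    voting="soft",
    weights=[3, 2, 1, 2, 1],   # per-model weights, e.g. derived from validation accuracy
)
ensemble.fit(X, y)
print("predicted event classes:", ensemble.predict(X[:5]))
```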

Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method

Procedia PDF Downloads 81
88 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification

Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas

Abstract:

Biosensors play a crucial role in the detection of molecules owing to their user-friendliness, high selectivity, real-time analysis and suitability for in-situ applications. Among them, Lateral Flow Immunoassays (LFIAs) stand out among point-of-care bioassay technologies for characteristics such as affordability, portability and low cost. They have been widely used for the detection of a vast range of biomarkers, including not only proteins but also nucleic acids and even whole cells. Although the LFIA has traditionally been a positive/negative test, tremendous efforts are being made to add quantification capability to the method through the combination of suitable labels and a proper sensor. One of the most successful approaches involves the use of magnetic sensors for the detection of magnetic labels. Bringing together the characteristics mentioned above, our research group has developed a biosensor for biomolecule detection in which superparamagnetic nanoparticles (SPNPs) and LFIAs play the fundamental roles. The SPNPs are detected through their interaction with a high-frequency current flowing in a printed micro-track: the presence of the SPNPs causes an instantaneous and proportional variation of the track impedance, from which a rapid, quantitative measurement of the number of particles can be obtained. This detection scheme requires no external magnetic field, which reduces device complexity. On the other hand, the major limitation of LFIAs is that they are only qualitative or semi-quantitative when traditional gold or latex nanoparticles are used as colour labels. Moreover, the need for constant ambient conditions to obtain reproducible results, the detection of only those nanoparticles at the membrane surface, and the short durability of the signal are drawbacks that can be advantageously overcome by designing magnetically labelled LFIAs. The approach followed was to coat the SPNPs, via chemical bonds, with a specific monoclonal antibody targeting the protein under consideration. Then, a sandwich-type immunoassay was prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an IgG antibody (control line). When the sample flows along the strip, the SPNP-labelled proteins are immobilized at the test line, which provides the magnetic signal described above. Preliminary results using this combination for the detection and quantification of prostate-specific antigen (PSA) show the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, an LOD of 0.25 ng/mL was calculated with a confidence factor of k = 3 according to the IUPAC Gold Book definition. The versatility of the approach has also been proven by the detection of other biomolecules such as troponin I (a cardiac injury biomarker) and histamine.
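
A small Python sketch of the IUPAC-style detection limit calculation cited above (k = 3); the blank readings and calibration slope are assumed values for illustration only.

```python
import numpy as np

def limit_of_detection(blank_signals, calibration_slope, k=3.0):
    """IUPAC-style LOD = k * s_blank / slope of the calibration curve (k = 3 as cited)."""
    return k * np.std(blank_signals, ddof=1) / calibration_slope

blank = np.array([0.9, 1.1, 1.0, 1.2, 0.8])   # impedance-change readings of blank strips (a.u.)
slope = 1.8                                   # assumed signal units per (ng/mL) of PSA
print(f"LOD ~ {limit_of_detection(blank, slope):.2f} ng/mL")
```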

Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles

Procedia PDF Downloads 232
87 Cognitive Radio in Aeronautic: Comparison of Some Spectrum Sensing Technics

Authors: Abdelkhalek Bouchikhi, Elyes Benmokhtar, Sebastien Saletzki

Abstract:

The aeronautical field is experiencing RF spectrum congestion due to the constant increase in the number of flights, aircraft and on-board telecom systems. In addition, these systems are demanding in size, weight and energy consumption. Cognitive radio helps to solve the spectrum congestion issue in particular through its capacity to detect idle frequency channels, thus allowing opportunistic exploitation of the RF spectrum. The present work proposes a new use case for aeronautical spectrum sharing and studies the performance of three different detection techniques within that use case: the energy detector, the matched filter and the cyclostationary detector. In the proposed cognitive radio, the spectrum is allocated dynamically and each cognitive radio follows a cognitive cycle, of which spectrum sensing is a crucial step. The goal of sensing is to gather data about the surrounding environment. Cognitive radio can use different sensors: antennas, cameras, accelerometers, thermometers, etc. In the IEEE 802.22 standard, for example, a primary user (PU) always has priority to communicate; when a frequency channel used by the primary user is idle, the secondary user (SU) is allowed to transmit in that channel. The Distance Measuring Equipment (DME) is composed of a UHF transmitter/receiver (interrogator) in the aircraft and a UHF receiver/transmitter on the ground, while future cognitive radios will be used jointly with it to alleviate spectrum congestion in the aeronautical field. LDACS, for example, is a good candidate; it provides two isolated data links: ground-to-air and air-to-ground. The first contribution of the present work is a strategy for sharing the L-band. The adopted spectrum sharing strategy is as follows: the DME plays the role of the PU, i.e. the licensed user, and the LDACS1 systems are the SUs. The SUs may use the L-band channels opportunistically as long as they do not cause harmful interference that affects the QoS of the DME system. Spectrum sensing is a key step: it detects spectrum holes by determining whether the primary signal is present in a given frequency channel. A missed detection of primary user presence creates interference between PU and SU and seriously affects the QoS of the legacy radio. In this study, brief definitions, concepts and the state of the art of cognitive radio are first presented. Then, three channel detection algorithms are studied in a cognitive radio context, from the point of view of functionality, hardware requirements and signal detection capability in the aeronautical field. The detection problem is then modelled with the three methods (energy, matched filter, and cyclostationary), an algorithmic description of these detectors is given, and their performance is studied and compared. Simulations were carried out in MATLAB, and the results were analysed using ROC curves for SNRs between -10 dB and 20 dB. The three detectors were tested with both synthetic and real-world signals.
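
As an illustration of the first of the three techniques, a simplified textbook energy detector is sketched below in Python; the threshold derivation, noise model and test signal are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import chi2

def energy_detector(samples, noise_var, pfa=0.01):
    """Decide whether a primary (e.g. DME) signal occupies the channel.
    Under noise only, the statistic 2*sum(|x|^2)/noise_var of N complex samples is
    approximately chi-square with 2N degrees of freedom, so the threshold is set
    from the target false-alarm probability pfa."""
    n = len(samples)
    statistic = 2.0 * np.sum(np.abs(samples) ** 2) / noise_var
    threshold = chi2.ppf(1.0 - pfa, df=2 * n)
    return statistic > threshold, statistic, threshold

rng = np.random.default_rng(0)
n, snr_db = 1000, -5
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)  # unit-power noise
signal = np.sqrt(10 ** (snr_db / 10)) * np.exp(1j * 2 * np.pi * 0.1 * np.arange(n))
occupied, stat, thr = energy_detector(noise + signal, noise_var=1.0)
print(f"statistic {stat:.0f}, threshold {thr:.0f}, primary user present: {occupied}")
```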

Keywords: aeronautic, communication, navigation, surveillance systems, cognitive radio, spectrum sensing, software defined radio

Procedia PDF Downloads 175
86 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration (RTM) has been widely used in the petroleum exploration industry since the early 1980s to reveal subsurface images and to detect rock and fluid properties. The seismic technology involves the construction of a velocity model through interpretive model building, seismic tomography, or full waveform inversion, followed by reverse-time propagation of the acquired seismic data together with the original wavelet used in the acquisition. The methodology has matured from 2D, simple media to present-day implementations that handle full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With this velocity structure, USCT data can be migrated with the "bent-ray" method. Its counterpart in seismic applications is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray-tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. RTM, on the other hand, fully accounts for wave phenomena, including multiple arrivals and turning rays due to complex velocity structure, and has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to the real-time imaging normally required in day-to-day medical operations. However, with improvements in computing technology, this computational bottleneck may no longer present a challenge in the near future. Present-day RTM algorithms in the seismic industry are typically implemented from a flat datum, but they can be modified to accommodate any acquisition geometry and aperture as long as sufficient illumination is provided. This flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging that produces an accurate 3D acoustic image based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, with PSPI wavefield extrapolation and a piecewise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is limited by its own illumination aperture, stacking multiple partial images produces a full image of the organ with a much-reduced noise level compared with the individual partial images.
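
A minimal Python sketch of the core phase-shift extrapolation step that PSPI builds on, for a single temporal frequency and a single reference velocity; PSPI proper repeats this for several reference velocities and interpolates between the results, and the parameters below are illustrative assumptions.

```python
import numpy as np

def phase_shift_extrapolate(wavefield, velocity, dz, dx, freq):
    """Extrapolate one monochromatic wavefield slice by one depth step dz using the
    constant-velocity phase-shift operator in the horizontal-wavenumber domain.
    Evanescent components (kx > w/v) decay automatically via the complex square root."""
    nx = wavefield.shape[0]
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)          # horizontal angular wavenumbers
    w = 2.0 * np.pi * freq                               # angular frequency
    kz = np.sqrt(((w / velocity) ** 2 - kx ** 2).astype(complex))
    spectrum = np.fft.fft(wavefield)
    return np.fft.ifft(spectrum * np.exp(1j * kz * dz))

# Illustrative: a point-like source slice at 500 kHz in a 1500 m/s tissue-like medium.
nx, dx = 256, 0.5e-3                                     # 0.5 mm lateral sampling
field = np.zeros(nx, dtype=complex)
field[nx // 2] = 1.0
next_depth = phase_shift_extrapolate(field, velocity=1500.0, dz=1e-3, dx=dx, freq=5e5)
print("extrapolated amplitude at centre:", abs(next_depth[nx // 2]))
```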

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 75
85 An eHealth Intervention Using Accelerometer- Smart Phone-App Technology to Promote Physical Activity and Health among Employees in a Military Setting

Authors: Emilia Pietiläinen, Heikki Kyröläinen, Tommi Vasankari, Matti Santtila, Tiina Luukkaala, Kai Parkkola

Abstract:

Working in the military places special demands on physical fitness. However, reduced physical activity among employees of the Finnish Defence Forces (FDF), a trend also seen in the working-age population in Finland, is leading to reduced physical fitness and an increased risk of cardiovascular and metabolic diseases, which also increases human resource costs. Therefore, the aim of the present study was to develop an eHealth intervention using accelerometer and smartphone-app feedback, telephone counseling and physical activity recording to increase personnel's physical activity and thereby improve their health. Specific aims were to reduce stress, improve sleep quality, mental and physical performance and ability to work, and reduce sick leave absences. Employees from six military brigades around Finland were invited to participate, and 260 volunteers were finally included (66 women, 194 men). The participants were randomized into an intervention group (156) and a control group (104). The intervention group used accelerometers measuring daily physical activity and the duration and quality of sleep for six months. The accelerometers transmitted the data to smartphone apps, which gave feedback about daily physical activity and sleep. Intervention group participants were also encouraged to exercise for two hours a week during working hours, a benefit already offered to employees under existing FDF guidelines. To separate exercise done during working hours from the accelerometer data, the intervention group recorded this exercise in an exercise diary. The intervention group also received telephone counseling about their physical activity. The control group participants, on the other hand, continued their normal exercise routine without the accelerometer or feedback; they could still use the benefit of exercising during working hours, but they were not separately encouraged to do so, nor did they use the exercise diary. Participants were measured at baseline, at the end of the intervention period, and six months after the end of the intervention. The measurements included accelerometer recordings, biochemical laboratory tests, body composition measurements, physical fitness tests, and a broad questionnaire covering sociodemographic factors, physical activity and health. The primary indicators of effectiveness are increased physical activity and fitness, improved health status, and reduced sick leave absences. The present evaluation is based on the data collected at the baseline measurements. Maintenance of the studied outcomes is assessed by comparing the control group's results at baseline and at the one-year follow-up. Results of the study are not yet available but will be presented at the conference. The findings will help to develop an easy and cost-effective model to support the health and working capability of employees in the military and other workplaces.

Keywords: accelerometer, health, mobile applications, physical activity, physical performance

Procedia PDF Downloads 197
84 The Origins of Representations: Cognitive and Brain Development

Authors: Athanasios Raftopoulos

Abstract:

In this paper, an attempt is made to explain the evolution, or development, of the human representational arsenal from its humble beginnings to its modern abstract symbols. Representations are physical entities that represent something else. To represent a thing (in a general sense of "thing") means to use in the mind, or in an external medium, a sign that stands for it. The sign can be used as a proxy for the represented thing when the thing is absent. Representations come in many varieties, from signs that perceptually resemble what they represent to abstract symbols that are related to their representata through convention. Relying on the distinction among indices, icons, and symbols, it is explained how symbolic representations gradually emerged from indices and icons. To understand the development or evolution of our representational arsenal, the development of the cognitive capacities that enabled the gradual emergence of representations of increasing complexity and expressive capability should be examined. This examination should rely on a careful assessment of the available empirical neuroscientific and paleo-anthropological evidence, synthesized to produce arguments whose conclusions provide clues concerning the developmental process of our representational capabilities. The analysis of the empirical findings in this paper shows that Homo erectus was able to use both icons and symbols: icons as external representations, and symbols in language. The first step in the emergence of representations is that a purely causal sensory-motor schema involved in indices is decoupled from its normal causal sensory-motor functions and serves as a representation of the object that initially called it into play. Sensory-motor schemas are tied to specific contexts of organism-environment interaction and are activated only within these contexts. For a representation of an object to be possible, such a schema must be de-contextualized so that the same object can be represented in different contexts; a decoupled schema loses its direct ties to reality and becomes mental content. The analysis suggests that symbols emerged due to selection pressures of the social environment: the need to establish and maintain social relationships in ever-enlarging groups, to the benefit of the group, was a sufficient environmental pressure for the appearance of the symbolic capacity. Symbols could serve this need because they can express abstract relationships, such as marriage or monogamy. Icons, being firmly attached to what can be observed, could not go beyond surface properties to express abstract relations. The cognitive capacities required for iconic and then symbolic representations were present in Homo erectus, whose language started without syntactic rules but was structured so as to mirror the structure of the world. This language became increasingly complex, and grammatical rules appeared to allow the construction of the more complex expressions required to keep up with the increasing complexity of social niches. This created evolutionary pressures that eventually led to increased cranial size and a restructuring of the brain that allowed more complex representational systems to emerge.

Keywords: mental representations, iconic representations, symbols, human evolution

Procedia PDF Downloads 59
83 Electromagnetic Simulation Based on Drift and Diffusion Currents for Real-Time Systems

Authors: Alexander Norbach

Abstract:

This paper describes the use of an advanced simulation environment based on electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation may be applied to any dynamic system that also exhibits diffusion and ionisation behaviour. With an additionally required observer structure, the system performs parallel real-time simulation based on a diffusion model and a state-space representation of the remaining dynamics. The proposed model may be used for electrodynamic effects, including ionising effects and eddy current distribution. With the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in real time; the spatial temperature distribution may also be used for further purposes. With this system, uncertainties, unknown initial states and disturbances may be determined. This provides a more precise estimation of the system states of the required system and, additionally, an estimation of the ionising disturbances that occur due to radiation effects. The results have shown that a system can also be developed and adapted specifically for space systems, with real-time calculation of the radiation effects alone. Electronic systems can be damaged by impacts with charged-particle flux in a space or radiation environment. In order to react to these processes, the ionising radiation and dose that are present must be calculated within a short time. All available sensors should be used to observe the spatial distributions. From the measured values and the known locations of the sensors, the entire distribution can be calculated retroactively, or more accurately. From this information, the type of ionisation and its direct effect on the systems can be determined, and protective measures, up to and including shutdown, can be activated. The results show the possibility of performing higher-quality and faster simulations independently of the kind of system, including space systems and radiation environments. The paper additionally gives an overview of diffusion effects and their mechanisms. For the modelling and derivation of the equations, the extended current equation is used; the quantity K represents the proposed drifting charge density vector. The extended diffusion equation was derived; it exhibits a quantising character and has a form similar to the Klein-Gordon equation. Such PDEs (partial differential equations) are analytically solvable given an initial distribution (Cauchy problem) and boundary conditions (Dirichlet boundary conditions). For a simpler structure, transfer functions for the B- and E-fields were calculated analytically. With the known discretised responses g₁(k·Ts) and g₂(k·Ts), the electric current or voltage may be calculated using a convolution, where g₁ is the direct response and g₂ the recursive response. The analytical results are accurate enough for calculating fields with diffusion effects. Within the scope of this work, a model for the consideration of the electromagnetic diffusion effects of arbitrary current waveforms has been developed. The advantage of the proposed diffusion calculation is its real-time capability, which is not readily achievable with the FEM programs available today. It makes sense in the further course of the research to use these methods and to investigate them thoroughly.
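
A minimal Python sketch of the convolution/recursion step described above, assuming a discretised direct response g1 (FIR part) and a recursive response g2 (IIR part); the coefficients, sampling period and input pulse are placeholders, not the analytically derived responses of the paper.

```python
import numpy as np
from scipy.signal import lfilter

# The response is assumed to follow y[k] = sum_i g1[i]*u[k-i] + sum_j g2[j]*y[k-j],
# i.e. a direct convolution part plus a recursion on past outputs.
Ts = 1e-6                                  # sampling period (s), assumed
t = np.arange(0.0, 2e-3, Ts)
u = np.where(t < 1e-3, 1.0, 0.0)           # arbitrary current "waveform": a 1 ms step pulse

g1 = np.array([0.2, 0.15, 0.1])            # direct (feed-forward) coefficients, placeholders
g2 = np.array([0.5, -0.1])                 # recursive coefficients, placeholders (stable)

# lfilter computes a[0]*y[k] = (b*u)[k] - a[1:]*y[past], so the recursion enters via 'a'.
b = g1
a = np.concatenate(([1.0], -g2))
y = lfilter(b, a, u)
print("response at the end of the pulse:", y[round(1e-3 / Ts) - 1])
```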

Keywords: advanced observer, electrodynamics, systems, diffusion, partial differential equations, solver

Procedia PDF Downloads 131
82 Drivetrain Comparison and Selection Approach for Armored Wheeled Hybrid Vehicles

Authors: Çağrı Bekir Baysal, Göktuğ Burak Çalık

Abstract:

Armored vehicles may have different traction layouts depending on terrain capability and mobility needs. Two main categories of layout can be distinguished: wheeled and tracked. Tracked vehicles have superior off-road capabilities, but what they gain in terrain performance they lose on the mobility front. Wheeled vehicles, on the other hand, do not have terrain capabilities as good as tracked vehicles, but they offer superior mobility in terms of top speed, range and agility. Conventional armored vehicles employ a diesel internal combustion engine (ICE) as the main power source. In these vehicles, the ICE is mechanically connected to the powertrain, so its rpm is determined by the speed and torque requested by the driver. ICE efficiency changes drastically with the required torque and speed, and conventional vehicles therefore suffer in terms of fuel consumption. Hybrid electric vehicles employ at least one electric motor in order to improve fuel efficiency. There are different types of hybrid vehicles, but the main types are series hybrid, parallel hybrid and series-parallel hybrid. These vehicles introduce an electric motor for traction and can also have a generator motor for range-extending purposes. Having an electric motor as the traction power source brings the flexibility of either using the ICE as an additional traction source while it is within its efficient operating range, or completely separating the ICE from traction and operating it solely with efficiency in mind. Hybrid configurations offer armored vehicles additional advantages beyond fuel efficiency: reduced heat signature, silent operation and prolonged stationary missions become possible thanks to the high-power battery pack present in the vehicle for the hybrid drivetrain. For these reasons, hybrid armored vehicles are becoming a target area for militaries and for vehicle suppliers. In order to have a sound starting point for a hybrid armored vehicle design, the hybrid drivetrain configuration has to be selected after a trade-off study. Such a study has to include vehicle mobility simulations as well as integration-level, vehicle-level and performance-level criteria. In this study, the different hybrid traction configurations possible for an 8x8 vehicle are compared using the above-mentioned criteria set. The criteria used to compare hybrid traction configurations are ease of application, cost, weight advantage, reliability, maintainability, redundancy and performance. Performance criterion scores have been defined with the help of vehicle simulations and tests, which also help determine the required tractive power for an armored vehicle under conditions such as trench and obstacle crossing and gradient climbing. With the method explained in this study, each configuration is assigned a score for each criterion, so the correct configuration can be selected objectively for every application. Key aspects of armored vehicles, namely mine protection and ballistic protection, will also be considered for the hybrid configurations. Results are expected to vary for different types of vehicle, but it is observed that longitudinal differential locking capability improves mobility and that a high motor count generally increases complexity.
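
A minimal Python sketch of the weighted criteria scoring described above; the criterion weights and candidate scores are invented placeholders, not the study's values.

```python
# Weighted trade-off scoring: each configuration gets a score per criterion,
# weighted by criterion importance (weights sum to 1.0). All numbers are illustrative.
criteria_weights = {
    "ease of application": 0.10, "cost": 0.15, "weight advantage": 0.10,
    "reliability": 0.15, "maintainability": 0.10, "redundancy": 0.15,
    "performance": 0.25,
}
candidates = {                      # scores 1 (worst) .. 10 (best), in the same criterion order
    "series hybrid":          [7, 6, 5, 7, 7, 8, 7],
    "parallel hybrid":        [6, 7, 7, 7, 6, 5, 6],
    "series-parallel hybrid": [5, 5, 6, 6, 5, 7, 8],
}
for name, scores in candidates.items():
    total = sum(w * s for w, s in zip(criteria_weights.values(), scores))
    print(f"{name:>24s}: weighted score {total:.2f}")
```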

Keywords: armored vehicles, electric drivetrain, electric mobility, hybrid vehicles

Procedia PDF Downloads 86
81 The Display of Age-Period/Age-Cohort Mortality Trends Using 1-Year Intervals Reveals Period and Cohort Effects Coincident with Major Influenza A Events

Authors: Maria Ines Azambuja

Abstract:

Graphic displays of Age-Period-Cohort (APC) mortality trends generally use data aggregated into 5- or 10-year intervals. Technology now allows one to process much larger amounts of data, and displaying occurrences in 1-year intervals is a logical first step towards higher-quality landscapes of temporal variation. Method: 1) comparison of UK mortality trends plotted in 10-, 5- and 1-year intervals; 2) comparison of UK and US mortality trends (period x age and cohort x age) displayed in 1-year intervals. Source: mortality data (period, 1x1, males, 1933-2012) downloaded from the Human Mortality Database into Excel files, where period x age and cohort x age graphics were produced. Age-specific trends were transformed from calendar years to birth-cohort years (cohort = period - age), instead of using the cohort 1x1 data available at the HMD, to facilitate the comparison of age-specific trends across calendar years and birth cohorts. Yearly live births, males, 1933 to 2012 (UK), were downloaded from the HFD. Influenza references are from the literature. Results: 1) the use of 1-year intervals unveiled previously unsuspected period, cohort and interacting period x cohort effects on all-cause mortality. 2) The UK and US figures showed variations associated with particular calendar years (1936, 1940, 1951, 1957-68, 1972) and, most surprisingly, with particular birth cohorts (1889-90 in the US, and 1900, 1918-19, 1940-41 and 1946-47 in both countries). The figures also showed ups and downs in age-specific trends initiated at particular birth cohorts (1900, 1918-19 and 1947-48) or particular calendar years (1968, 1972, 1977-78 in the US), variations at times restricted to just a range of ages (cohort x period interacting effects). Importantly, most of the identified "scars" (period and cohort) correlate with recorded occurrences of influenza A epidemics since the late 19th century. Conclusions: the use of 1-year intervals to describe APC mortality trends both increases the amount of information available, enhancing the opportunities for pattern recognition, and increases our capability to interpret those patterns by describing trends across smaller intervals of time (period or birth cohort). The US and UK mortality landscapes share many, but not all, of the "scars" and distortions suggested here to be associated with influenza epidemics. Different-sized effects of the wars are evident, both in mortality and in fertility. It would also be realistic to suppose that the preponderant influenza A viruses circulating in the UK and the US at the beginning of the 20th century might have differed, with long-term intergenerational consequences. Comparison with the live-birth trend (UK data) shows that birth-cohort scars clearly depend on cohort sizes relative to neighboring cohorts, which, if causally associated with influenza, would result from influenza-related fetal outcomes/selection. Fetal selection could introduce continuing modifications to population patterns of immune-inflammatory phenotypes that might give rise to "epidemic constitutions" favoring the occurrence of particular diseases. Comparative analysis of mortality landscapes may help us set straight the record of past circulation of influenza viruses and document associations between influenza recycling and fertility changes.
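
A minimal Python sketch of the cohort = period - age re-indexing used to turn a period x age table into a cohort x age table; the tiny rate table is invented, not HMD data.

```python
import numpy as np
import pandas as pd

# Invented period x age death-rate table at 1-year resolution (rows: calendar years).
periods = [2000, 2001, 2002]
ages = [60, 61, 62]
rates = pd.DataFrame(np.random.default_rng(1).uniform(0.01, 0.02, (3, 3)),
                     index=periods, columns=ages)

# Re-index each cell by birth cohort = period - age.
records = [{"cohort": period - age, "age": age, "rate": rates.loc[period, age]}
           for period in rates.index for age in rates.columns]
cohort_by_age = pd.DataFrame(records).pivot(index="cohort", columns="age", values="rate")
print(cohort_by_age)
```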

Keywords: age-period-cohort trends, epidemic constitution, fertility, influenza, mortality

Procedia PDF Downloads 231
80 Hydrogen Purity: Developing Low-Level Sulphur Speciation Measurement Capability

Authors: Sam Bartlett, Thomas Bacquart, Arul Murugan, Abigail Morris

Abstract:

Fuel cell electric vehicles have the potential to decarbonise road transport, create new economic opportunities, diversify national energy supply, and significantly reduce the environmental impacts of road transport. A potential issue, however, is that the catalyst used at the fuel cell cathode is susceptible to degradation by impurities, especially sulphur-containing compounds. A recent European Directive (2014/94/EU) stipulates that, from November 2017, all hydrogen provided to fuel cell vehicles in Europe must comply with the hydrogen purity specifications listed in ISO 14687-2; this includes reactive and toxic chemicals such as ammonia and total sulphur-containing compounds. This requirement poses great analytical challenges due to the instability of some of these compounds in calibration gas standards at relatively low amount fractions and the difficulty associated with measuring groups of compounds rather than individual compounds. Without available reference materials and analytical infrastructure, hydrogen refuelling stations will not be able to demonstrate compliance with the ISO 14687 specifications. The hydrogen purity laboratory at NPL provides world-leading, accredited purity measurements that allow hydrogen refuelling stations to evidence compliance with ISO 14687. Utilising state-of-the-art methods developed by NPL's hydrogen purity laboratory, including a novel method for measuring total sulphur compounds at 4 nmol/mol and a hydrogen impurity enrichment device, we provide the capabilities necessary to achieve these goals; an overview of these capabilities is given in this paper. As part of the EMPIR hydrogen co-normative project 'Metrology for sustainable hydrogen energy applications', NPL is developing a validated analytical methodology for the measurement of speciated sulphur-containing compounds in hydrogen at low amount fractions (pmol/mol to nmol/mol), to allow identification and measurement of individual sulphur-containing impurities in real samples of hydrogen (as opposed to a 'total sulphur' measurement). This is achieved by producing a suite of stable, gravimetrically prepared primary reference gas standards containing low amount fractions of sulphur-containing compounds (hydrogen sulphide, carbonyl sulphide, carbon disulphide, 2-methyl-2-propanethiol and tetrahydrothiophene have been selected for use in this study), used in conjunction with novel dynamic dilution facilities to enable generation of pmol/mol to nmol/mol level gas mixtures (a dynamic method is required, as compounds at these levels would be unstable in gas cylinder mixtures). Method development and optimisation are performed using gas chromatographic techniques assisted by cryo-trapping technologies and coupled with sulphur chemiluminescence detection to allow improved qualitative and quantitative analyses of sulphur-containing impurities in hydrogen. The paper will review state-of-the-art gas standard preparation techniques, including the use and testing of dynamic dilution technologies for reactive chemical components in hydrogen. Method development will also be presented, highlighting advances in the measurement of speciated sulphur compounds in hydrogen at low amount fractions.
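
A minimal Python sketch of the amount-fraction arithmetic behind a single dynamic dilution stage, assuming ideal mixing of a parent standard with pure hydrogen diluent; the flows and parent fraction are illustrative, not NPL's operating values.

```python
def diluted_amount_fraction(parent_fraction_nmol_mol, parent_flow_ml_min, diluent_flow_ml_min):
    """Amount fraction after one dynamic dilution stage, assuming ideal mixing of a
    parent standard with pure hydrogen diluent (simplified, for illustration only)."""
    total_flow = parent_flow_ml_min + diluent_flow_ml_min
    return parent_fraction_nmol_mol * parent_flow_ml_min / total_flow

# e.g. diluting a 100 nmol/mol H2S-in-hydrogen parent standard 1:99 gives ~1 nmol/mol.
print(f"{diluted_amount_fraction(100.0, 10.0, 990.0):.2f} nmol/mol")
```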

Keywords: gas chromatography, hydrogen purity, ISO 14687, sulphur chemiluminescence detector

Procedia PDF Downloads 226
79 Expanding Entrepreneurial Capabilities through Business Incubators: A Case Study of Idea Hub Nigeria

Authors: Kenechukwu Ikebuaku

Abstract:

Entrepreneurship has long been offered as the panacea for poor economic growth and high rates of unemployment. Business incubation is considered an effective means of enhancing entrepreneurial activities while engendering socio-economic development. The Information Technology Developers Entrepreneurship Accelerator (iDEA) is a software business incubation programme established by the Nigerian government as a means of boosting digital entrepreneurship activities and reducing unemployment in the country. This study assessed the contribution of iDEA Nigeria’s entrepreneurship programmes towards enhancing the capabilities of its tenants. Using the capability approach and the sustainable livelihoods approach, the study analysed the iDEA programmes’ contribution towards the expansion of participants’ entrepreneurial capabilities. Apart from identifying a set of entrepreneurial capabilities from both the literature and empirical analysis, the study went further to ascertain how iDEA incubation has helped to enhance those capabilities for its tenants. It also examined digital entrepreneurship as a valued functioning and as an intermediate functioning leading to other valuable functionings. Furthermore, the study examined gender as a conversion factor in digital entrepreneurship. Both qualitative and quantitative research methods were used for the study, and measurement of key variables was made. While the entire population was utilised to collect data for the quantitative research, purposive sampling was used to select respondents for semi-structured interviews in the qualitative research. However, only 40 beneficiaries agreed to take part in the survey, while 10 respondents were interviewed for the study. Responses collected from the questionnaires administered were subjected to statistical analysis using SPSS. The study developed indexes to measure the respondents’ perception of how iDEA programmes have enhanced their entrepreneurial capabilities. The Capabilities Enhancement Perception Index (CEPI) computed indicated that the respondents believed that iDEA programmes enhanced their entrepreneurial capabilities. While access to power supply and reliable internet have the highest positive deviations around the mean, negotiation skills and access to customers/clients have the highest negative deviations. These results were well supported by the findings of the qualitative analysis, in which the participants unequivocally narrated how the resources provided by iDEA aided them in their entrepreneurial endeavours. It was also found that iDEA programmes have a significant effect on the tenants’ access to networking opportunities, both with other emerging entrepreneurs and with established entrepreneurs. While assessing gender as a conversion factor, it was discovered that there was very low female participation within the digital entrepreneurship ecosystem. The root cause of this gender disparity was found to lie in unquestioned cultural beliefs and social norms which relegate women to a subservient position and to household duties. The findings also showed that many of the entrepreneurs could be considered opportunity-based entrepreneurs rather than necessity entrepreneurs, and that digital entrepreneurship is a valued functioning for iDEA tenants. With regard to the challenges facing digital entrepreneurship in Nigeria, infrastructural/institutional inadequacies, lack of funding opportunities, and unfavourable government policies were considered inimical to entrepreneurial capabilities in the country.
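
The abstract reports a Capabilities Enhancement Perception Index (CEPI) together with item deviations around the mean, but does not give the formula. The sketch below shows one plausible construction (item means from Likert responses and their deviations from the overall mean); the item names and scores are hypothetical placeholders, not the study's data.

```python
import statistics

# Hypothetical Likert responses (1-5) for a few capability items; not the study's data.
responses = {
    "access to power supply":      [5, 5, 4, 5, 4],
    "reliable internet":           [5, 4, 5, 4, 5],
    "negotiation skills":          [2, 3, 2, 3, 2],
    "access to customers/clients": [2, 2, 3, 2, 3],
}

item_means = {item: statistics.mean(scores) for item, scores in responses.items()}
overall_mean = statistics.mean(item_means.values())  # one plausible CEPI-style aggregate

for item, mean in item_means.items():
    deviation = mean - overall_mean                   # positive -> above-average enhancement
    print(f"{item:30s} mean={mean:.2f} deviation={deviation:+.2f}")
print(f"Overall index (mean of item means): {overall_mean:.2f}")
```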

Keywords: entrepreneurial capabilities, unemployment, business incubators, development

Procedia PDF Downloads 239
78 Assessment of Physical Learning Environments in ECE: Interdisciplinary and Multivocal Innovation for Chilean Kindergartens

Authors: Cynthia Adlerstein

Abstract:

The physical learning environment (PLE) has been considered, after family and educators, as the third teacher. There have been conflicting and converging viewpoints on the role of the physical dimensions of places to learn in facilitating educational innovation and quality. Despite the different approaches, PLE has been widely recognized as a key factor in the quality of the learning experience and in the levels of learning achievement in ECE. The conceptual frameworks of the field assume that PLE consists of a complex web of factors that shape the overall conditions for learning, and that much more interdisciplinary and complementary methodologies of research and development are required. Although the relevance of PLE attracts a broad international consensus, in Chile it remains under-researched and weakly regulated by public policy. Gaining deeper contextual understanding and more thoughtfully-designed recommendations requires the use of innovative assessment tools that cross cultural and disciplinary boundaries to produce new hybrid approaches and improvements. When considering a PLE-based change process for ECE improvement, a central question is what dimensions, variables and indicators could allow a comprehensive assessment of PLE in Chilean kindergartens. Based on a grounded theory social justice inquiry, we adopted a mixed method design that enabled a multivocal and interdisciplinary construction of data. By using in-depth interviews, discussion groups, questionnaires, and documental analysis, we elicited the PLE discourses of politicians, early childhood practitioners, experts in architectural design and ergonomics, ECE stakeholders, and 3- to 5-year-olds. A constant comparison method enabled the construction of the dimensions, variables and indicators through which PLE assessment is possible. Subsequently, the instrument was applied in a sample of 125 early childhood classrooms to test reliability (internal consistency) and validity (content and construct). As a result, an interdisciplinary and multivocal tool for assessing physical learning environments was constructed and validated for Chilean kindergartens. The tool is structured upon 7 dimensions (wellbeing, flexible, empowerment, inclusiveness, symbolically meaningful, pedagogically intentioned, institutional management), 19 variables and 105 indicators that are assessed through observation and registration on a mobile app. The overall reliability of the instrument is .938, while the consistency of each dimension varies between .773 (inclusive) and .946 (symbolically meaningful). The validation process, through expert opinion and factorial analysis (chi-square test), has shown that the dimensions of the assessment tool reflect the factors of physical learning environments. The constructed assessment tool for kindergartens highlights the significance of the physical environment in early childhood educational settings. The relevance of the instrument lies in its interdisciplinary approach to PLE and in its capability to guide innovative learning environments based on educational habitability. Though further analyses are required for concurrent validation and standardization, the tool has been considered by practitioners and ECE stakeholders to be an intuitive, accessible and remarkable instrument for raising awareness of PLE and of the equitable distribution of learning opportunities.
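
The reliability figures quoted (.938 overall, .773 to .946 per dimension) are internal-consistency coefficients; the abstract does not name the statistic, but such values are conventionally reported as Cronbach's alpha. Assuming that convention, the sketch below implements the standard alpha formula on a hypothetical response matrix, not the 125-classroom dataset.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                          # number of items (indicators)
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 6 classrooms x 4 indicators on a 1-4 scale (illustration only)
ratings = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
    [3, 3, 4, 4],
    [2, 3, 2, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.3f}")
```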

Keywords: Chilean kindergartens, early childhood education, physical learning environment, third teacher

Procedia PDF Downloads 358
77 Implementation of Hybrid Curriculum in Canadian Dental Schools to Manage Child Abuse and Neglect

Authors: Priyajeet Kaur Kaleka

Abstract:

Introduction: A dentist is often the first responder in the battle for a patient’s healthy body and may be the first health professional to observe signs of child abuse, be it physical, emotional, and/or sexual mistreatment. Therefore, it is an ethical responsibility of the dental clinician to detect and report suspected cases of child abuse and neglect (CAN). The main reasons for not reporting suspected cases of CAN are, with special emphasis on the third: 1) uncertainty about the diagnosis, 2) lack of knowledge of the reporting procedure, and 3) the fact that child abuse and neglect has remained somewhat a subject of ignorance among dental professionals because of a lack of advanced clinical training. Given these epidemic proportions, there is scope for further research on dental school curriculum design. Purpose: This study aimed to assess the knowledge and attitudes of dentists in Canada regarding the signs and symptoms of child abuse and neglect (CAN) and the reporting procedures, and whether the educational strategies followed by dental schools address this sensitive issue. In pursuit of that aim, this abstract summarizes the evidence related to this question. Materials and Methods: Data were collected through a specially designed questionnaire adapted and modified from the author’s previous cross-sectional study on CAN, which was conducted in Pune, India, in 2016 and is available in the PubMed database. Design: A random sample was drawn from the targeted population of registered dentists and dental students in Canada regarding their knowledge, professional responsibilities, and behavior concerning child abuse. The questionnaire was distributed to 200 members, of whom a total of 157 subjects formed the final sample for statistical analysis, yielding a response rate of 78.5%. Results: Despite having theoretical information on signs and symptoms, 55% of the participants indicated they were not confident in detecting cases of child physical abuse. 90% of respondents believed that recognizing and handling CAN cases should be a part of undergraduate training. Only 4.5% of the participants correctly identified all signs of abuse, owing to inadequate formal training in dental schools and workplaces. Although nearly 96.3% agreed that it is a dentist’s legal responsibility to report CAN, only a small percentage of the participants had reported an abuse case in the past, and 72% stated that the most common factor that might prevent a dentist from reporting a case was doubt over the diagnosis. Conclusion: The goal is to motivate dental schools to deal with this critical issue and provide their students with comprehensive training to strengthen their capability to care for and protect children. Educational institutions should make efforts to spread awareness among dental students regarding the management and tackling of CAN. Clinical Significance: There should be modifications in the dental school curriculum focusing on problem-based learning models to assist graduates in fulfilling their legal and professional responsibilities. CAN literacy should be incorporated into the dental curriculum, which will eventually benefit future dentists in breaking this intergenerational cycle of violence.

Keywords: abuse, child abuse and neglect, dentist knowledge, dental school curriculum, problem-based learning

Procedia PDF Downloads 201
76 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak

Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi

Abstract:

This paper investigates the software capability and computer-aided engineering (CAE) method for modelling the transient heat transfer process that occurs in the vehicle underhood region during the vehicle thermal soak phase. The heat retained from the soak period benefits the subsequent cold start with reduced friction loss for the second 14°C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, and therefore provides benefits for both CO₂ emission reduction and fuel economy. When the vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is a key factor in the thermal simulation of the engine bay for obtaining accurate fluid and metal temperature cool-down trajectories and for predicting the temperatures at the end of the soak period. Method development has been investigated in this study on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer thermal transient modelling method for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved as an inherently transient problem by the Lattice-Boltzmann Method (LBM) using the PowerFlow software. This was further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM method is capable of accurately handling extremely complicated transient flow behaviour on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with the vehicle testing data for the key fluid (coolant, oil) and metal temperatures. The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and further embedded within a holistic study of vehicle energy modelling and thermal management. It is also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and the flow development during this stage is vital for accurately predicting the heat transfer coefficients for the heat retention modelling. The developed method has demonstrated the software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfying accuracy and efficient computing time. The CAE method developed will allow the design of engine encapsulations to be integrated for improving fuel consumption and reducing CO₂ emissions in a timely and robust manner, aiding the development of low-carbon transport technologies.
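
The coupled PowerFlow/PowerTHERM workflow above is commercial software, so no attempt is made to reproduce it here. Purely as an illustration of the kind of cool-down trajectory being validated, the sketch below evaluates a single-node lumped-capacitance (Newton cooling) model over a 9-hour soak; the thermal mass, specific heat, and loss conductance are invented placeholders, and the model deliberately ignores the buoyancy-driven convection and radiation that the paper shows to be important.

```python
import math

# Hypothetical lumped-capacitance cool-down of a coolant mass during a 9 h soak.
# Not the coupled PowerFlow/PowerTHERM method of the paper; placeholder values throughout.
T_init, T_amb = 90.0, 14.0   # deg C: initial coolant and ambient (14°C test) temperature
m, cp = 8.0, 3800.0          # kg and J/(kg K): assumed coolant mass and specific heat
hA = 2.5                     # W/K: assumed overall heat-loss conductance (h * area)

tau = m * cp / hA            # thermal time constant in seconds
for hour in range(0, 10):
    t = hour * 3600.0
    T = T_amb + (T_init - T_amb) * math.exp(-t / tau)   # exponential cool-down trajectory
    print(f"t = {hour} h  T ~ {T:5.1f} degC")
```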

Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak

Procedia PDF Downloads 154
75 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach

Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman

Abstract:

Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using the differenced normalized burnt ratio (dNBR) method and spectral unmixing methods. The study area has rugged terrain with the presence of sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major reason for fires in this region is anthropogenic in nature, with the practice of human-induced fires for obtaining fresh leaves, scaring wild animals to protect agricultural crops, grazing practices within reserved forests, and igniting fires for cooking and other reasons. The fires caused by the above reasons affect a large area on the ground, necessitating its precise estimation for further management and policy making. In the present study, two approaches have been used for carrying out the burnt area analysis. The first approach uses the differenced normalized burnt ratio (dNBR) index, which differences the burn ratio values generated from the Short-Wave Infrared (SWIR) and Near Infrared (NIR) bands of pre- and post-fire Sentinel-2 images. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. It has been found that the dNBR produces good results in fire-affected areas with a homogeneous forest stratum and slopes of less than 5 degrees. However, in rugged terrain, where the landscape is strongly shaped by topographical variations, vegetation types, and tree density, the results may be largely influenced by the effects of topography, complexity in tree composition, fuel load composition, and soil moisture. Hence, burnt area assessment under such variations may not be effectively carried out using the dNBR approach, which is commonly followed for burnt area assessment over large areas. Hence, another approach attempted in the present study utilizes a spectral unmixing method in which each individual pixel is tested before an information class is assigned to it. The method uses a neural network approach utilizing Sentinel-2 bands. The training and testing data are generated from the Sentinel-2 data and the national field inventory, which are further used for generating outputs using machine learning tools. The analysis of the results indicates that fire-affected regions and their severity can be better estimated using spectral unmixing methods, which have the capability to resolve the noise in the data and can classify each individual pixel into the correct burnt/unburnt class.
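
For reference, the burn ratio used in the first approach is conventionally NBR = (NIR - SWIR) / (NIR + SWIR), differenced between pre- and post-fire images to give dNBR. The sketch below computes dNBR on small arrays; the choice of Sentinel-2 bands (B8A for NIR, B12 for SWIR) and the burnt-pixel threshold are common conventions assumed here, not values stated in the abstract.

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio; inputs are reflectance arrays of equal shape."""
    return (nir - swir) / (nir + swir + 1e-10)   # small epsilon avoids division by zero

def dnbr(nir_pre, swir_pre, nir_post, swir_post) -> np.ndarray:
    """Differenced NBR: larger positive values indicate a stronger post-fire drop in NBR."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Toy 2x2 reflectance tiles standing in for Sentinel-2 B8A (NIR) and B12 (SWIR) scenes
nir_pre   = np.array([[0.40, 0.38], [0.42, 0.41]])
swir_pre  = np.array([[0.12, 0.13], [0.11, 0.12]])
nir_post  = np.array([[0.18, 0.37], [0.20, 0.40]])
swir_post = np.array([[0.30, 0.14], [0.28, 0.13]])

delta = dnbr(nir_pre, swir_pre, nir_post, swir_post)
burnt_mask = delta > 0.27        # an often-used (assumed) threshold for burnt pixels
print(delta.round(2))
print(burnt_mask)
```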

Keywords: categorical data, log linear modeling, neural network, shifting cultivation

Procedia PDF Downloads 56
74 Capability of a Single Antigen to Induce Both Protective and Disease Enhancing Antibody: An Obstacle in the Creation of Vaccines and Passive Immunotherapies

Authors: Parul Kulshreshtha, Subrata Sinha, Rakesh Bhatnagar

Abstract:

This study was conducted using B. anthracis as a model pathogen. On infecting a host, B. anthracis secretes three proteins, namely, protective antigen (PA, 83 kDa), edema factor (EF, 89 kDa) and lethal factor (LF, 90 kDa). These three proteins are the components of the two anthrax toxins. PA binds to the cell surface receptors, namely, tumor endothelial marker (TEM) 8 and capillary morphogenesis protein (CMG) 2. TEM8 and CMG2 interact with LDL-receptor related protein (LRP) 6 for endocytosis of EF and LF. On entering the cell, EF acts as a calmodulin-dependent adenylate cyclase that causes a prolonged increase of cytosolic cyclic adenosine monophosphate (cAMP). LF is a metalloprotease that cleaves most isoforms of mitogen-activated protein kinase kinases (MAPKK/MEK) close to their N-terminus. By secreting these two toxins, B. anthracis ensures the death of the host. Once the systemic levels of the toxins rise, antibiotics alone cannot save the host. Therefore, toxin-specific inhibitors have to be developed. To this end, monoclonal antibodies have been developed for the neutralization of the toxic effects of anthrax toxins. We created hybridomas by using the spleens of mice that were actively immunized with rLFn (the recombinant N-terminal domain of lethal factor of B. anthracis) to obtain anti-toxin antibodies. Later on, a separate group of mice was immunized with rLFn to obtain a polyclonal control for the passive immunization studies of the monoclonal antibodies. This led to the identification of one cohort of rLFn-immunized mice that harboured disease-enhancing polyclonal antibodies. At the same time, the monoclonal antibodies from all the hybridomas were being tested. Two hybridomas secreted monoclonal antibodies (H8 and H10) that were cross-reactive with EF (edema factor) and LF (lethal factor), while the other two hybridomas secreted LF-specific antibodies (H7 and H11). The protective efficacy of H7, H8, H10 and H11 was investigated. H7, H8 and H10 were found to be protective. H11 was found to have disease-enhancing characteristics in vitro and in a mouse model of challenge with B. anthracis. In this study, the disease-enhancing character of the H11 monoclonal antibody and of the anti-rLFn polyclonal sera was investigated. Combination of H11 with the protective monoclonal antibodies (H8 and H10) reduced its disease-enhancing nature both in vitro and in vivo. However, combination of H11 with LETscFv (an scFv with VH and VL identical to H10 but lacking the Fc region) could not abrogate the disease-enhancing character of the H11 mAb. Therefore, it was concluded that the Fc portion was absolutely essential for the interaction of H10 with H11 that suppresses disease enhancement. Our study indicates that the protective potential of an antibody depends equally on its idiotype/antigen specificity and its isotype. A number of monoclonal and engineered antibodies are being explored as immunotherapeutics, but it is absolutely essential to characterize each one for its individual and combined protective potential. Although relatively new in the sphere of toxin-based diseases, the disease-enhancing nature of polyclonal as well as monoclonal antibodies is extremely important to characterize, because several anti-viral therapeutics and vaccines have failed in the face of this phenomenon. Passive immunotherapy thus needs to be well formulated to avoid any contraindications.

Keywords: immunotherapy, polyclonal, monoclonal, antibody-dependent disease enhancement

Procedia PDF Downloads 386
73 Exploring a Cross-Sectional Analysis Defining Social Work Leadership Competencies in Social Work Education and Practice

Authors: Trevor Stephen, Joshua D. Aceves, David Guyer, Jona Jacobson

Abstract:

As a profession, social work has much to offer individuals, groups, and organizations. A multidisciplinary approach to understanding and solving complex challenges, and a commitment to developing and training ethical practitioners, outline the characteristics of a profession embedded with leadership skills. This presentation will provide an overview of the historical context of social work leadership; examine social work as a unique leadership model composed of the qualities and theories that inform effective leadership capability as it relates to our code of ethics; reflect critically on leadership theories and their foundational comparison; and, finally, look at recommendations for implementation in social work education and practice. As with leadership in general, there is no universally accepted definition of social work leadership; however, some distinct traits and characteristics are essential. Recent studies help set the stage for this research proposal because they measure views on effective social work leadership among social work and non-social work leaders and followers. However, this research is interested in working backward from that approach and examining social workers' perspectives on their leadership preparedness based solely on social work training, competencies, values, and ethics. Social workers understand how to change complex structures and challenge resistance to change to improve the well-being of organizations and those they serve. Furthermore, previous studies align with the idea of practitioners assessing their skill and capacity to engage in leadership but not to lead. The research question seeks to answer whether social work training and competencies are sufficient to determine whether social workers believe they possess the capacity and skill to engage in leadership practice. Aim 1: Assess whether social workers have the capacity and skills to assume leadership roles. Aim 2: Evaluate how far the development of social workers is sufficient for defining leadership. This research intends to reframe the misconception that social workers do not possess the capacity and skills to be effective leaders. On the contrary, social work encompasses a framework dedicated to lifelong development and growth. Social workers must be skilled, competent, ethical, supportive, and empathic. These are all qualities and traits of effective leadership, and leaders work in relationship with others, embodying partnership and collaboration with followers and stakeholders. The proposed study is a cross-sectional quasi-experimental survey design that will include the distribution of a multi-level social work leadership model and assessment tool. The assessment tool aims to help define leadership in social work using a Likert scale model. A cross-sectional research design is appropriate for answering the research questions because the measurement survey will help gather data using a structured tool. Other than the proposed social work leadership measurement tool, there is no other mechanism based on social work theory and designed to measure the capacity and skill of social work leadership.

Keywords: leadership competencies, leadership education, multi-level social work leadership model, social work core values, social work leadership, social work leadership education, social work leadership measurement tool

Procedia PDF Downloads 174
72 Barriers to Business Model Innovation in the Agri-Food Industry

Authors: Pia Ulvenblad, Henrik Barth, Jennie Cederholm BjöRklund, Maya Hoveskog, Per-Ola Ulvenblad

Abstract:

The importance of business model innovation (BMI) is widely recognized. This is also valid for firms in the agri-food industry, which is closely connected to global challenges. Worldwide food production will have to increase by 70% by 2050, and the United Nations’ sustainable development goals prioritize research and innovation on food security and sustainable agriculture. Firms in the agri-food industry have opportunities to increase their competitive advantage through BMI. However, the process of BMI is complex, and the implementation of new business models is associated with a high degree of risk and failure. Thus, managers from all industries, as well as scholars, need to better understand how to address this complexity. Therefore, the research presented in this paper (i) explores different categories of barriers in the research literature on business models in the agri-food industry, and (ii) illustrates these categories of barriers with empirical cases. This study addresses the rather limited understanding of barriers to BMI in the agri-food industry through a systematic literature review (SLR) of 570 peer-reviewed journal articles that contained a combination of ‘BM’ or ‘BMI’ with agriculture-related and food-related terms (e.g. ‘agri-food sector’) published in the period 1990-2014. The study classifies the barriers into several categories and illustrates the identified barriers with ten empirical cases. Findings from the literature review show that barriers are mainly identified as outcomes. It can be assumed that a perceived barrier to growth is often initially exaggerated or underestimated before being challenged by appropriate measures or courses of action. What may be considered by the public mind to be a barrier could in reality be very different from an actual barrier that needs to be challenged. One way of addressing barriers to growth is to define barriers according to their origin (internal/external) and nature (tangible/intangible). The framework encompasses barriers related to the firm (internal, addressing in-house conditions) or to the industry or national level (external, addressing environmental conditions). Tangible barriers can include asset shortages in the area of equipment or facilities, while human resource deficiencies or a negative willingness towards growth are examples of intangible barriers. Our findings are consistent with previous research on barriers to BMI that has identified human factor barriers (individuals’ attitudes, histories, etc.); contextual barriers related to company and industry settings; and more abstract barriers (government regulations, value chain position, and weather). However, human factor barriers (and opportunities) related to family-owned businesses with idealistic values and attitudes that also own the real estate where the business is situated are more frequent in the agri-food industry than in other industries. This paper contributes by generating a classification of the barriers to BMI and by illustrating them with empirical cases. We argue that internal barriers, such as human factors, values and attitudes, are crucial to overcome in order to develop BMI; however, they can be as hard to overcome as, for example, institutional barriers such as government regulations. Implications for research and practice are to focus on cognitive barriers and to develop the BMI capability of the owners and managers of agri-food firms.

Keywords: agri-food, barriers, business model, innovation

Procedia PDF Downloads 236
71 Managing the Blue Economy and Responding to the Environmental Dimensions of a Transnational Governance Challenge

Authors: Ivy Chen XQ

Abstract:

This research places a much-needed focus on the conservation of the Blue Economy (BE) by focusing on the design and development of monitoring systems to track critical indicators of the status of the BE. In this process, local experiences provide insight into important community issues, as well as the necessity to cooperate and collaborate in order to achieve sustainable options. Research worldwide and industry initiatives over the last decade show that the exploitation of marine resources has resulted in a significant decrease in the share of the total allowable catch (TAC). The response has been to strengthen law enforcement, yet the results have shown that problems were related to poor policies, a lack of understanding of over-exploitation, biological uncertainty and political pressures. This reality, and other statistics that show a significant negative impact on the attainment of the Sustainable Development Goals (SDGs), warrants an emphasis on the development of national M&E systems in order to provide evidence-based information on the nature and scale of, especially, transnational fisheries crime and under-sea marine resources in the BE. In particular, a need exists to establish a compendium of relevant BE indicators to assess such impact against the SDGs by using selected SDG indicators for this purpose. The research methodology consists of a qualitative approach using ATLAS.ti, and a case study will be developed of illegal, unregulated and unreported (IUU) poaching and illegal wildlife trade (IWT) as a component of the BE, as it relates to the case of abalone in southern Africa and the Far East. This research project will make an original contribution through the analysis and comparative assessment of available indicators in the design process of M&E systems, and by developing indicators and monitoring frameworks to track critical trends and tendencies in the status of the BE, ensuring that specific objectives are aligned with the indicators of the SDG framework. The research will provide a set of recommendations to governments and stakeholders involved in such projects on lessons learned, as well as priorities for future research. The research findings will enable scholars, civil society institutions, donors and public servants to understand the capability of M&E systems and the importance of multi-level governance in the coordination of information management, together with knowledge management (KM) and M&E, at the international, regional, national and local levels. This coordination should follow a sustainable development management approach, based on addressing the socio-economic challenges to the potential and sustainability of the BE, with an emphasis on ecosystem resilience, social equity and resource efficiency. This research is timely, as the opportunities of the post-COVID-19 crisis recovery package can be grasped to set the economy on a path to sustainable development in line with the UN 2030 Agenda. The pandemic also raises global awareness of the need to eliminate IUU poaching and illegal wildlife trade (IWT).

Keywords: Blue Economy (BE), transnational governance, Monitoring and Evaluation (M&E), Sustainable Development Goals (SDGs)

Procedia PDF Downloads 175
70 Cost Efficient Receiver Tube Technology for Eco-Friendly Concentrated Solar Thermal Applications

Authors: M. Shiva Prasad, S. R. Atchuta, T. Vijayaraghavan, S. Sakthivel

Abstract:

The world is in need of efficient energy conversion technologies which are affordable, accessible, and sustainable, with an eco-friendly nature. Solar energy is one of the cornerstones of the world’s economic growth because of its abundance and zero carbon pollution. Among the various solar energy conversion technologies, solar thermal technology has attracted substantial renewed interest due to its diversity and compatibility with various applications. Solar thermal systems employ concentrators, tracking systems and heat engines for electricity generation, which lead to high cost and complexity in comparison with photovoltaics; however, the technology is compatible with distinct thermal energy storage capability and dispatchable electricity, which creates tremendous attraction. Apart from that, employing a cost-effective solar selective receiver tube in a concentrating solar thermal (CST) system improves the energy conversion efficiency and directly reduces the cost of the technology. In addition, the development of solar receiver tubes by low-cost methods which can offer high optical properties and corrosion resistance in an open-air atmosphere would be beneficial for low and medium temperature applications. In this regard, our work opens up an approach which has the potential to achieve cost-effective energy conversion. We have developed a highly selective tandem absorber coating through a facile wet chemical route by a combination of chemical oxidation, sol-gel, and nanoparticle coating methods. The developed tandem absorber coating has a gradient refractive index structure on stainless steel (SS 304) and exhibited high optical properties (α ≥ 0.95 and ε ≤ 0.14). The first absorber layer (Cr-Mn-Fe oxides) was developed by controlled oxidation of SS 304 in a chemical bath reactor. A second composite layer of ZrO2-SiO2 was applied on the chemically oxidized substrate by the sol-gel dip coating method to serve as an optical enhancing and corrosion resistant layer. Finally, an antireflective layer (MgF2) was deposited on the second layer to achieve > 95% absorption. The developed tandem layer exhibited good thermal stability up to 250 °C in open-air atmospheric conditions and superior corrosion resistance (withstanding > 200 h in the salt spray test (ASTM B117)). After the successful development of a coating with the targeted properties at laboratory scale, a prototype 1 m tube has been demonstrated with excellent uniformity and reproducibility. Moreover, it has been validated under standard laboratory test conditions as well as in field conditions, with a comparison against a commercial receiver tube. The presented strategy can be widely adapted to develop highly selective coatings for a variety of CST applications ranging from hot water and solar desalination to industrial process heat and power generation. The high-performance, cost-effective medium temperature receiver tube technology has attracted many industries, and the technology has recently been transferred to Indian industry.
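
The solar absorptance α quoted above is conventionally obtained by weighting the measured spectral reflectance against a reference solar spectrum such as ASTM G173 (AM1.5). The sketch below shows that weighting on a coarse, invented reflectance and irradiance table; it is an illustration of the calculation only, and a real evaluation would use the full tabulated spectrum and measured data for the coating.

```python
import numpy as np

# Coarse, illustrative solar spectral irradiance (W m^-2 nm^-1) and coating reflectance.
# Placeholders only: a real calculation uses the ASTM G173 table and measured R(lambda).
wavelength_nm = np.array([350, 500, 750, 1000, 1500, 2000, 2500])
irradiance    = np.array([0.6, 1.4, 1.2, 0.75, 0.30, 0.10, 0.05])
reflectance   = np.array([0.03, 0.03, 0.04, 0.05, 0.10, 0.25, 0.45])

absorptance = 1.0 - reflectance
# Solar-weighted absorptance: integral of (1 - R) * G over the integral of G
alpha_solar = np.trapz(absorptance * irradiance, wavelength_nm) / np.trapz(irradiance, wavelength_nm)
print(f"Solar-weighted absorptance alpha ~ {alpha_solar:.3f}")
```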

Keywords: concentrated solar thermal system, solar selective coating, tandem absorber, ultralow refractive index

Procedia PDF Downloads 90
69 PolyScan: Comprehending Human Polymicrobial Infections for Vector-Borne Disease Diagnostic Purposes

Authors: Kunal Garg, Louise Theusen Hermansan, Kanoktip Puttaraska, Oliver Hendricks, Heidi Pirttinen, Leona Gilbert

Abstract:

The Germ Theory (one infectious determinant equals one disease) has unarguably advanced our capability to diagnose and treat infectious diseases over the years. Nevertheless, the advent of technology, climate change, and volatile human behavior has brought about drastic changes in our environment, leading us to question the relevance of the Germ Theory in our day: will vector-borne disease (VBD) sufferers produce multiple immune responses when tested for multiple microbes? Vector-diseased patients producing multiple immune responses to different microbes would evidently suggest human polymicrobial infections (HPI). Current diagnostic tools are poorly equipped to incorporate the research findings that would aid in diagnosing patients for polymicrobial infections. This shortcoming has caused misdiagnosis at very high rates, consequently diminishing patients’ quality of life due to inadequate treatment. Equipped with state-of-the-art scientific knowledge, PolyScan intends to address the pitfalls in current VBD diagnostics. PolyScan is a multiplex and multifunctional enzyme-linked immunosorbent assay (ELISA) platform that can test for numerous VBD microbes and allow simultaneous screening for multiple types of antibodies. To validate PolyScan, Lyme borreliosis (LB) and spondyloarthritis (SpA) patient groups (n = 54 each) were tested for Borrelia burgdorferi, Borrelia burgdorferi round body (RB), Borrelia afzelii, Borrelia garinii, and Ehrlichia chaffeensis against IgM and IgG antibodies. LB serum samples were obtained from Germany and SpA serum samples were obtained from Denmark under the relevant ethical approvals. The SpA group represented the chronic LB stage because reactive arthritis (an SpA subtype) in the form of Lyme arthritis is linked to LB. It was hypothesized that patients from both groups would produce multiple immune responses, which as a consequence would evidently suggest HPI. It was also hypothesized that the proportion of multiple immune responses in the SpA patient group would be significantly larger than in the LB patient group for both antibodies. It was observed that 26% of LB patients and 57% of SpA patients produced multiple immune responses, in contrast to 33% of LB patients and 30% of SpA patients who produced solitary immune responses, when tested against IgM. Similarly, 52% of LB patients and an astounding 73% of SpA patients produced multiple immune responses, in contrast to 30% of LB patients and 8% of SpA patients who produced solitary immune responses, when tested against IgG. Interestingly, IgM immune dysfunction was also recorded in both patient groups. Atypically, 6% out of the 18% of LB patients who were unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Similarly, 12% out of the 19% of SpA patients who were unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Thus, the results not only supported the hypotheses but also suggested that IgM may atypically prevail longer than IgG. The PolyScan concept will aid clinicians in detecting early, persistent, late, polymicrobial, and immune dysfunction conditions linked to different VBDs. PolyScan provides a paradigm shift for the VBD diagnostic industry to follow that will drastically shorten patients’ time to receive adequate treatment.
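
Whether the multiple-response proportion in the SpA group is significantly larger than in the LB group can be checked with a standard two-proportion test. The sketch below reconstructs approximate IgM counts from the reported percentages (about 26% and 57% of 54 patients per group, rounded to whole patients); it illustrates the test rather than reproducing the study's own statistical analysis.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided normal tail
    return z, p_value

# Approximate counts reconstructed from the reported IgM percentages (n = 54 per group);
# rounding makes these figures illustrative rather than the study's exact data.
lb_multiple, spa_multiple = 14, 31      # ~26% and ~57% of 54 patients, respectively
z, p = two_proportion_z(lb_multiple, 54, spa_multiple, 54)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```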

Keywords: diagnostics, immune dysfunction, polymicrobial, TICK-TAG

Procedia PDF Downloads 333
68 Assessment of Biofilm Production Capacity of Industrially Important Bacteria under Electroinductive Conditions

Authors: Omolola Ojetayo, Emmanuel Garuba, Obinna Ajunwa, Abiodun A. Onilude

Abstract:

Introduction: A biofilm is a functional community of microorganisms associated with a surface or an interface. These adherent cells become embedded within an extracellular matrix composed of polymeric substances, i.e., biofilms refer to biological deposits consisting of both microbes and their extracellular products on biotic and abiotic surfaces. Despite their detrimental effects in medicine, biofilms, as a form of natural cell immobilization, have found several applications in biotechnology, such as in the treatment of wastewater, bioremediation and biodegradation, desulfurization of gas, and conversion of agro-derived materials into alcohols and organic acids. The means of enhancing immobilized cells have so far been chemically inductive, and this affects the medium composition and the final product. Physical factors, including electrical, magnetic, and electromagnetic flux, have shown potential for enhancing biofilms depending on the bacterial species, the nature and intensity of the emitted signals, the duration of exposure, and the substratum used. However, the concept of cell immobilisation by electrical and magnetic induction is still underexplored. Methods: To assess the effects of physical factors on biofilm formation, six American Type Culture Collection strains (Acetobacter aceti ATCC15973, Pseudomonas aeruginosa ATCC9027, Serratia marcescens ATCC14756, Gluconobacter oxydans ATCC19357, Rhodobacter sphaeroides ATCC17023, and Bacillus subtilis ATCC6633) were used. Standard culture techniques for bacterial cells were adopted. The natural autoimmobilisation potential of the test bacteria was assessed by simple biofilm ring formation in tubes, while crystal violet binding assay techniques were adopted to characterise biofilm quantity. Electroinduction of bacterial cells by direct current (DC) application in cell broth, static magnetic field exposure, and electromagnetic flux was carried out, and autoimmobilisation of cells in a biofilm pattern was determined on the various substrata tested, including wood, glass, steel, polyvinylchloride (PVC) and polyethylene terephthalate. The Biot-Savart law was used to quantify magnetic field intensity, and statistical analyses of the data obtained were carried out using analysis of variance (ANOVA) as well as other statistical tools. Results: Biofilm formation by the selected test bacteria was enhanced by the physical factors applied. Electromagnetic induction had the greatest effect on biofilm formation, with magnetic induction producing the least effect across all substrata used. Microbial cell-cell communication could be a possible means via which physical signals affected the cells in a polarisable manner. Conclusion: The enhancement of biofilm formation by bacteria using physical factors has shown that their inherent capability as a cell immobilization method can be further optimised for industrial applications. A possible relationship between the presence of voltage-dependent channels, mechanosensitive channels, and bacterial biofilms could shed more light on this phenomenon.
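
The abstract notes that the Biot-Savart law was used to quantify magnetic field intensity. For a circular coil the law reduces to B = μ0 N I / (2R) at the coil centre; the sketch below evaluates this closed form for assumed coil parameters, which are illustrative values rather than the study's apparatus settings.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def field_at_coil_centre(turns: int, current_a: float, radius_m: float) -> float:
    """Biot-Savart result for the flux density at the centre of a circular coil (tesla)."""
    return MU_0 * turns * current_a / (2 * radius_m)

# Illustrative coil: 100 turns, 0.5 A, 3 cm radius (assumed values, not from the study)
b_tesla = field_at_coil_centre(turns=100, current_a=0.5, radius_m=0.03)
print(f"B at coil centre ~ {b_tesla * 1000:.2f} mT")
```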

Keywords: bacteria, biofilm, cell immobilization, electromagnetic induction, substrata

Procedia PDF Downloads 190
67 Ectopic Osteoinduction of Porous Composite Scaffolds Reinforced with Graphene Oxide and Hydroxyapatite Gradient Density

Authors: G. M. Vlasceanu, H. Iovu, E. Vasile, M. Ionita

Abstract:

Herein, the synthesis and characterization of a highly porous chitosan-gelatin scaffold reinforced with graphene oxide (GO) and hydroxyapatite (HAp) and crosslinked with genipin were targeted. In tissue engineering, chitosan and gelatin are two of the most robust biopolymers, with wide applicability due to their intrinsic biocompatibility, biodegradability, low antigenicity, affordability, and ease of processing. HAp, owing to its exceptional activity in tuning cell-matrix interactions, is acknowledged for its capability to sustain cellular proliferation by promoting a bone-like native micro-environment for cell adjustment. Genipin is regarded as a top-class cross-linker, while graphene oxide (GO) is viewed as one of the most performant and versatile fillers. The composites, with the HAp/biopolymer ratio of natural bone, were obtained by cascaded sonochemical treatments, followed by uncomplicated casting methods and freeze-drying. Their structure was characterized by Fourier Transform Infrared Spectroscopy and X-ray Diffraction, while the overall morphology was investigated by Scanning Electron Microscopy (SEM) and micro-Computed Tomography (µ-CT). Following that, in vitro enzyme degradation was performed to identify the most promising compositions for the development of in vivo assays. Suitable GO dispersion within the biopolymer mix was ascertained, as nanolayer-specific signals were absent in both FTIR and XRD spectra, and the specific spectral features of the polymers persisted as the GO load was increased. Overall, the GO-induced material structuration, crystallinity variations, and chemical interactions of the compounds can be correlated with the physical features and bioactivity of each composite formulation. Moreover, the HAp distribution follows an auspicious density gradient tuned for hybrid osseous/cartilage architectures, which was mirrored in the mouse model tests. Hence, the synthesis route of this natural polymer blend/hydroxyapatite-graphene oxide composite material is anticipated to emerge as an influential formulation in bone tissue engineering. Acknowledgement: This work was supported by the project 'Work-based learning systems using entrepreneurship grants for doctoral and post-doctoral students' (Sisteme de invatare bazate pe munca prin burse antreprenor pentru doctoranzi si postdoctoranzi) - SIMBA, SMIS code 124705, and by a grant of the National Authority for Scientific Research and Innovation, Operational Program Competitiveness Axis 1 - Section E, Program co-financed from the European Regional Development Fund 'Investments for your future' under project number 154/25.11.2016, P_37_221/2015. The nano-CT experiments were possible due to the European Regional Development Fund through the Competitiveness Operational Program 2014-2020, Priority axis 1, ID P_36_611, MySMIS code 107066, INOVABIOMED.

Keywords: biopolymer blend, ectopic osteoinduction, graphene oxide composite, hydroxyapatite

Procedia PDF Downloads 104