Search results for: fully spatial signal processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8843

6413 Digitalization: The Uneven Geography of Information and Communication Technology (ICTs) Across Four Major States in India

Authors: Sanchari Mukhopadhyay

Abstract:

Today, almost the entire realm of human activity is becoming increasingly dependent on the power of information, where through ICTs it is now possible to overcome distances and avail various services at a few clicks. In principle, ICTs are thus expected to blur location-specific differences and affiliations of development and bring in an inclusive society in the wake of globalization. However, researchers and policy analysts eventually realized that ICTs are also generating inequality in spite of the hope for an integrated world and widespread social well-being. Regarding this unevenness, location plays a major role, as ICT development is often seen to be concentrated in pockets, leaving behind large tracts as underprivileged. Thus, understanding the spatial pattern of ICT development and distribution is significant for exploring the extent to which ICTs are fulfilling their promises or reinforcing the existing divisions. In addition, it is also profoundly crucial to investigate how regions are connecting and competing both locally and globally. The focus of the research paper is to evaluate the spatial structure of ICT-led development in India. Thereby, it attempts to understand the state-level (four selected states) pattern of ICT penetration, the pattern of diffusion across districts with respect to large urban centres, and the rural-urban disparity of technology adoption. It also tries to assess the changes in the access dynamics of ICTs as one moves away from a large metropolitan city towards the periphery. In brief, the analysis investigates the tendency towards uneven growth of development through the identification of the core areas of ICT advancement within the country and its diffusion from the core to the periphery. In order to assess the level of ICT development and rural-urban disparity across the districts of the selected states, two indices named the ICT Development Index and the Rural-Urban Digital Divide Index have been constructed. The study mostly draws on the latest Census (2011) of the country, supplemented by TRAI (Telecom Regulatory Authority of India) data in some cases.
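
A composite index of this kind is typically built by normalizing each indicator and combining the normalized values with weights. The sketch below is a minimal Python illustration of that construction, using invented district-level indicators and equal weights; the actual indicator set and weighting behind the ICT Development Index and the Rural-Urban Digital Divide Index may differ.

```python
import pandas as pd

# Hypothetical district-level indicators (not the study's data): shares of
# households with a computer, an internet connection, and a mobile phone.
districts = pd.DataFrame({
    "district": ["A", "B", "C", "D"],
    "computer": [12.4, 3.1, 25.0, 7.8],
    "internet": [8.9, 1.5, 19.7, 4.2],
    "mobile":   [71.0, 48.3, 88.5, 60.1],
})
indicators = ["computer", "internet", "mobile"]

# Min-max normalise each indicator to [0, 1] so they are comparable.
normalised = (districts[indicators] - districts[indicators].min()) / (
    districts[indicators].max() - districts[indicators].min()
)

# Equal-weight composite index (a common convention for ICT indices).
districts["ict_index"] = normalised.mean(axis=1)

# A simple rural-urban divide measure: difference between urban and rural
# index values for the same district (values here are invented).
urban = pd.Series([0.82, 0.40, 0.95, 0.55], index=districts.index)
rural = pd.Series([0.35, 0.12, 0.60, 0.20], index=districts.index)
districts["divide_index"] = urban - rural

print(districts[["district", "ict_index", "divide_index"]])
```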

Keywords: ICT development, diffusion, core-periphery, digital divide

Procedia PDF Downloads 333
6412 Machine Learning Approach for Mutation Testing

Authors: Michael Stewart

Abstract:

Mutation testing is a type of software testing, proposed in the 1970s, in which program statements are deliberately changed to introduce simple errors so that test cases can be validated by checking whether they detect those errors. Test cases are executed against the mutant code to determine whether at least one of them fails and thereby detects the error, giving confidence that the test suite is effective. One major issue with this type of testing is that generating and testing all possible mutations for complex programs becomes computationally intensive. This paper used reinforcement learning and parallel processing within the context of mutation testing to select mutation operators and test cases, reducing the computational cost of testing and improving test suite effectiveness. Experiments were conducted using sample programs to determine how well the reinforcement learning-based algorithm performed with one live mutation, multiple live mutations and no live mutations. The experiments, measured by mutation score, were used to update the algorithm and improve prediction accuracy. The performance was then evaluated on multiple-processor computers. With reinforcement learning, the mutation operators utilized were reduced by 50-100%.
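
The paper does not give implementation details; as a rough illustration only, the sketch below applies a single arithmetic mutation operator to a toy function, runs a test against the mutant, and uses a tiny value-update rule as a stand-in for the reinforcement-learning selection of mutation operators. The operator, reward scheme and all names are assumptions, not the author's code.

```python
import ast

SOURCE = "def add(a, b):\n    return a + b\n"

class SwapAddSub(ast.NodeTransformer):
    """Mutation operator: replace '+' with '-' (a classic arithmetic mutant)."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def mutant_is_killed(source: str) -> bool:
    """Return True if the (single) test case fails, i.e. the mutant is killed."""
    namespace = {}
    exec(compile(source, "<mutant>", "exec"), namespace)
    try:
        assert namespace["add"](2, 3) == 5   # the only test case here
    except AssertionError:
        return True                          # mutant detected (killed)
    return False                             # mutant survived

# Apply the operator and test the resulting mutant.
tree = SwapAddSub().visit(ast.parse(SOURCE))
ast.fix_missing_locations(tree)
killed = mutant_is_killed(ast.unparse(tree))

# Toy value update, standing in for RL-based operator selection: operators
# whose mutants tend to survive could be rewarded so that future selection
# concentrates effort on them and wasted executions are reduced.
q_values = {"SwapAddSub": 0.0}
reward = 0.0 if killed else 1.0
q_values["SwapAddSub"] += 0.1 * (reward - q_values["SwapAddSub"])
print(f"mutant killed: {killed}, operator value: {q_values['SwapAddSub']:.2f}")
```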

Keywords: automated-testing, machine learning, mutation testing, parallel processing, reinforcement learning, software engineering, software testing

Procedia PDF Downloads 198
6411 Control and Automation of Sensors in Metering System of Fluid

Authors: Abdelkader Harrouz, Omar Harrouz, Ali Benatiallah

Abstract:

This paper presents the essential definitions, roles and characteristics of the automation of metering systems. We discuss measurement, data acquisition and metrological control of a sensor signal from a dynamic metering system. We then present the control of the instruments of a fluid metering system, with a more detailed discussion of the reference standards.

Keywords: communication, metering, computer, sensor

Procedia PDF Downloads 555
6410 Image Fusion Based Eye Tumor Detection

Authors: Ahmed Ashit

Abstract:

Image fusion is a significant and efficient image processing method used for detecting different types of tumors. It has been used as an effective combination technique for obtaining high-quality images that combine the anatomy and physiology of an organ, and it is the main key in large biomedical machines for diagnosing cancer, such as the PET-CT machine. This thesis aims to develop an image analysis system for the detection of eye tumors. Different image processing methods are used to extract the tumor and then mark it on the original image. The images are first smoothed using median filtering. The background of the image is subtracted and the result is added back to the original, producing a brighter area of interest, the tumor area. The images are adjusted to increase the intensity of their pixels, which leads to clearer and brighter images. Once the images are enhanced, their edges are detected using Canny operators, resulting in a segmented image that comprises only the pupil and the tumor for the abnormal images, and the pupil only for the normal images that have no tumor. The normal and abnormal images were collected from two sources: “Miles Research” and “Eye Cancer”. The computerized experimental results show that the developed image fusion based eye tumor detection system is capable of detecting the eye tumor and segmenting it so that it can be superimposed on the original image.
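
The processing chain described above (median smoothing, background subtraction and re-addition, intensity adjustment, Canny edge detection) maps naturally onto standard image-processing calls. The sketch below is a minimal OpenCV rendering of that chain under assumed kernel sizes, thresholds and file names; the study's actual parameter values and fusion step are not reported in the abstract.

```python
import cv2

# Load an eye image in grayscale (path is a placeholder).
image = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
assert image is not None, "provide an input image"

# 1. Smooth with a median filter to suppress noise.
smoothed = cv2.medianBlur(image, 5)

# 2. Estimate the background with a large blur, subtract it, and add the
#    result back to the original to brighten the area of interest.
background = cv2.GaussianBlur(smoothed, (51, 51), 0)
foreground = cv2.subtract(smoothed, background)
brightened = cv2.add(image, foreground)

# 3. Adjust intensity (simple contrast stretch to the full 0-255 range).
adjusted = cv2.normalize(brightened, None, 0, 255, cv2.NORM_MINMAX)

# 4. Detect edges with the Canny operator; on abnormal images this should
#    retain the pupil and tumor boundaries, on normal images the pupil only.
edges = cv2.Canny(adjusted, 50, 150)

# 5. Superimpose the detected edges on the original image for inspection.
overlay = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
overlay[edges > 0] = (0, 0, 255)
cv2.imwrite("eye_overlay.png", overlay)
```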

Keywords: image fusion, eye tumor, canny operators, superimposed

Procedia PDF Downloads 363
6409 Renewable Energy Micro-Grid Control Using Microcontroller in LabVIEW

Authors: Meena Agrawal, Chaitanya P. Agrawal

Abstract:

Power systems are transforming and becoming smarter, with technological innovations enabling them to address sustainable energy needs, rising environmental concerns, economic benefits and quality requirements simultaneously. The advantages provided by the interconnection of renewable energy resources are becoming more viable and dependable with smart controlling technologies. The main limitation of most renewable resources, their diversity and intermittency, which cause problems in power quality, grid stability, reliability, security and so on, is being mitigated by these efforts. Optimal energy management by intelligent Micro-Grids at the distribution end of the power system has been recognized as necessary to accommodate sustainable renewable Distributed Energy Resources on a large scale across the power grid. All over the world, Smart Grids are now emerging as infrastructure upgrade programs of foremost concern. The hardware setup includes an NI cRIO-9022 Compact Reconfigurable Input-Output controller board connected to the PC on a LAN router, with three hardware modules. The real-time embedded controller is a reconfigurable device consisting of an embedded real-time processor for communication and processing, a reconfigurable chassis housing the user-programmable FPGA, eight hot-swappable I/O module slots, and graphical LabVIEW system design software. It has been employed for signal analysis, control, acquisition and logging of the renewable sources with LabVIEW Real-Time applications. The employed cRIO chassis controls the timing for the modules and handles communication with the PC over the USB, Ethernet, or 802.11 Wi-Fi buses. It combines modular I/O, real-time processing, and NI LabVIEW programmability. In the presented setup, five channels of the NI 9205 analog input module have been used for the analog voltage signals from the renewable energy sources, and four channels of the NI 9227 have been used for the analog current signals of those sources. For switching actions based on the programmed logic developed in software, a 4-channel module of electrically isolated single-pole single-throw electromechanical relays, each with an LED indicating the state of its channel, has been used to isolate the renewable sources on fault occurrence, as decided by the logic in the program. The module for the Ethernet-based data acquisition interface, an ENET-9163 Ethernet carrier connected to the LAN router for data acquisition from a remote source over Ethernet, also has the NI 9229 module installed. The LabVIEW platform has been employed for efficient data acquisition, monitoring and control. The control logic used in the program to operate the hardware switching of the fault relays is portrayed as a flowchart. A communication system has been successfully developed amongst the sources and loads connected on different computers using the Hypertext Transfer Protocol (HTTP) over an Ethernet local area network TCP/IP stack. Two main I/O interfacing clients control the switching of the renewable energy sources over the internet or an intranet. The paper presents experimental results of the described setup for intelligent control of the micro-grid for renewable energy sources, together with the control of the Micro-Grid by data acquisition and control hardware based on a microcontroller with a visual program developed in LabVIEW.

Keywords: data acquisition and control, LabVIEW, microcontroller cRIO, Smart Micro-Grid

Procedia PDF Downloads 333
6408 Applying the Quad Model to Estimate the Implicit Self-Esteem of Patients with Depressive Disorders: Comparing the Psychometric Properties with the Implicit Association Test Effect

Authors: Yi-Tung Lin

Abstract:

Researchers commonly assess implicit self-esteem with the Implicit Association Test (IAT). The IAT's measure, often referred to as the IAT effect, indicates the strength of automatic preferences for the self relative to others, which is often considered an index of implicit self-esteem. However, based on dual-process theory, the IAT does not rely entirely on the automatic process; it is also influenced by a controlled process. The present study, therefore, analyzed the IAT data with the Quad model, separating four processes underlying IAT performance: the likelihood that the automatic association is activated by the stimulus in the trial (AC); that a correct response is discriminated in the trial (D); that the automatic bias is overcome in favor of a deliberate response (OB); and that, when the association is not activated and the individual fails to discriminate a correct answer, a guessing or response bias drives the response (G). The AC and G processes are automatic, while the D and OB processes are controlled. The AC parameter is considered the strength of the association activated by the stimulus, which reflects what implicit measures of social cognition aim to assess. The stronger the automatic association between self and positive valence, the more likely it is to be activated by a relevant stimulus. Therefore, the AC parameter was used as the index of implicit self-esteem in the present study. Meanwhile, the relationship between implicit self-esteem and depression has not been fully investigated. In the cognitive theory of depression, it is assumed that the negative self-schema is crucial in depression. From this point of view, implicit self-esteem would be negatively associated with depression. However, the results among empirical studies are inconsistent. The aims of the present study were to examine the psychometric properties of the AC (i.e., test-retest reliability and its correlations with explicit self-esteem and depression) and compare them with those of the IAT effect. In the present study, 105 patients with depressive disorders completed the Rosenberg Self-Esteem Scale, the Beck Depression Inventory-II and the IAT at pretest. After at least 3 weeks, the participants completed the second IAT. The data were analyzed with the latent-trait multinomial processing tree model (latent-trait MPT) using the TreeBUGS package in R. The results showed that the latent-trait MPT had a satisfactory model fit. The effect sizes of the test-retest reliability of the AC and the IAT effect were medium (r = .43, p < .0001) and small (r = .29, p < .01), respectively. Only the AC showed a significant correlation with explicit self-esteem (r = .19, p < .05). Neither of the two indexes was correlated with depression. Collectively, the AC parameter was a satisfactory index of implicit self-esteem compared with the IAT effect. Also, the present study supported the finding that implicit self-esteem is not correlated with depression.
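
For readers unfamiliar with multinomial processing tree models, the Quad model expresses the probability of a correct IAT response as a mixture of the four processes. The sketch below is an illustrative, simplified Python rendering of those equations for compatible and incompatible trials with made-up parameter values (G is treated as a plain correct-guess probability for brevity); the study itself fit the hierarchical latent-trait version with the TreeBUGS package in R.

```python
def quad_correct_probabilities(ac: float, d: float, ob: float, g: float):
    """Simplified Quad-model probabilities of a correct IAT response.

    ac: activation of the automatic association (AC)
    d : detection of the correct response (D)
    ob: overcoming the automatic bias (OB)
    g : guessing the correct response when nothing else applies (G)
    """
    # Compatible block: an activated association already points to the
    # correct key, so AC alone guarantees a correct response.
    p_compatible = ac + (1 - ac) * (d + (1 - d) * g)

    # Incompatible block: an activated association must be overcome (OB)
    # while detection (D) succeeds for the response to be correct.
    p_incompatible = ac * d * ob + (1 - ac) * (d + (1 - d) * g)

    return p_compatible, p_incompatible

# Example with arbitrary parameter values (not estimates from the study).
p_comp, p_incomp = quad_correct_probabilities(ac=0.3, d=0.8, ob=0.6, g=0.5)
print(f"P(correct | compatible)   = {p_comp:.3f}")
print(f"P(correct | incompatible) = {p_incomp:.3f}")
```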

Keywords: cognitive modeling, implicit association test, implicit self-esteem, quad model

Procedia PDF Downloads 127
6407 Strong Ground Motion Characteristics Revealed by Accelerograms in Ms8.0 Wenchuan Earthquake

Authors: Jie Su, Zhenghua Zhou, Yushi Wang, Yongyi Li

Abstract:

The ground motion characteristics revealed by the analysis of acceleration records underlie the formulation and revision of seismic design codes in structural engineering. The China Digital Strong Motion Network recorded a large number of main-shock accelerograms from 478 permanent seismic stations during the Ms8.0 Wenchuan earthquake on 12 May 2008. These accelerograms provided a wealth of essential data for the analysis of the ground motion characteristics of the event. The spatial distribution characteristics, rupture directivity effect, and hanging-wall and footwall effects were studied based on these acceleration records. The results showed that the contours of horizontal peak ground acceleration and peak velocity were approximately parallel to the seismogenic fault, which demonstrates that the distribution of ground motion intensity was clearly controlled by the spatial extension direction of the seismogenic fault. Compared with the peak ground acceleration (PGA) recorded at sites away from which the rupture front propagated, the PGA recorded at sites toward which the rupture front propagated had larger amplitude and shorter duration, indicating a significant rupture directivity effect. At similar fault distances, the PGA on the hanging-wall is apparently greater than that on the footwall, while the peak velocity does not follow this rule. Taking into account the seismic intensity distribution of the Wenchuan Ms8.0 earthquake, the shape of the strong ground motion contours was significantly affected by the directivity effect in the regions with Chinese seismic intensity levels VI-VIII. However, in the regions where the Chinese seismic intensity level is equal to or greater than VIII, the mutual positional relationship between the strong ground motion contours and the surface outcrop trace of the fault was evidently influenced by the hanging-wall and footwall effect.

Keywords: hanging-wall and foot-wall effect, peak ground acceleration, rupture directivity effect, strong ground motion

Procedia PDF Downloads 350
6406 Geographic Information Systems and a Breadth of Opportunities for Supply Chain Management: Results from a Systematic Literature Review

Authors: Anastasia Tsakiridi

Abstract:

Geographic information systems (GIS) have been applied to numerous spatial problems, such as site research, land suitability, and demographic analysis. Besides, GIS has been applied in scientific fields like geography, health, and economics. In business studies, GIS has been used to provide insights and spatial perspectives on demographic trends, spending indicators, and network analysis. To date, information regarding the available usages of GIS in supply chain management (SCM) and how these analyses can benefit businesses is limited. A systematic literature review (SLR) of the last 5 years of peer-reviewed academic literature was conducted, aiming to explore the existing usages of GIS in SCM. The searches were performed in 3 databases (Web of Science, ProQuest, and Business Source Premier) and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. The analysis resulted in 79 papers. The results indicate that the existing GIS applications used in SCM fall in the following domains: a) network/transportation analysis (in 53 of the papers), b) location-allocation site search/selection (multiple-criteria decision analysis) (in 45 papers), c) spatial analysis (demographic or physical) (in 34 papers), d) combination of GIS and supply chain/network optimization tools (in 32 papers), and e) visualization/monitoring or building information modeling applications (in 8 papers). An additional categorization of the literature was conducted by examining the usage of GIS in the supply chain (SC) by business sector, as indicated by the volume of the papers. The results showed that GIS is mainly being applied in the SC of the biomass biofuel/wood industry (33 papers). Other industries that are currently utilizing GIS in their SC were the logistics industry (22 papers), the humanitarian/emergency/health care sector (10 papers), the food/agro-industry sector (5 papers), the petroleum/coal/shale gas sector (3 papers), the faecal sludge sector (2 papers), the recycling and product footprint industry (2 papers), and the construction sector (2 papers). The results were also broken down by the geography of the included studies and by the GIS software used, in order to provide critical business insights and suggestions for future research. The results showed that research case studies of GIS in SCM were conducted in 26 countries (mainly in the USA) and that the most prominent GIS software provider was the Environmental Systems Research Institute's ArcGIS (in 51 of the papers). This study is a systematic literature review of the usage of GIS in SCM. The results showed that GIS capabilities can offer substantial benefits in SCM decision-making by providing key insights into cost minimization, supplier selection, facility location, SC network configuration, and asset management. However, as presented in the results, only eight industries/sectors are currently using GIS in their SCM activities. These findings may offer essential tools to SC managers who seek to optimize SC activities and/or minimize logistic costs, and to consultants and business owners who want to make strategic SC decisions. Furthermore, the findings may be of interest to researchers aiming to investigate unexplored research areas where GIS may improve SCM.

Keywords: supply chain management, logistics, systematic literature review, GIS

Procedia PDF Downloads 142
6405 Effect of Different Knee-Joint Positions on Passive Stiffness of Medial Gastrocnemius Muscle and Aponeuroses during Passive Ankle Motion

Authors: Xiyao Shan, Pavlos Evangelidis, Adam Kositsky, Naoki Ikeda, Yasuo Kawakami

Abstract:

The human triceps surae (two bi-articular gastrocnemii and one mono-articular soleus) has aponeuroses on the posterior and anterior aspects of each muscle, where the anterior aponeuroses of the gastrocnemii adjoin the posterior aponeurosis of the soleus, possibly contributing to intermuscular force transmission between the gastrocnemii and the soleus. Since the mechanical behavior of these aponeuroses at different knee- and ankle-joint positions remains unclear, the purpose of this study was to clarify it through observations of the localized changes in passive stiffness of the posterior aponeurosis, muscle belly and adjoining aponeuroses of the medial gastrocnemius (MG) induced by different knee and ankle angles. Eleven healthy young males (25 ± 2 yr, 176.7 ± 4.7 cm, 71.1 ± 11.1 kg) participated in this study. Each subject took either a prone position on an isokinetic dynamometer while the knee joint was fully extended (K180) or a kneeling position while the knee joint was flexed to 90° (K90), in a randomized and counterbalanced order. The ankle joint was then passively moved by the dynamometer through a 50° range of motion (ROM), from 30° of plantar flexion (PF) to 20° of dorsiflexion (DF) at 2°/s, and the ultrasound shear-wave velocity was measured to obtain the shear moduli of the posterior aponeurosis, MG belly, and adjoining aponeuroses. The main findings were: 1) the shear modulus in K180 was significantly higher (p < 0.05) than in K90 for the posterior aponeurosis (across all ankle angles, 10.2 ± 5.7 kPa-59.4 ± 28.7 kPa vs. 5.4 ± 2.2 kPa-11.6 ± 4.1 kPa), the MG belly (from PF10° to DF20°, 9.7 ± 2.2 kPa-53.6 ± 18.6 kPa vs. 8.0 ± 2.7 kPa-9.5 ± 3.7 kPa), and the adjoining aponeuroses (across all ankle angles, 17.3 ± 7.8 kPa-80 ± 25.7 kPa vs. 12.2 ± 4.5 kPa-52.4 ± 23.0 kPa); 2) the shear modulus of the posterior aponeurosis significantly increased (p < 0.05) from PF10° to PF20° in K180, while the shear modulus of the MG belly significantly increased (p < 0.05) from 0° to PF20° only in K180, and the shear modulus of the adjoining aponeuroses significantly increased (p < 0.05) across the whole ankle ROM both in K180 and K90. These results suggest that different knee-joint positions affect not only the bi-articular gastrocnemius but also the mechanical behavior of the aponeuroses. In addition, in contrast to the gradual stiffening of the adjoining aponeuroses across the whole ankle ROM, the posterior aponeurosis was slack in the plantar-flexed positions and then stiffened gradually when the knee was fully extended. This suggests distinct, joint-position-dependent stiffening behaviors for the posterior and adjoining aponeuroses.

Keywords: aponeurosis, plantar flexion and dorsiflexion, shear modulus, shear wave elastography

Procedia PDF Downloads 190
6404 Artificial Habitat Mapping in Adriatic Sea

Authors: Annalisa Gaetani, Anna Nora Tassetti, Gianna Fabi

Abstract:

Hydroacoustic technology is an efficient tool for studying the sea environment: the most recent advances in artificial habitat mapping involve acoustic systems to investigate fish abundance, distribution and behavior in specific areas. Along with detailed high-coverage bathymetric mapping of the seabed, the high-frequency multibeam echosounder (MBES) offers the potential to detect the fine-scale distribution of fish aggregations, combining its ability to survey the seafloor and the water column at the same time. By surveying the distribution of fish schools around artificial structures, MBES makes it possible to evaluate how their presence modifies the natural biological habitat over time in terms of fish attraction and abundance. In recent years, artificial habitat mapping campaigns have been carried out by CNR-ISMAR in the Adriatic Sea: fish assemblages aggregating at offshore gas platforms and artificial reefs have been systematically monitored employing different kinds of methodologies. This work focuses on two case studies: a gas extraction platform founded at 80 m depth in the central Adriatic Sea, 30 miles off the coast of Ancona, and the concrete and steel artificial reef of Senigallia, deployed by CNR-ISMAR about 1.2 miles offshore at a depth of 11.2 m. By relating the MBES data (metrical dimensions of fish assemblages, shape, depth, density, etc.) with the results coming from other methodologies, such as experimental fishing surveys and underwater video cameras, it has been possible to investigate the biological assemblage attracted by the artificial structures, hypothesizing which species populate the investigated area and their spatial disposition around these structures. By processing MBES bathymetric and water-column data, 3D virtual scenes of the artificial habitats have been created, giving an intuitive depiction of their state and allowing their changes over time, in terms of dimensional characteristics and the depth disposition of fish schools, to be evaluated. These MBES surveys play a leading part in the general multi-year programs carried out by CNR-ISMAR with the aim of assessing potential biological changes linked to human activities.

Keywords: artificial habitat mapping, fish assemblages, hydroacoustic technology, multibeam echosounder

Procedia PDF Downloads 260
6403 Multimodal Integration of EEG, fMRI and Positron Emission Tomography Data Using Principal Component Analysis for Prognosis in Coma Patients

Authors: Denis Jordan, Daniel Golkowski, Mathias Lukas, Katharina Merz, Caroline Mlynarcik, Max Maurer, Valentin Riedl, Stefan Foerster, Eberhard F. Kochs, Andreas Bender, Ruediger Ilg

Abstract:

Introduction: So far, clinical assessments that rely on behavioral responses to differentiate coma states or even predict outcome in coma patients are unreliable, e.g., because of some patients' motor disabilities. The present study aimed to provide prognosis in coma patients using markers from the electroencephalogram (EEG), blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) and [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET). Unsupervised principal component analysis (PCA) was used for multimodal integration of the markers. Methods: With approval by the local ethics committee of the Technical University of Munich (Germany), 20 patients (aged 18-89) with severe brain damage were recruited through intensive care units at the Klinikum rechts der Isar in Munich and at the Therapiezentrum Burgau (Germany). On the day of EEG/fMRI/PET measurement (date I), patients (<3.5 months in coma) were grouped into the minimally conscious state (MCS) or vegetative state (VS) on the basis of their clinical presentation (coma recovery scale-revised, CRS-R). Follow-up assessment (date II) was also based on the CRS-R in a period of 8 to 24 months after date I. At date I, 63-channel EEG (Brain Products, Gilching, Germany) was recorded outside the scanner, and subsequently simultaneous FDG-PET/fMRI was acquired on an integrated Siemens Biograph mMR 3T scanner (Siemens Healthineers, Erlangen, Germany). Power spectral densities, permutation entropy (PE) and symbolic transfer entropy (STE) were calculated in/between frontal, temporal, parietal and occipital EEG channels. PE and STE are based on symbolic time series analysis and have already been introduced as robust markers separating wakefulness from unconsciousness in EEG during general anesthesia. While PE quantifies the regularity structure of the neighboring order of signal values (a surrogate of cortical information processing), STE reflects information transfer between two signals (a surrogate of directed connectivity in cortical networks). fMRI analysis was carried out using SPM12 (Wellcome Trust Center for Neuroimaging, University of London, UK). Functional images were realigned, segmented, normalized and smoothed. PET was acquired for 45 minutes in list mode. For absolute quantification of the brain's glucose consumption rate in FDG-PET, kinetic modelling was performed with Patlak's plot method. BOLD signal intensity in fMRI and glucose uptake in PET were calculated in 8 distinct cortical areas. PCA was performed over all markers from EEG/fMRI/PET. Prognosis (persistent VS and deceased patients vs. recovery to MCS/awake from date I to date II) was evaluated using the area under the curve (AUC) including bootstrap confidence intervals (CI, *: p<0.05). Results: Prognosis was reliably indicated by the first component of the PCA (AUC=0.99*, CI=0.92-1.00), showing a higher AUC than the best single markers (EEG: AUC<0.96*, fMRI: AUC<0.86*, PET: AUC<0.60). The CRS-R did not show predictive value (AUC=0.51, CI=0.29-0.78). Conclusion: In a multimodal analysis of EEG/fMRI/PET in coma patients, PCA led to a reliable prognosis. The impact of this result is evident, as clinical estimates of prognosis are at times inadequate and could be supported by quantitative biomarkers from EEG, fMRI and PET. Due to the small sample size, further investigations are required, in particular allowing supervised learning instead of the basic approach of unsupervised PCA.
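
Permutation entropy, one of the EEG markers used here, quantifies the regularity of ordinal patterns in a signal. The sketch below is a minimal Python implementation under standard settings (embedding order 3, lag 1, normalised to [0, 1]); the exact parameters used in the study are not given in the abstract.

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy of a 1-D signal.

    Counts how often each ordinal pattern of length `order` occurs among
    the embedded vectors of the signal, then returns the Shannon entropy
    of that pattern distribution normalised by log(order!).
    """
    x = np.asarray(x, dtype=float)
    n_patterns = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n_patterns):
        window = x[i : i + order * delay : delay]
        pattern = tuple(np.argsort(window))      # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n_patterns
    entropy = -np.sum(probs * np.log(probs))
    return entropy / log(factorial(order))       # normalise to [0, 1]

# Example: white noise yields values near 1, a monotone ramp near 0.
rng = np.random.default_rng(0)
print(round(permutation_entropy(rng.standard_normal(5000)), 3))
print(round(permutation_entropy(np.arange(5000)), 3))
```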

Keywords: coma states and prognosis, electroencephalogram, entropy, functional magnetic resonance imaging, machine learning, positron emission tomography, principal component analysis

Procedia PDF Downloads 339
6402 Target-Triggered DNA Motors and their Applications to Biosensing

Authors: Hongquan Zhang

Abstract:

Inspired by endogenous protein motors, researchers have constructed various synthetic DNA motors based on the specificity and predictability of Watson-Crick base pairing. However, the application of DNA motors to signal amplification and biosensing has been limited by low mobility and the difficulty of monitoring the walking process in real time. The objective of our work was to construct a new type of DNA motor, termed target-triggered DNA motors, that can walk for hundreds of steps in response to a single target binding event. To improve the mobility and processivity of DNA motors, we used gold nanoparticles (AuNPs) as scaffolds to build high-density, three-dimensional tracks. Hundreds of track strands are conjugated to a single AuNP. To enable DNA motors to respond to specific protein and nucleic acid targets, we adapted binding-induced DNA assembly into the design of the target-triggered DNA motors. In response to the binding of specific target molecules, DNA motors are activated to walk autonomously along the AuNP, powered by a nicking endonuclease- or DNAzyme-catalyzed cleavage of track strands. Each moving step restores the fluorescence of a dye molecule, enabling monitoring of the operation of DNA motors in real time. The motors can translate a single binding event into the generation of hundreds of oligonucleotides from a single nanoparticle. The motors have been applied to amplify the detection of proteins and nucleic acids in test tubes and live cells. The motors were able to detect low pM concentrations of specific protein and nucleic acid targets in homogeneous solutions without the need for separation. Target-triggered DNA motors are significant for broadening the applications of DNA motors to molecular sensing, cell imaging, molecular interaction monitoring, and the controlled delivery and release of therapeutics.

Keywords: biosensing, DNA motors, gold nanoparticles, signal amplification

Procedia PDF Downloads 84
6401 Process Optimization and Microbial Quality of Provitamin A-Biofortified Amahewu, a Non-Alcoholic Maize Based Beverage

Authors: Temitope D. Awobusuyi, Eric O. Amonsou, Muthulisi Siwela, Oluwatosin A. Ijabadeniyi

Abstract:

Provitamin A-biofortified maize has been developed to alleviate vitamin A deficiency, a major public health problem in developing countries. Amahewu, a non-alcoholic fermented maize-based beverage, is produced using white maize, which is deficient in vitamin A. In this study, suitable processing conditions for the production of amahewu using provitamin A-biofortified maize, and the microbial quality of the processed products, were evaluated. Provitamin A-biofortified amahewu was produced with reference to the traditional processing method. The processing variables were inoculum type (malted provitamin A maize, wheat bran, and a Lactobacillus mixed starter culture with either malted provitamin A maize or wheat bran) and concentration (0.5%, 1% and 2%). A total of four provitamin A-biofortified amahewu products were subjected after fermentation to different storage conditions: 4°C, 25°C and 37°C. pH and titratable acidity (TTA) were monitored throughout the storage period. Samples of provitamin A-biofortified amahewu were plated and observed every day for 5 days to assess the presence of aerobic and anaerobic spore formers, E. coli, Lactobacillus and mould. The addition of a starter culture substantially reduced the fermentation time (6 hours, pH 3.3) compared with fermentation without a starter culture (24 hours, pH 3.5). Lactobacillus was present from day 0 at all storage temperatures. The presence of aerobic spore formers and mould was observed on day 3. E. coli and anaerobic spore formers were not present throughout the storage period. Microbial growth was minimal at 4°C, while 25°C had higher counts and 37°C the highest colony counts. Throughout the storage period, the pH of provitamin A-biofortified amahewu was stable. Provitamin A-biofortified amahewu stored under refrigerated conditions (4°C) had better storability than that stored at 25°C or 37°C. The production and microbial quality of provitamin A-biofortified amahewu might be important in combating vitamin A deficiency.

Keywords: biofortification, fermentation, maize, vitamin A deficiency

Procedia PDF Downloads 432
6400 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances

Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim

Abstract:

This paper presents Power Quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms with mathematical applications and gathered field data are applied to the proposed models. The objective of this paper is to analyze PQ data with respect to monitoring, discriminating and evaluating the waveforms of power disturbances, in order to support preventive protection against system failures and the estimation of complex system problems. The examined signal filtering techniques are applied to field waveform noise and feature extraction. Using extraction and learning classification techniques, the efficiency of PQ disturbance recognition was verified, with a focus on interactive modeling methods. The waveforms of eight selected disturbances are modeled with parameters randomized within the IEEE 1159 PQ ranges. The ranges, parameters and weights are updated according to the field waveforms obtained. Along with voltages, currents go through the same process to obtain waveform features, apart from some ratings and filters. Changing loads cause distortion in the voltage waveform because they draw different patterns of current variation. In conclusion, PQ disturbances in the voltage and current waveforms show different types of variation patterns, and a modified technique based on symmetrical components in the time domain is proposed in this paper for PQ disturbance detection and subsequent classification. Our method is based on the fact that waveforms obtained from the suggested trigger conditions contain potential information for abnormality detection. The extracted features are sequentially applied to estimation and recognition learning modules for further studies.
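
As a rough illustration of the kind of parameterised waveform modelling described above, the sketch below generates a voltage sag with randomised magnitude and duration drawn from IEEE 1159-style ranges and extracts two simple features (minimum sliding RMS and sag duration). The sampling rate, ranges, feature set and trigger rule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

FS = 3200          # samples per second (assumed)
F0 = 60            # fundamental frequency in Hz
CYCLES = 30        # length of the synthetic record

def synth_sag(rng):
    """Synthesize one voltage-sag waveform with randomised parameters.

    IEEE 1159 characterises a sag as a drop to 0.1-0.9 pu lasting from
    0.5 cycle upwards; here the duration is kept within the record.
    """
    t = np.arange(0, CYCLES / F0, 1 / FS)
    depth = rng.uniform(0.1, 0.9)                    # remaining voltage in pu
    start = rng.uniform(5, 15) / F0                  # sag start time (s)
    duration = rng.uniform(0.5, 10) / F0             # sag duration (s)
    envelope = np.ones_like(t)
    envelope[(t >= start) & (t < start + duration)] = depth
    return t, envelope * np.sin(2 * np.pi * F0 * t)

def rms_per_cycle(v):
    """Sliding one-cycle RMS, a common feature/trigger quantity."""
    n = FS // F0
    return np.array([np.sqrt(np.mean(v[i:i + n] ** 2))
                     for i in range(0, len(v) - n)])

rng = np.random.default_rng(1)
t, v = synth_sag(rng)
rms = rms_per_cycle(v) * np.sqrt(2)                  # scale so nominal is ~1 pu
sagged = rms < 0.9                                   # simple trigger condition
print(f"minimum RMS: {rms.min():.2f} pu, "
      f"sag duration: {sagged.sum() / FS * 1000:.0f} ms")
```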

Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering

Procedia PDF Downloads 186
6399 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e., its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscometry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially considering that the applied multi-scale modelling approach does not involve parameter fitting to the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.
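
To give a flavour of the hybrid Monte Carlo step that reconstructs the branched microstructure, the sketch below samples a population of chains in which, at every propagation step, a long-chain-branching event occurs with a small fixed probability. The probabilities, chain lengths and termination rule are purely illustrative assumptions; in the actual approach these quantities are driven by the kinetic model's reaction rates and conditions.

```python
import random

def sample_chain(p_branch=0.004, p_term=0.002, rng=random.Random(42)):
    """Grow one chain step by step, counting long-chain branches.

    At each step the growing radical either terminates (with probability
    p_term), creates a long-chain branch (with probability p_branch), or
    simply adds another monomer. Real simulations derive these
    probabilities from kinetic rate coefficients and local concentrations.
    """
    length, branches = 0, 0
    while True:
        length += 1
        r = rng.random()
        if r < p_term:
            return length, branches
        if r < p_term + p_branch:
            branches += 1

# Sample an ensemble of chains and report ensemble averages, which can be
# compared against average properties from the kinetic model.
chains = [sample_chain() for _ in range(5000)]
mean_len = sum(length for length, _ in chains) / len(chains)
mean_br = sum(branches for _, branches in chains) / len(chains)
print(f"number-average chain length: {mean_len:.0f}")
print(f"average long-chain branches per chain: {mean_br:.2f}")
```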

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 124
6398 The Diagnostic Utility and Sensitivity of the Xpert® MTB/RIF Assay in Diagnosing Mycobacterium tuberculosis in Bone Marrow Aspirate Specimens

Authors: Nadhiya N. Subramony, Jenifer Vaughan, Lesley E. Scott

Abstract:

In South Africa, the World Health Organisation estimated 454,000 new cases of Mycobacterium tuberculosis (M.tb) infection (MTB) in 2015. Disseminated tuberculosis arises from the haematogenous spread and seeding of the bacilli in extrapulmonary sites. The gold standard for the detection of MTB in bone marrow is TB culture, which has an average turnaround time of 6 weeks. Histological examination of trephine biopsies to diagnose MTB also involves a time delay, owing mainly to the 5-7 day processing period prior to microscopic examination. Adding to the diagnostic delay is the non-specific nature of granulomatous inflammation, which is the hallmark of MTB involvement of the bone marrow. A Ziehl-Neelsen stain (which highlights acid-fast bacilli) is therefore mandatory to confirm the diagnosis but can take up to 3 days for processing and evaluation. Owing to this delay in diagnosis, many patients are lost to follow-up or remain untreated while results are awaited, thus encouraging the spread of undiagnosed TB. The Xpert® MTB/RIF (Cepheid, Sunnyvale, CA) is the molecular test used in the South African national TB program as the initial diagnostic test for pulmonary TB. This study investigates the optimisation and performance of the Xpert® MTB/RIF on bone marrow aspirate (BMA) specimens, a first since the introduction of the assay in the diagnosis of extrapulmonary TB. BMA specimens received for immunophenotypic analysis, as part of the investigation of disseminated MTB or the evaluation of cytopenias in immunocompromised patients, were used. Processing of BMA on the Xpert® MTB/RIF was optimised to ensure that bone marrow in EDTA and heparin did not inhibit the PCR reaction. Inactivated M.tb was spiked into clinical bone marrow specimens and distilled water (as a control). A volume of 500 µl and an incubation time of 15 minutes with sample reagent were investigated as the processing protocol. A total of 135 BMA specimens had sufficient residual volume for Xpert® MTB/RIF testing; however, 22 specimens (16.3%) were not included in the final statistical analysis because an adequate trephine biopsy and/or TB culture was not available. Xpert® MTB/RIF testing was not affected by BMA material in the presence of heparin or EDTA, but the overall detection of MTB in BMA was low compared with histology and culture. The sensitivity of the Xpert® MTB/RIF compared with both histology and culture was 8.7% (95% confidence interval (CI): 1.07-28.04%), and the sensitivity compared with histology only was 11.1% (95% CI: 1.38-34.7%). The specificity of the Xpert® MTB/RIF was 98.9% (95% CI: 93.9-99.7%). Although the Xpert® MTB/RIF generates a faster result than histology and TB culture and is less expensive than culture and drug susceptibility testing, the low sensitivity of the Xpert® MTB/RIF precludes its use for the diagnosis of MTB in bone marrow aspirate specimens and warrants alternative/additional testing to optimise the assay.
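
The sensitivity and specificity figures quoted above follow from a 2x2 comparison against the reference standard. As a reproducibility aid, the sketch below computes a sensitivity estimate with an exact (Clopper-Pearson) confidence interval using SciPy; the cell counts are illustrative values chosen only to be consistent with the reported 8.7% figure, since the study's actual 2x2 table is not reproduced in the abstract.

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# Illustrative counts: 2 Xpert-positive results among 23 reference-positive
# cases reproduces a sensitivity of 8.7%; these are not confirmed study data.
true_positive, reference_positive = 2, 23
sens = true_positive / reference_positive
lo, hi = clopper_pearson(true_positive, reference_positive)
print(f"sensitivity = {sens:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```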

Keywords: bone marrow aspirate, extrapulmonary TB, low sensitivity, Xpert® MTB/RIF

Procedia PDF Downloads 172
6397 Coffee Consumption Has No Acute Effects on Glucose Metabolism in Healthy Men: A Randomized Crossover Clinical Trial

Authors: Caio E. G. Reis, Sara Wassell, Adriana L. Porto, Angélica A. Amato, Leslie J. C. Bluck, Teresa H. M. da Costa

Abstract:

Background: Multiple epidemiologic studies have consistently reported an association between increased coffee consumption and a lowered risk of Type 2 Diabetes Mellitus. However, the mechanisms behind this finding have not been fully elucidated. Objective: We investigated the effect of coffee (caffeinated and decaffeinated) on glucose effectiveness and insulin sensitivity using the stable isotope minimal model protocol with oral glucose administration in healthy men. Design: Fifteen healthy men underwent a 5-arm randomized crossover, single-blind (researchers) clinical trial. They consumed decaffeinated coffee, caffeinated coffee (with and without sugar), and control beverages, water (with and without sugar), followed 1 hour later by an oral glucose tolerance test (75 g of available carbohydrate) with intravenous labeled dosing interpreted by the two-compartment minimal model (225 minutes). One-way ANOVA with Bonferroni adjustment was used to compare the effects of the tested beverages on glucose metabolism parameters. Results: Decaffeinated coffee resulted in 29% and 85% higher insulin sensitivity compared with caffeinated coffee and water, respectively, and caffeinated coffee showed 15% and 60% higher glucose effectiveness compared with decaffeinated coffee and water, respectively. However, these differences were not significant (p > 0.10). In the overall analysis (0-225 min), there were no significant differences in glucose effectiveness, insulin sensitivity, or the glucose and insulin areas under the curve between the groups. The beneficial effects of coffee did not seem to act in the short term (hours) on glucose metabolism parameters, particularly the insulin sensitivity indices. The benefits of coffee consumption occur in the long term (years), as has been shown by the reduction of Type 2 Diabetes Mellitus risk in epidemiological studies. The clinical relevance of the present findings is that there is no need to avoid coffee as the drink of choice for healthy people. Conclusions: The findings of this study demonstrate that the consumption of caffeinated and decaffeinated coffee, with or without sugar, has no acute effects on glucose metabolism in healthy men. Further research, including long-term interventional studies, is needed to fully elucidate the mechanisms behind the effects of coffee on reducing the risk of Type 2 Diabetes Mellitus.

Keywords: coffee, diabetes mellitus type 2, glucose, insulin

Procedia PDF Downloads 436
6396 Production and Evaluation of Mango Pulp by Using Ohmic Heating Process

Authors: Sobhy M. Mohsen, Mohamed M. El-Nikeety, Tarek G. Mohamed, Michael Murkovic

Abstract:

The present work aimed to study the use of ohmic heating in the processing of mango pulp compared with the conventional method. Mango pulp was processed using ohmic heating under the suitable conditions identified in the study. The physical, chemical and microbiological properties of the mango pulp were studied. The results showed that processing of mango pulp by either ohmic heating or the conventional method caused a decrease in the contents of TSS, total carbohydrates, total acidity and total sugars (reducing and non-reducing), while ohmic heating gave an increase in phenol content, ascorbic acid and carotenoids compared with the conventional process. The increase in the electrical conductivity of mango pulp during ohmic heating was due to the addition of electrolytes (salts) to increase the ion content and enhance the process. The results also indicate that mango pulp processed by ohmic heating contained more phenols, carbohydrates and vitamin C and less HMF than that produced by the conventional method. Total pectin and its fractions were slightly reduced by ohmic heating compared with the conventional method. Enzymatic assays showed a reduction in polyphenol oxidase (PPO) and polygalacturonase (PG) activity in mango pulp processed by the conventional method, whereas ohmic heating completely inhibited PPO and PG activities.

Keywords: ohmic heating, mango pulp, phenolics, carotenoids

Procedia PDF Downloads 455
6395 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá

Authors: Dayron Camilo Bermudez Mendoza

Abstract:

Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health, attributable to criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases, such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods like hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to invest in reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used vehicle sales websites. The COPERT model was utilized to ascertain emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, utilizing platforms like the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach for a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing analyses on the per-square meter property values in each city block. The upcoming conference anticipates the presentation and publication of these results, embodying a multidisciplinary knowledge integration and culminating in a master's thesis.
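
The hedonic pricing step regresses property values on emissions exposure plus structural and location controls. The sketch below shows a minimal log-linear version of such a regression with statsmodels on a synthetic dataset; the variable names, units, coefficients and model form are assumptions made purely for illustration, not the study's specification or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Synthetic block-level data standing in for the Bogotá database:
# price per square metre, local PM2.5 exposure attributed to mobile
# sources, and two simple controls (distance to centre, built area).
df = pd.DataFrame({
    "pm25": rng.uniform(10, 45, n),                  # ug/m3, assumed range
    "dist_centre_km": rng.uniform(0, 20, n),
    "built_area_m2": rng.uniform(40, 200, n),
})
df["price_m2"] = np.exp(
    8.0 - 0.010 * df["pm25"] - 0.02 * df["dist_centre_km"]
    + 0.002 * df["built_area_m2"] + rng.normal(0, 0.15, n)
)

# Log-linear hedonic model: the pm25 coefficient is read as the approximate
# fractional change in price per unit increase in exposure.
model = smf.ols("np.log(price_m2) ~ pm25 + dist_centre_km + built_area_m2",
                data=df).fit()
print(model.params["pm25"])          # recovers roughly -0.010 on this data
```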

Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility

Procedia PDF Downloads 58
6394 Perception of Nursing Students’ Engagement With Emergency Remote Learning During the COVID-19 Pandemic

Authors: Jansirani Natarajan, Mickael Antoinne Joseph

Abstract:

The COVID-19 pandemic interrupted face-to-face education and forced universities into an emergency remote teaching (ERT) curriculum over a short duration. This abrupt transition in the Spring 2020 semester left both faculty and students without proper preparation for continuing higher education in an online environment. Online learning took place in different formats, including fully synchronous, fully asynchronous, and blended, delivered in our university through the e-learning platform MOODLE. Studies have shown that students' engagement is a critical factor for optimal online teaching. Very few studies have assessed online engagement with ERT during the COVID-19 pandemic. Purpose: Therefore, this study sought to understand how the sudden transition to emergency remote teaching impacted nursing students' engagement with online courses in a Middle Eastern public university. Method: A cross-sectional descriptive research design was adopted in this study. Data were collected through a self-reported online survey using Dixson's online student engagement questionnaire from a sample of 177 nursing students after the ERT learning semester. Results: The maximum possible engagement score was 95, and the maximum scores in the domains of skills engagement, emotional engagement, participation engagement, and performance engagement were 30, 25, 30, and 10, respectively. Dixson (2010) noted that a mean item score of ≥3.5 (total score of ≥66.5) represents a highly engaged student. The majority of the participants were females (71.8%) and 84.2% were regular BSN students. Most of them (32.2%) were second-year students and 52% had a CGPA between 2 and 3. Most participants (56.5%) had low engagement scores with ERT learning during the COVID lockdown. Among the four engagement domains, 78% had low engagement scores for the participation domain. No significant association was found between engagement and the demographic characteristics of the participants. Conclusion: The findings supported the importance of engaging students in all four categories: skills, emotional, performance, and participation. Based on the results, training sessions were organized for faculty on various strategies for engaging nursing students in all domains by using the facilities available in MOODLE (the online e-learning platform). The study also added value as a dashboard of information regarding ERT for administrators and nurse educators, supporting the introduction of numerous active learning strategies to improve the quality of teaching and learning for nursing students in the university.

Keywords: engagement, perception, emergency remote learning, COVID-19

Procedia PDF Downloads 63
6393 Using a Character’s Inner Monologue for Song Analysis

Authors: Robert Roznowski

Abstract:

The thought process of a character is never more evident than when he or she is singing alone onstage. The composer scores the emotional state and the lyricist voices the inner conflict as the character shares her or his deepest feelings with an audience. It is at these moments that a character may be thought of as voicing her or his inner monologue. Using examples from several musical theatre songs, this presentation will look at a codified approach to analyzing a song from a more psychological perspective. Using clues from the score, traditional character analysis, and a psychologically based scoring method, an actor may more fully inhabit and express the sung and unsung thoughts of the character. The approach yields a richer and more complex way of acting the song.

Keywords: acting, analysis, musical theatre, psychology

Procedia PDF Downloads 479
6392 Use of the Gas Chromatography Method for Hydrocarbons' Quality Evaluation in the Offshore Fields of the Baltic Sea

Authors: Pavel Shcherban, Vlad Golovanov

Abstract:

Currently, there is active geological exploration and development of the shelf subsoil of the Kaliningrad region. To carry out a comprehensive and accurate assessment of the volumes and degree of extraction of hydrocarbons from open deposits, it is necessary not only to establish a number of geological and lithological characteristics of the structures under study, but also to determine the oil quality, viscosity, density and fractional composition as accurately as possible. Within the scope of the work considered, gas chromatography is one of the most productive methods, allowing a significant amount of initial data to be generated rapidly. The article examines the application of the gas chromatography method for determining the chemical characteristics of the hydrocarbons of the Kaliningrad shelf fields, as well as a correlation-regression analysis of these parameters in comparison with the previously obtained chemical characteristics of hydrocarbon deposits located onshore in the region. In the course of the research, a number of methods of mathematical statistics and computer processing of large data sets have been applied, which make it possible to evaluate the similarity of the deposits, to refine the amount of reserves, and to make a number of assumptions about the genesis of the hydrocarbons under analysis.

Keywords: computer processing of large databases, correlation-regression analysis, hydrocarbon deposits, method of gas chromatography

Procedia PDF Downloads 157
6391 Quality Assurances for an On-Board Imaging System of a Linear Accelerator: Five Months Data Analysis

Authors: Liyun Chang, Cheng-Hsiang Tsai

Abstract:

To ensure that radiation is delivered precisely to the target in cancer patients, linear accelerators are equipped with a pretreatment on-board imaging system, through which the patient setup is verified before the daily treatment. New-generation radiotherapy using beam-intensity modulation, which usually involves treatments with steep dose gradients, is claimed to achieve both a higher degree of dose conformity in the targets and a further reduction of toxicity in normal tissues. However, this benefit is counterproductive if the beam is delivered imprecisely. To avoid irradiating critical organs or normal tissues rather than the target, it is very important to carry out quality assurance (QA) of this on-board imaging system. The QA of the On-Board Imager® (OBI) system of one Varian Clinac-iX linear accelerator was performed through our procedures, modified from a relevant report and AAPM TG-142. Two image modalities of the OBI system, 2D radiography and 3D cone-beam computed tomography (CBCT), were examined. Daily and monthly QA was executed for five months in the categories of safety, geometrical accuracy and image quality. A marker phantom and a blade calibration plate were used for the QA of geometrical accuracy, while the Leeds phantom and the Catphan 504 phantom were used in the QA of radiographic and CBCT image quality, respectively. The reference images were generated with a GE LightSpeed CT simulator and an ADAC Pinnacle treatment planning system. Finally, the image quality was analyzed via an OsiriX medical imaging system. For the geometrical accuracy test, the average deviations of the OBI isocenter in each direction are less than 0.6 mm with uncertainties less than 0.2 mm, while all the other items have displacements of less than 1 mm. For radiographic image quality, the spatial resolution is 1.6 lp/cm with contrasts less than 2.2%. The spatial resolution, low contrast, and HU homogeneity of CBCT are better than 6 lp/cm, less than 1%, and within 20 HU, respectively. All tests are within the criteria, except that the HU value of Teflon measured with the full-fan mode exceeds the suggested value, which could be due to its own high HU value and needs to be rechecked. The OBI system in our facility was thereby demonstrated to be reliable with stable image quality. QA of the OBI system is necessary to achieve the best treatment for each patient.

Keywords: CBCT, image quality, quality assurance, OBI

Procedia PDF Downloads 298
6390 Virtual 3D Environments for Image-Based Navigation Algorithms

Authors: V. B. Bastos, M. P. Lima, P. R. G. Kurka

Abstract:

This paper addresses the creation of virtual 3D environments for the study and development of mobile robot image-based navigation algorithms and techniques, which need to operate robustly and efficiently. These algorithms can be tested physically, by conducting experiments on a prototype, or by numerical simulations. Current simulation platforms for robotic applications do not have flexible and up-to-date models for image rendering, being unable to reproduce complex light effects and materials. Thus, it is necessary to create a test platform that integrates sophisticated simulated representations of real environments for navigation with data and image processing. This work proposes the development of a high-level platform for building 3D model environments and testing image-based navigation algorithms for mobile robots. Techniques for applying texture and lighting effects were used so that the rendered images accurately represent their real-world counterparts. The application will integrate image processing scripts, trajectory control, dynamic modeling and simulation techniques for physics representation and picture rendering with the open-source 3D creation suite Blender.
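
A platform like the one described can script Blender directly through its Python API (bpy) to move a simulated camera along a trajectory and render the images that the navigation algorithm will consume. The sketch below is a minimal, assumption-laden example of that idea: it must be run inside Blender, it expects a scene containing the default object named "Camera", and the straight-line waypoints are invented; it is not the authors' actual toolchain.

```python
# Run inside Blender (its bundled Python interpreter exposes the bpy module).
import bpy

scene = bpy.context.scene
camera = bpy.data.objects["Camera"]        # assumes the default camera exists
scene.camera = camera
scene.render.resolution_x = 640
scene.render.resolution_y = 480

# A short straight-line trajectory; a navigation test would instead use the
# waypoints produced by the robot's motion model.
waypoints = [(-2.0 + 0.5 * i, -5.0, 1.0) for i in range(8)]

for i, (x, y, z) in enumerate(waypoints):
    camera.location = (x, y, z)            # move the simulated robot camera
    scene.render.filepath = f"//frames/view_{i:03d}.png"
    bpy.ops.render.render(write_still=True)  # each rendered frame would be fed
                                             # to the image-based navigation code
```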

Keywords: simulation, visual navigation, mobile robot, data visualization

Procedia PDF Downloads 255
6389 Assessment of Tidal Influence on Spatial and Temporal Variations of Water Quality in Masan Bay, Korea

Authors: S. J. Kim, Y. J. Yoo

Abstract:

Slack-tide sampling was carried out at seven stations at high and low tide over a tidal cycle, in summer (July, August, September) and fall (October) 2016, to determine the differences in water quality according to tide in Masan Bay. The data were analyzed by Pearson correlation and factor analysis. The mixing state of all the water quality components investigated is well explained by their correlation with salinity (SAL). Turbidity (TURB), dissolved silica (DSi), nitrite and nitrate nitrogen (NNN) and total nitrogen (TN), which enter the bay from the streams and have no internal source or sink reactions, showed a strong negative correlation with SAL at low tide, indicating conservative mixing. In contrast, in summer and fall, dissolved oxygen (DO), hydrogen sulfide (H2S) and chemical oxygen demand with KMnO4 (CODMn) of the surface and bottom water, which are sensitive to internal source and sink reactions, showed no significant correlation with SAL at either high or low tide. The remaining water quality parameters showed a conservative or a non-conservative mixing pattern depending on the mixing characteristics at high and low tide, determined by the relationship between changes in flushing time and changes in the water quality characteristics of the end-members in the bay. Factor analysis performed on the data sets of concentration differences between high and low tide helped to identify their principal latent variables. The concentration differences varied spatially and temporally. Principal factor (PF) score plots for each monitoring situation showed strong associations between the variations and the monitoring sites. At sampling station 1 (ST1), temperature (TEMP), SAL, DSi, TURB, NNN and TN of the surface water in summer; TEMP, SAL, DSi, DO, TURB, NNN, TN, reactive soluble phosphorus (RSP) and total phosphorus (TP) of the bottom water in summer; TEMP, pH, SAL, DSi, DO, TURB, CODMn, particulate organic carbon (POC), ammonia nitrogen (AMN), NNN, TN and fecal coliform (FC) of the surface water in fall; and TEMP, pH, SAL, DSi, H2S, TURB, CODMn, AMN, NNN and TN of the bottom water in fall commonly emerged as the most significant parameters, with large concentration differences between high and low tide. At the other stations, the significant parameters differed according to the spatial and temporal variations of the mixing pattern in the bay. In fact, no estuary always maintains steady-state flow conditions. The mixing regime of an estuary may change at any time from linear to non-linear because of changes in flushing time arising from the combination of hydrogeometric properties, freshwater inflow and tidal action; furthermore, changes in end-member conditions due to internal sinks and sources make the occurrence of concentration differences inevitable. Therefore, when investigating the water quality of an estuary, it is necessary to adopt a sampling method that takes the tide into account in order to obtain representative average water quality data.
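
A minimal sketch of the statistical workflow described above (Pearson correlation against salinity, then factor analysis of the high-low tide concentration differences) is given below; the file names, parameter columns, and number of factors are assumptions, and the authors' exact settings (e.g., rotation method) are not specified in the abstract.

```python
import pandas as pd
from scipy.stats import pearsonr
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

# One row per station and sampling date; column names are assumed.
low_tide = pd.read_csv("masan_bay_low_tide.csv")          # hypothetical file
diff = pd.read_csv("masan_bay_high_low_differences.csv")  # hypothetical file

# Conservative-mixing check: correlation of each parameter with salinity.
for param in ["TURB", "DSi", "NNN", "TN"]:
    r, p = pearsonr(low_tide["SAL"], low_tide[param])
    print(f"{param}: r = {r:.2f}, p = {p:.3f}")

# Factor analysis of the high-low tide concentration differences.
X = StandardScaler().fit_transform(diff.values)
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(X)   # principal factor (PF) scores per sample
loadings = fa.components_      # parameter loadings on each latent factor
```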

Keywords: conservative mixing, end-member, factor analysis, flushing time, high and low tide, latent variables, non-conservative mixing, slack-tide sampling, spatial and temporal variations, surface and bottom water

Procedia PDF Downloads 130
6388 Positivity Rate of Person under Surveillance among Institut Jantung Negara’s Patients with Various Vaccination Statuses in the First Quarter of 2022, Malaysia

Authors: Mohd Izzat Md. Nor, Norfazlina Jaffar, Noor Zaitulakma Md. Zain, Nur Izyanti Mohd Suppian, Subhashini Balakrishnan, Geetha Kandavello

Abstract:

During the Coronavirus (COVID-19) pandemic, Malaysia focused on building herd immunity by introducing vaccination programs into the community. Hospital Standard Operating Procedures (SOP) were developed to prevent inpatient transmission. Objective: In this study, we focus on the rate at which inpatient Persons Under Surveillance (PUS) became COVID-19 positive, compare it with the national rate, and examine the outcomes of patients who became COVID-19 positive in relation to their vaccination status. Methodology: This is a retrospective observational study carried out from 1 January until 30 March 2022 at Institut Jantung Negara (IJN). A total of 5,255 patients were admitted during the study period. A pre-admission Polymerase Chain Reaction (PCR) swab was done for all patients, and patients with a positive PCR on pre-admission screening were excluded. Patients who were exposed to COVID-19-positive staff or patients during hospitalization were defined as PUS and were quarantined and monitored for potential COVID-19 infection. The frequency and risk level of their exposure (WHO definition) were recorded. A repeat PCR swab was done for PUS patients who showed clinical deterioration, with or without COVID-19 symptoms, and on the last day of quarantine. The severity of COVID-19 infection was defined using categories 1-5A. Vaccination status was recorded for all patients, who were divided into three groups: fully immunised, partially immunised, and unvaccinated. We analyzed the rate at which PUS patients became COVID-19 positive, their outcomes, and the correlation with vaccination status. Result: In total, 492 inpatient PUS were exposed to positive patients and staff; only 13 became positive, giving a positivity rate of 2.6%. Eight (62%) had multiple exposures. The majority, 8/13 (72.7%), had high-risk exposure, and the remaining 5 had medium-risk exposure. Four (30.8%) were boosted, 7 (53.8%) were fully vaccinated, and 2 (15.4%) were partially vaccinated or unvaccinated. Eight patients were in categories 1-2, whilst 38% were in categories 3-5. Vaccination status did not correlate with COVID-19 category (P=0.641). One patient (7.7%) died due to COVID-19 complications and sepsis. Conclusion: Within the first quarter of 2022, our institution's positivity rate (2.6%) was significantly lower than the country's (14.4%). High-risk and multiple exposures to positive COVID-19 cases increased the risk of a PUS becoming COVID-19 positive regardless of vaccination status.
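
The headline rate and the independence test could be reproduced along the lines of the sketch below; the per-cell counts of the vaccination-status-by-category table are hypothetical (only the marginal totals are given in the abstract), and with such small counts Fisher's exact test may be preferable to chi-square.

```python
from scipy.stats import chi2_contingency

pus_total, pus_positive = 492, 13
print(f"positivity rate = {pus_positive / pus_total:.1%}")  # about 2.6%

# Vaccination status (rows) vs severity category (columns: 1-2, 3-5).
# Row and column totals match the abstract; the per-cell split is assumed.
table = [[3, 1],   # boosted (4)
         [4, 3],   # fully vaccinated (7)
         [1, 1]]   # partially vaccinated / unvaccinated (2)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```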

Keywords: COVID-19, boostered, high risk, Malaysia, quarantine, vaccination status

Procedia PDF Downloads 88
6387 Comfort Sensor Using Fuzzy Logic and Arduino

Authors: Samuel John, S. Sharanya

Abstract:

Automation has become an important part of our lives. It has been used to control home entertainment systems, change the ambience of rooms for different events, and so on. One of the main parameters to control in a smart home is atmospheric comfort, which mainly includes temperature and relative humidity. In homes, the desired temperature of different rooms varies from 20 °C to 25 °C and the desired relative humidity is around 50%; however, both vary widely. Hence, automated measurement of these parameters to ensure comfort assumes significance. To achieve this, a fuzzy logic controller on an Arduino was developed using MATLAB. Arduino is an open-source hardware platform based on an ATmega328 microcontroller, with 14 digital input/output pins and an inbuilt ADC. It runs on 5 V, with 3.3 V also available through an on-board voltage regulator. Some of the digital pins on the Arduino provide PWM (pulse width modulation) signals, which can be used in different applications. The Arduino platform provides an integrated development environment that supports programming in C and C++. In the present work, a soft sensor was introduced into the system that can indirectly estimate temperature and humidity and process several measurements to ensure comfort. The Sugeno method (in which output variables are functions or singletons/constants, making it more suitable for implementation on microcontrollers) was used for the soft sensor in MATLAB and then interfaced with the Arduino, which in turn is interfaced with the DHT11 temperature and humidity sensor. The DHT11 acts as the sensing element in this system; it uses a capacitive humidity sensor and a thermistor to measure the relative humidity and temperature of the surroundings and provides a digital signal on its data pin. The comfort sensor developed was able to measure temperature and relative humidity correctly. The comfort percentage was calculated and the temperature in the room was controlled accordingly. The system was placed in different rooms of the house to verify that it adjusts the comfort values depending on the temperature and relative humidity of the environment. Compared with existing comfort control sensors, this system was found to provide an accurate comfort percentage. Depending on the comfort percentage, the air conditioners and coolers in the room were controlled. The main highlight of the project is its cost efficiency.
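
To illustrate the zero-order Sugeno scheme described above, the sketch below computes a comfort percentage from temperature and relative humidity using a weighted average of constant rule outputs; the membership ranges and rule consequents are assumed for illustration and are not the authors' tuned MATLAB design.

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def comfort_percentage(temp_c, rh):
    """Zero-order Sugeno inference: comfort (%) from temperature (degC)
    and relative humidity (%). Ranges and consequents are assumptions."""
    t_ok = tri(temp_c, 18.0, 22.5, 27.0)   # "comfortable" temperature band
    h_ok = tri(rh, 30.0, 50.0, 70.0)       # "comfortable" humidity band
    t_off, h_off = 1.0 - t_ok, 1.0 - h_ok

    # Rule base: (firing strength, constant consequent in %)
    rules = [
        (min(t_ok,  h_ok),  100.0),  # both comfortable
        (min(t_ok,  h_off),  60.0),  # temperature fine, humidity off
        (min(t_off, h_ok),   50.0),  # humidity fine, temperature off
        (min(t_off, h_off),  20.0),  # both uncomfortable
    ]

    # Sugeno defuzzification: weighted average of the rule outputs.
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(comfort_percentage(23.0, 55.0))  # e.g. a reading taken from the DHT11
```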

Keywords: Arduino, DHT11, soft sensor, Sugeno

Procedia PDF Downloads 312
6386 Neonatal Mortality, Infant Mortality, and Under-five Mortality Rates in the Provinces of Zimbabwe: A Geostatistical and Spatial Analysis of Public Health Policy Provisions

Authors: Jevonte Abioye, Dylan Savary

Abstract:

The aim of this research is to present a disaggregated geostatistical analysis of subnational, provincial trends in child mortality variation in Zimbabwe from a child health policy perspective. Soon after gaining independence in 1980, the government embarked on efforts to promote equitable health care, namely through the provision of primary health care. Government intervention programmes brought hope and promise, but achieving equity in primary health care coverage was hindered by pre-existing disparities in maternal health care, which was disproportionately concentrated in urban settings to the detriment of rural communities. The article highlights policies and programs adopted by the government during the Millennium Development Goals period (1990-2015) as a response to the inequities that characterised the country's maternal health care. A longitudinal comparative method for spatial variation in child mortality rates across provinces is developed, based on geostatistical analysis. Cross-sectional and time-series data were extracted from the World Health Organisation (WHO) global health observatory data repository, demographic health survey reports, and previous academic and technical publications. Results suggest that although health care policy was uniform across provinces, not all provinces received the same antenatal and perinatal services. Accordingly, provincial rates of change in child mortality between 1994 and 2015 varied significantly. Evidence on the trends of child mortality rates and maternal health policies in Zimbabwe can be valuable for public child health policy planning and public service delivery design, both in Zimbabwe and across developing countries pursuing the sustainable development agenda.
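
A minimal sketch of the kind of longitudinal provincial comparison described above, fitting a simple linear trend to under-five mortality for each province over the MDG period, is shown below; the file name and column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Long-format table: one row per province and year, with under-five
# mortality per 1,000 live births; file and column names are hypothetical.
df = pd.read_csv("zimbabwe_u5mr_by_province.csv")

mdg = df[df["year"].between(1994, 2015)]

def linear_trend(group):
    """Slope of a simple least-squares fit (deaths per 1,000 per year)."""
    slope, _intercept = np.polyfit(group["year"], group["u5mr"], 1)
    return slope

trends = mdg.groupby("province").apply(linear_trend).sort_values()
print(trends)  # provinces ordered by how fast mortality declined
```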

Keywords: antenatal care, perinatal care, infant mortality rate, neonatal mortality rate, under-five mortality rate, millennium development goals, sustainable development agenda

Procedia PDF Downloads 203
6385 Sustainable Tourism and Heritage in Sığacık/Seferihisar

Authors: Sibel Ecemiş Kılıç, Muhammed Aydoğan

Abstract:

The rapid development of cultural tourism has drawn attention to the conservation of cultural values, especially in developing countries that would like to benefit from the economic contribution this type of tourism attracts. Tourism can have both positive and negative outcomes for historical settlements and their residents. The accommodation-oriented rehabilitation and revitalization project in the "Sığacık Old City Zone" is discussed in its spatial, economic, social, and organizational dimensions. The aim is to evaluate the relationship between tourism development and sustainable heritage conservation.

Keywords: Sığacık, urban conservation, sustainable tourism, Seferihisar

Procedia PDF Downloads 505
6384 Detection and Classification of Strabismus Using a Convolutional Neural Network and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using the facial landmarks, the eye region is segmented from the aligned face and fed into the VGG16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies its type (exotropia, esotropia, or vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of the pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angles that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. The model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively, with an FPR of 5.26%, 5.55%, and 0%, respectively. The addition of a feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
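
A minimal sketch of the stage-2 geometry, computing the distance and angles between an estimated pupil centre and the eye landmarks, is given below; the landmark coordinates and pupil position are illustrative, and the actual detectors (facial landmark model, Mask R-CNN) are assumed to have already produced these points.

```python
import numpy as np

def misalignment_features(pupil, eye_center):
    """Distance (pixels) and angles (degrees) between a pupil centre and
    the geometric centre of the eye landmarks, both given as (x, y)."""
    dx, dy = pupil[0] - eye_center[0], pupil[1] - eye_center[1]
    distance = float(np.hypot(dx, dy))
    angle_h = float(np.degrees(np.arctan2(dy, dx)))  # relative to the horizontal axis
    angle_v = 90.0 - angle_h                         # relative to the vertical axis
    return distance, angle_h, angle_v

# Illustrative landmark points for one eye (e.g. from a 68-point model)
eye_pts = np.array([[210, 305], [225, 298], [240, 297],
                    [255, 300], [242, 310], [226, 311]], dtype=float)
eye_center = eye_pts.mean(axis=0)
pupil = (238.0, 303.0)  # centre of the pupil mask (assumed output of Mask R-CNN)

d, ah, av = misalignment_features(pupil, eye_center)
print(f"offset = {d:.1f} px, angle vs horizontal = {ah:.1f} deg, vs vertical = {av:.1f} deg")
```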

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 93