Search results for: myoelectric signal processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5007

3627 Automatic Fluid-Structure Interaction Modeling and Analysis of Butterfly Valve Using Python Script

Authors: N. Guru Prasath, Sangjin Ma, Chang-Wan Kim

Abstract:

A butterfly valve is a quarter-turn valve used to control the flow of a fluid through a section of pipe. Butterfly valves are used in a wide range of applications such as water distribution, sewage, and oil and gas plants. In particular, large-diameter butterfly valves find immense application in hydro power plants for controlling fluid flow. Given the cost and size constraints of a laboratory setup, large-diameter valves are mostly studied by computational methods, which are the best and least expensive solution. CFD and FEM software are used to perform the large-scale fluid and structural analyses of the valve, respectively. To perform such analyses on a butterfly valve, the CAD model must be recreated and meshed in conventional software for each set of valve dimensions, which makes the process time-consuming. To overcome this issue, a Python script was created that carries out the complete pre-processing setup automatically in the Salome software. Specifying the model dimensions directly in the Python code makes the run time comparatively lower and the analysis easier to perform. Hence, in this paper, an attempt is made to study the fluid-structure interaction (FSI) of butterfly valves by varying the valve angles and dimensions using a Python script in the pre-processing software, and the results are presented.
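The automation described above amounts to a parameter sweep over valve dimensions and angles. A minimal sketch of that idea in plain Python is shown below; the real script would call Salome's geometry and meshing APIs inside `preprocess`, which is only a labeled placeholder here, and the dimensions, angles, and file names are invented for illustration:

```python
from itertools import product

def build_cases(diameters_mm, angles_deg):
    """One pre-processing case per (diameter, opening angle) pair."""
    return [{"diameter_mm": d, "angle_deg": a}
            for d, a in product(diameters_mm, angles_deg)]

def preprocess(case):
    # Placeholder for the real Salome calls (parametric geometry + meshing);
    # here it only reports the mesh file the case would produce.
    return f"valve_D{case['diameter_mm']}_A{case['angle_deg']}.med"

cases = build_cases(diameters_mm=[300, 500], angles_deg=[30, 60, 90])
mesh_files = [preprocess(c) for c in cases]
```

Each generated mesh would then be handed to the CFD and FEM solvers for the corresponding FSI case.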

Keywords: butterfly valve, flow coefficient, automatic CFD analysis, FSI analysis

Procedia PDF Downloads 239
3626 Music Reading Expertise Facilitates Implicit Statistical Learning of Sentence Structures in a Novel Language: Evidence from Eye Movement Behavior

Authors: Sara T. K. Li, Belinda H. J. Chung, Jeffery C. N. Yip, Janet H. Hsiao

Abstract:

Music notation and text reading both involve statistical learning of musical or linguistic structures. However, it remains unclear how music reading expertise influences text reading behavior. The present study examined this issue through an eye-tracking study. Chinese-English bilingual musicians and non-musicians read English sentences, Chinese sentences, musical phrases, and sentences in Tibetan, a language novel to the participants, with their eye movements recorded. Each set of stimuli consisted of two conditions of structural regularity: syntactically correct and syntactically incorrect musical phrases/sentences. Participants then completed a sentence comprehension task (for syntactically correct sentences) or a musical segment/word recognition task to test their comprehension/recognition abilities. The results showed that in reading musical phrases, musicians had a higher accuracy in the recognition task than non-musicians, and had shorter reading times, fewer fixations, and shorter fixation durations when reading syntactically correct (i.e., in a diatonic key) than incorrect (i.e., non-diatonic/atonal) musical phrases. This result reflects their expertise in music reading. Interestingly, in reading Tibetan sentences, which were novel to both participant groups, non-musicians showed no behavioral differences between reading syntactically correct and incorrect Tibetan sentences, whereas musicians showed shorter reading times and marginally fewer fixations when reading syntactically correct sentences than syntactically incorrect ones. However, none of the musicians reported discovering any structural regularities in the Tibetan stimuli when asked explicitly after the experiment, suggesting that they may have implicitly acquired the structural regularities of Tibetan sentences. This group difference was not observed when they read English or Chinese sentences.
This result suggests that music reading expertise facilitates reading texts in a novel language (i.e., Tibetan), but not in languages the readers are already familiar with (i.e., English and Chinese). This phenomenon may be due to the similarities between reading music notation and reading texts in a novel language: in both cases, the stimuli follow particular statistical structures but do not involve semantic or lexical processing. Thus, musicians may transfer statistical learning skills stemming from their music notation reading experience to implicitly discover the structure of sentences in a novel language. This speculation is consistent with a recent finding that music reading expertise modulates the processing of English nonwords (i.e., letter strings that do not follow morphological or orthographic rules) but not pseudo- or real words. These results suggest that the modulation of language processing by music reading expertise depends on the similarities in the cognitive processes involved. It also has important implications for the benefits of music education on language and cognitive development.

Keywords: eye movement behavior, eye-tracking, music reading expertise, sentence reading, structural regularity, visual processing

Procedia PDF Downloads 380
3625 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform

Authors: David Jurado, Carlos Ávila

Abstract:

Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define adequate treatment, increasing the survival probability of the patient. Computer-Aided Detection systems (CADs), along with modern data techniques such as Machine Learning (ML) and Neural Networks (NNs), have shown an overall improvement in digital mammography cancer diagnosis, reducing false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML- and NN-based algorithms rely on datasets that might introduce issues into the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications, which are highly correlated with breast cancer development in its early stages. Automatic segmentation of high-contrast objects is performed using edge extraction and the circle Hough transform. This provides the geometrical features needed to automatically design a mask that extracts statistical features from the regions of interest. The results of this study demonstrate the potential of this tool for further diagnostics and classification of mammographic images, owing to its low sensitivity to noisy images and low-contrast mammograms.
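The circle Hough transform that drives the segmentation can be sketched in a few lines of NumPy. This is a minimal fixed-radius version on a synthetic edge map; the paper's actual image sizes, radii, and accumulator settings are not given, so all parameters here are illustrative:

```python
import numpy as np

def hough_circle(edges, radius):
    """Vote for circle centres of a fixed radius over a binary edge map."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate centre votes
    return acc

# Synthetic edge map: one circle of radius 8 centred at (row 30, col 40)
edges = np.zeros((64, 64), dtype=np.uint8)
t = np.linspace(0.0, 2 * np.pi, 200)
edges[np.round(30 + 8 * np.sin(t)).astype(int),
      np.round(40 + 8 * np.cos(t)).astype(int)] = 1
acc = hough_circle(edges, 8)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
```

The accumulator peak recovers the centre of the circular object, which is the geometrical feature the mask design relies on.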

Keywords: breast cancer, segmentation, X-ray imaging, hough transform, image analysis

Procedia PDF Downloads 82
3624 Contribution of Remote Sensing and GIS to the Study of the Impact of the Salinity of Sebkhas on the Quality of Groundwater: Case of Sebkhet Halk El Menjel (Sousse)

Authors: Gannouni Sonia, Hammami Asma, Saidi Salwa, Rebai Noamen

Abstract:

Water resources in Tunisia have experienced quantitative and qualitative degradation, especially in wetlands and sebkhas. The objective of this work is to study the spatio-temporal evolution of salinity over 29 years (from 1987 to 2016). A study of the connection between surface water and groundwater is necessary to determine the degree to which the sebkha brines influence the water table. The evolution of surface salinity is determined by remote sensing based on Landsat TM and OLI/TIRS satellite images from the years 1987, 2007, 2010, and 2016. Processing these images allowed us to determine the NDVI (Normalized Difference Vegetation Index), the salinity index, and the surface temperature around the sebkha. In addition, through a geographic information system (GIS), we could establish a map of the salinity distribution in the subsurface of the water table of Chott Mariem and Hergla/Sidi Bou Ali/Kondar. The results of the image processing and of the index and surface temperature calculations show an increase in salinity downstream of the sebkha, along with the development of vegetation cover upstream and in the western part of the sebkha. This enrichment may be due both to contamination by seawater infiltrating from the barrier beach of Hergla and to the passage of groundwater to the sebkha.
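The NDVI mentioned above is computed per pixel from the red and near-infrared bands as NDVI = (NIR − Red) / (NIR + Red). A minimal sketch with made-up reflectance values (the actual study used calibrated Landsat bands):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)   # guard against zero sums

nir = np.array([[0.5, 0.4], [0.3, 0.6]])
red = np.array([[0.1, 0.2], [0.3, 0.2]])
v = ndvi(nir, red)   # near +1: dense vegetation; near 0: bare or saline soil
```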

Keywords: spatio-temporal monitoring, salinity, satellite images, NDVI, sebkha

Procedia PDF Downloads 130
3623 Alphabet Recognition Using Pixel Probability Distribution

Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay

Abstract:

Our project topic is “Alphabet Recognition using Pixel Probability Distribution”. The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten, or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR is sometimes used in signature recognition, which is employed in banks and other high-security buildings. One popular mobile application reads a visiting card and stores it directly to the contacts. OCRs are also known to be used in radar systems for reading speeding vehicles' license plates, among many other things. Our project was implemented using Visual Studio and OpenCV (Open Source Computer Vision). The algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: This module performs database generation. The database was generated using two methods: (a) Run-time generation, in which the database is generated at compilation time using the built-in fonts of the OpenCV library; human intervention is not necessary for generating this database. (b) Contour detection, in which a JPEG template containing different fonts of an alphabet is converted to a weighted matrix using specialized functions (contour detection and blob detection) of OpenCV. The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to store (119 kB precisely). (2) Preprocessing: The input image is pre-processed using image processing operations such as adaptive thresholding, binarization, and dilation, and is made ready for segmentation. Segmentation includes extraction of lines, words, and letters from the processed text image.
(3) Testing and prediction: The extracted letters are classified and predicted using the neural networks algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and weight matrix of the segmented image.
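The adaptive-thresholding step of the preprocessing module can be sketched with a local-mean threshold. The project itself used OpenCV's built-in functions; the NumPy stand-in below, with an illustrative block size and offset, shows the underlying idea:

```python
import numpy as np

def adaptive_threshold(img, block=5, c=2.0):
    """Binarize with a local mean threshold: a pixel is foreground (255)
    when it exceeds the mean of its block x block neighbourhood minus c."""
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + block, x:x + block].mean()
            out[y, x] = 255 if img[y, x] > local_mean - c else 0
    return out

# Dark strokes (0) on a light page (200): strokes binarize to 0
page = np.full((10, 10), 200, dtype=np.uint8)
page[4:6, 2:8] = 0
binary = adaptive_threshold(page)
```

A local threshold, unlike a single global one, tolerates uneven illumination across the scanned page, which is why it precedes segmentation here.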

Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix

Procedia PDF Downloads 388
3622 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Wiedmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises the Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 -anechoic-, 1, 2, and 3 s, approximately), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed a performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, thus compensating for the degradation of the music signal and improving the accuracy of the chord recognition system.
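The quarter-tone grid underlying the proposed LNQT filters can be sketched as triangular filters whose centre frequencies are spaced a quarter tone apart (f_k = f_min · 2^(k/24)). The local normalization step and all parameters below are not specified in the abstract, so this is only an illustrative construction of the filterbank shape:

```python
import numpy as np

def quarter_tone_filterbank(n_bins, sr, f_min, n_filters):
    """Triangular filters whose centre frequencies sit a quarter tone apart
    (f_k = f_min * 2**(k/24)); each filter spans its two neighbours, as Mel
    triangular filters do on the Mel scale."""
    centres = f_min * 2.0 ** (np.arange(n_filters + 2) / 24.0)
    freqs = np.linspace(0.0, sr / 2.0, n_bins)   # linear frequency grid
    fb = np.zeros((n_filters, n_bins))
    for i in range(n_filters):
        lo, mid, hi = centres[i], centres[i + 1], centres[i + 2]
        rising = (freqs - lo) / (mid - lo)
        falling = (hi - freqs) / (hi - mid)
        fb[i] = np.clip(np.minimum(rising, falling), 0.0, None)
    return fb

fb = quarter_tone_filterbank(8192, 22050, 110.0, 24)
```

Multiplying a magnitude spectrogram by this matrix yields a quarter-tone spectrogram, to which per-filter local normalization (as in LNCC) would then be applied.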

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 231
3621 Modernization of Garri-Frying Technologies with Respect to Women's Anthropometric Qualities in Nigeria

Authors: Adegbite Bashiru Adeniyi, Olaniyi Akeem Olawale, Ayobamidele Sinatu Juliet

Abstract:

The study was carried out in the six South-Western states of Nigeria to analyze the socio-economic characteristics of garri processors and their anthropometric qualities with respect to the modern technologies used in garri processing. Twenty respondents were randomly selected from each of the six workstations purposively chosen for the study because their daily processing activities had already attracted high patronage from customers. These were Oguntolu village (Ogun State), Igoba-Akure (Ondo State), Imo-Ilesa (Osun State), Odo Oba-Ileri (Oyo State), Irasa village (Ekiti State), and Epe (Lagos State). An interview schedule was conducted with the 120 respondents to elicit information, and the data were analyzed using descriptive statistical tools. The findings showed that respondents were in their most productive age range (36-45 years), except in Ogun State, where the majority (45%) were older than 45 years; few processors were younger than 26 years old. The findings further revealed that no fewer than 55% had a body weight greater than 50.0 kilograms and no fewer than 70% were taller than 1.5 meters. Likewise, the hand length and hand thickness of the majority were long and bulky, which is considered suitable for operating some modern and improved technologies in the garri-frying process. This information could be used by technology developers to produce modern equipment and tools with greater efficiency.

Keywords: agro-business, anthropometric, modernization, proficiency

Procedia PDF Downloads 510
3620 Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems

Authors: Andrey V. Timofeev

Abstract:

A practical and efficient approach is suggested for estimating the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. For super-dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag: reliable estimation of the coordinates of a monitored object requires time for the C-OTDR system to collect observations, and only once the required sample volume has been collected can a final decision be issued. This is contrary to the requirements of many real applications. For example, in rail traffic management systems, we need the localization of dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of the C-OTDR signals to obtain the most reliable solution in real time. Parameters of this type can be called 'signaling parameters' (SPs). Several SPs carry information about the instantaneous localization of dynamic objects in each C-OTDR channel. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are unstable, while, as a rule, an SP that is very stable is also insensitive. This report describes a method for co-processing SPs designed to obtain the most effective estimates of dynamic object localization within the framework of a C-OTDR monitoring system.
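The abstract does not specify its co-processing rule, but one standard way to combine statistically independent estimates, weighting each signaling parameter by its reliability, is inverse-variance fusion; the positions and variances below are invented for illustration:

```python
import numpy as np

def fuse_estimates(positions, variances):
    """Inverse-variance weighted combination of independent position
    estimates: stable signaling parameters get high weight, noisy ones low."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(positions, dtype=float)) / np.sum(w))

# Three SPs place the object near km-mark 12.4-12.9 with differing reliability
est = fuse_estimates([12.4, 12.9, 12.5], [0.04, 1.0, 0.09])
```

The fused estimate is pulled toward the low-variance SPs, so a single noisy but sensitive parameter improves rather than destabilizes the real-time solution.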

Keywords: C-OTDR-system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems

Procedia PDF Downloads 468
3619 Drivers of Farmers' Contract Compliance Behaviour: Evidence from a Case Study of Dangote Tomato Processing Plant in Northern Nigeria

Authors: Umar Shehu Umar

Abstract:

Contract farming is a viable strategy that agribusinesses rely on to strengthen vertical coordination. However, low contract compliance remains a significant setback to agribusinesses' contract performance. The present study aims to understand what drives smallholder farmers' contract compliance behaviour. Qualitative information was collected through focus group discussions to enrich the design of the survey questionnaire administered to a sample of 300 randomly selected farmers contracted by the Dangote Tomato Processing Plant (DTPP) in four regions of northern Nigeria. Novel transaction-level data on tomato sales covering one season were collected in addition to socio-economic information on the sampled farmers. Binary logistic model results revealed that open fresh-market tomato prices and payment delays negatively affect farmers' compliance behaviour, while quantity harvested, education level, and input provision correlated positively with compliance. The study suggests that contract compliance will increase if contracting firms devise a reliable and timely payment plan (e.g., digital payment) and continue input and service provisions (e.g., improved seeds, extension services) and incentives (e.g., loyalty rewards, bonuses) in the contract.
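A binary logistic model of compliance, of the kind referred to above, can be sketched as follows. The data are synthetic stand-ins with illustrative variable names (the study's actual variables and dataset are not reproduced here), and the fit uses plain gradient ascent on the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: compliance probability rises with input provision
# and falls with the open-market price premium (illustrative variables only).
n = 500
inputs_provided = rng.integers(0, 2, n)
price_premium = rng.normal(0.0, 1.0, n)
true_logit = -0.5 + 2.0 * inputs_provided - 1.5 * price_premium
comply = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Binary logit fit by gradient ascent on the log-likelihood
X = np.column_stack([np.ones(n), inputs_provided, price_premium])
beta = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (comply - p) / n
```

The fitted signs (positive for input provision, negative for the price premium) mirror the direction of the effects reported in the abstract.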

Keywords: contract farming, compliance, farmers and processors, smallholder

Procedia PDF Downloads 54
3618 Tracking and Classifying Client Interactions with Personal Coaches

Authors: Kartik Thakore, Anna-Roza Tamas, Adam Cole

Abstract:

The World Health Organization (WHO) reports that by 2030 more than 23.7 million deaths annually will be caused by cardiovascular diseases (CVDs), with a 2008 economic impact of $3.76 T. Metabolic syndrome is a disorder of multiple metabolic risk factors strongly implicated in the development of cardiovascular diseases. Guided lifestyle intervention driven by live coaching has been shown to have a positive impact on metabolic risk factors. Individuals' paths to improved (decreased) metabolic risk factors are driven by personal motivation and personalized messages delivered by coaches and augmented by technology. Using interactions captured between 400 individuals and 3 coaches over a program period of 500 days, a preliminary model was designed. A novel real-time event tracking system was created to track and classify clients based on their genetic profiles, baseline questionnaires, and usage of a mobile application with live coaching sessions. Classification of clients and coaches was done using a support vector machine application built on Apache Spark, the Stanford Natural Language Processing Library (SNLPL), and decision modeling.
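The support-vector-machine classification step can be sketched with a linear SVM trained by hinge-loss gradient descent. This is a NumPy stand-in for the Apache Spark build mentioned above, on toy two-feature client data (features, labels, and hyperparameters are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy client features (e.g., session frequency, message sentiment score);
# labels: +1 engaged client, -1 at-risk client.
n = 400
X = rng.normal(0.0, 1.0, (n, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

# Linear SVM: minimize lam/2 * ||w||^2 + mean hinge loss, by gradient descent
w, b = np.zeros(2), 0.0
lr, lam = 0.1, 0.01
for _ in range(500):
    margins = y * (X @ w + b)
    viol = margins < 1.0                      # points inside the margin
    gw = lam * w - (y[viol][:, None] * X[viol]).sum(axis=0) / n
    gb = -y[viol].sum() / n
    w -= lr * gw
    b -= lr * gb

accuracy = float((np.sign(X @ w + b) == y).mean())
```

In the production setting described, the same objective would be optimized distributively by Spark over the full interaction dataset.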

Keywords: guided lifestyle intervention, metabolic risk factors, personal coaching, support vector machines application, Apache Spark, natural language processing

Procedia PDF Downloads 431
3617 Evaluation of Different Cowpea Genotypes Using Grain Yield and Canning Quality Traits

Authors: Magdeline Pakeng Mohlala, R. L. Molatudi, M. A. Mofokeng

Abstract:

Cowpea (Vigna unguiculata (L.) Walp) is an important annual leguminous crop in semi-arid and tropical regions. Most cowpea grain production in South Africa is used for domestic consumption or as seed for planting, and little or none is used in industrial processing; thus, there is a need to expand the utilization of cowpea through industrial processing. Agronomic traits contribute to the understanding of the association between yield and its component traits, facilitating effective selection for yield improvement. The aim of this study was to evaluate cowpea genotypes using grain yield and canning quality traits. The field experiment was conducted at two locations in Limpopo Province, namely the Syferkuil Agricultural Experimental Farm and Ga-Molepo village, during the 2017/2018 growing season, and canning took place at ARC-Grain Crops, Potchefstroom. The experiment comprised 100 cowpea genotypes laid out in a Randomized Complete Block Design (RCBD). The grain yield, yield components, and canning quality traits were analysed using Genstat software. Sixty-two genotypes were suitable for canning; 38 were not, due to their seed coat texture and a water uptake of less than 80%, which resulted in too-soft (mushy) seeds. Grain yield for RV115, 99k-494-6, ITOOK1263, RV111, RV353, and 53 other genotypes showed a high positive association with number of branches, pods per plant, number of seeds per pod, unshelled weight, and shelled weight at Syferkuil compared with Ga-Molepo; these genotypes are therefore recommended for canning.

Keywords: agronomic traits, canning quality, genotypes, yield

Procedia PDF Downloads 150
3616 Restoration of Digital Design Using Row and Column Major Parsing Technique from the Old/Used Jacquard Punched Cards

Authors: R. Kumaravelu, S. Poornima, Sunil Kumar Kashyap

Abstract:

The optimized and digitalized restoration of information from old and used manual jacquard punched cards in the textile industry is referred to as a Jacquard Punch Card (JPC) reader. In this paper, we present a novel design and development of a photoelectronics-based system for reading old and used punched cards and storing their binary information for transformation into an effective image file format. In the textile industry, jacquard punched cards have hole diameters of 3 mm, 5 mm, and 5.5 mm pitch. Before the adoption of computing systems in the textile industry, these punched cards were prepared manually without a digital design source, yet they hold rich woven designs. The idea is therefore to retrieve the binary information from the jacquard punched cards and store it in a digital (non-graphics) format before processing. After processing, the digital (non-graphics) format is converted into an effective image file format through either a row-major or a column-major parsing technique. To accomplish these activities, an embedded-system-based device and integrated software were developed. As part of the test and trial activity, the device was tested and installed for industrial service at the Weavers Service Centre, Kanchipuram, Tamil Nadu, India.
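Row-major versus column-major parsing of the recovered hole matrix can be sketched as follows; the 4×6 card below is a toy stand-in for a real jacquard card:

```python
import numpy as np

# Toy 4x6 hole matrix from the photodiode array: 1 = hole, 0 = no hole
card = np.array([[1, 0, 0, 1, 0, 1],
                 [0, 1, 1, 0, 0, 0],
                 [1, 1, 0, 0, 1, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=np.uint8)

row_stream = card.flatten(order="C")   # row-major: row by row, left to right
col_stream = card.flatten(order="F")   # column-major: column by column, top down
```

Either stream carries the same bits in a different order; the choice determines how the bit stream maps back onto pixels when building the image file.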

Keywords: file system, SPI, UART, ARM controller, jacquard, punched card, photo LED, photo diode

Procedia PDF Downloads 165
3615 Identification of Lipo-Alkaloids and Fatty Acids in Aconitum carmichaelii Using Liquid Chromatography–Mass Spectrometry and Gas Chromatography–Mass Spectrometry

Authors: Ying Liang, Na Li

Abstract:

Lipo-alkaloids are a class of C19-norditerpenoid alkaloids found in Aconitum species, usually containing an aconitane skeleton and one or two fatty acid residues. Their structures are very similar to those of the diester-type alkaloids, which are considered the main bioactive components of Aconitum carmichaelii. They have anti-inflammatory, anti-nociceptive, and anti-proliferative activities. So far, more than 200 lipo-alkaloids have been reported from plants, semisynthesis, and biotransformations. In our research, by combining ultra-high performance liquid chromatography-quadrupole-time of flight mass spectrometry (UHPLC-Q-TOF-MS) with an in-house database, 148 lipo-alkaloids were identified from A. carmichaelii, including 93 potential new compounds and 38 compounds with oxygenated fatty acid moieties. To our knowledge, this is the first report of oxygenated fatty acids as side chains in naturally occurring lipo-alkaloids. Considering that the fatty acid residues in lipo-alkaloids should come from the free acids in the plant, the fatty acids and their relationship with the lipo-alkaloids were further investigated by GC-MS and LC-MS. Among the 17 fatty acids identified by GC-MS, 12 were detected as side chains of lipo-alkaloids, accounting for about 1/3 of the total lipo-alkaloids, while these fatty acid residues were less than 1/4 of the total fatty acid residues. In total, 37 fatty acids were determined by UHPLC-Q-TOF-MS, including 18 oxidized fatty acids identified for the first time from A. carmichaelii. These fatty acids were observed as side chains of lipo-alkaloids.
In addition, although over 140 lipo-alkaloids were identified, six lipo-alkaloids, 8-O-linoleoyl-14-benzoylmesaconine (1), 8-O-linoleoyl-14-benzoylaconine (2), 8-O-palmitoyl-14-benzoylmesaconine (3), 8-O-oleoyl-14-benzoylmesaconine (4), 8-O-pal-benzoylaconine (5), and 8-O-ole-Benzoylaconine (6), were found to be the main components, which accounted for over 90% content of total lipo-alkaloids. Therefore, using these six components as standards, a UHPLC-Triple Quadrupole-MS (UHPLC-QQQ-MS) approach was established to investigate the influence of processing on the contents of lipo-alkaloids. Although it was commonly supposed that the contents of lipo-alkaloids increased after processing, our research showed that no significant change was observed before and after processing. Using the same methods, the lipo-alkaloids in the lateral roots of A. carmichaelii and the roots of A. kusnezoffii were determined and quantified. The contents of lipo-alkaloids in A. kusnezoffii were close to that of the parent roots of A. carmichaelii, while the lateral roots had less lipo-alkaloids than the parent roots. This work was supported by Macao Science and Technology Development Fund (086/2013/A3 and 003/2016/A1).

Keywords: Aconitum carmichaelii, fatty acids, GC-MS, LC-MS, lipo-alkaloids

Procedia PDF Downloads 298
3614 Signals Monitored During Anaesthesia

Authors: Launcelot McGrath, Xiaoxiao Liu, Colin Flanagan

Abstract:

It is widely recognised that a comprehensive understanding of physiological data is a vital aid to the anaesthesiologist in monitoring and maintaining the well-being of a patient undergoing surgery. Biosignal analysis is one of the most important topics that researchers have developed over the last century to understand numerous human diseases. A tremendous number of biological signals are available during anaesthesia, and not all of them are important; choosing which to observe is a significant decision. It is important that the anaesthesiologist understand both the signals themselves and the limitations introduced by the processes of acquisition. In this article, we provide a comprehensive overview of the different types of biological signals as well as the mechanisms applied to acquire them.

Keywords: general biosignals, anaesthesia, biological, electroencephalogram

Procedia PDF Downloads 103
3613 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours

Authors: Fikret Yalcinkaya, Hamza Unsal

Abstract:

To understand how neurons work, experimental studies in neural science must be combined with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron modeling functions have been of great interest in computational and numerical neuroscience in recent years. Spiking neuron models can be classified by the various neuronal behaviours they exhibit, such as spiking and bursting, and these classifications are important for researchers working in theoretical neuroscience. In this paper, three different spiking neuron models, Izhikevich, Adaptive Exponential Integrate-and-Fire (AEIF), and Hindmarsh-Rose (HR), which are based on systems of first-order differential equations, are discussed and compared. First, the physical meaning, derivation, and differential equations of each model are provided and simulated in the Matlab environment. Then, by selecting appropriate parameters, the models were visually examined in the Matlab environment, with the aim of demonstrating which model can simulate well-known biological neuron behaviours such as tonic spiking, tonic bursting, mixed-mode firing, spike frequency adaptation, resonating, and integrating. As a result, the Izhikevich model was shown to produce regular spiking, continuous bursting (chattering), intrinsically bursting, thalamo-cortical, low-threshold spiking, and resonator behaviours. The Adaptive Exponential Integrate-and-Fire model was able to produce firing patterns such as regular firing, adaptive firing, initially bursting, regular bursting, delayed firing, delayed regular bursting, transient firing, and irregular firing. The Hindmarsh-Rose model showed three different dynamic neuron behaviours: spiking, bursting, and chaotic.
From these results, the Izhikevich cell model may be preferred for its ability to reflect the true behaviour of the nerve cell, to produce different types of spikes, and for its suitability for use in larger-scale brain models. The most important reason for choosing the Adaptive Exponential Integrate-and-Fire model is that it can create rich firing patterns with fewer parameters. The chaotic behaviour of the Hindmarsh-Rose neuron model, like that of some other chaotic systems, is thought to be usable in many scientific and engineering applications, such as physics, secure communication, and signal processing.
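As a concrete example of the first model discussed, the Izhikevich equations (v' = 0.04v² + 5v + 140 − u + I, u' = a(bv − u), with reset v ← c, u ← u + d when v ≥ 30 mV) can be simulated with a simple Euler scheme. The paper used Matlab; this Python sketch uses the standard regular-spiking parameters and an assumed constant input current:

```python
import numpy as np

def izhikevich(a, b, c, d, I, T=1000.0, dt=0.5):
    """Euler simulation of the Izhikevich model; returns spike times (ms)."""
    v, u = -65.0, b * -65.0          # resting state
    spikes = []
    for k in range(int(T / dt)):
        if v >= 30.0:                # spike: record time and reset
            spikes.append(k * dt)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

# Regular-spiking parameters; I = 10 is an assumed constant input current
spikes = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0)
```

Changing only the four parameters (a, b, c, d) reproduces the other firing classes listed above, which is the flexibility that makes this model attractive for large-scale simulations.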

Keywords: Izhikevich, adaptive exponential integrate fire, Hindmarsh Rose, biological neuron behaviours, spiking neuron models

Procedia PDF Downloads 179
3612 In vitro Characterization of Mice Bone Microstructural Changes by Low-Field and High-Field Nuclear Magnetic Resonance

Authors: Q. Ni, J. A. Serna, D. Holland, X. Wang

Abstract:

The objective of this study is to develop Nuclear Magnetic Resonance (NMR) techniques to enhance bone-related research on normal and disuse (biglycan knockout) mouse bone in vitro, using low-field and high-field NMR simultaneously. It is known that the total amplitude of the T₂ relaxation envelopes, measured by the Carr-Purcell-Meiboom-Gill (CPMG) NMR spin echo train, represents the liquid phase inside the pores. Therefore, the NMR CPMG magnetization amplitude can be converted to a volume of water after calibration against the NMR signal amplitude of a known volume of water. In this study, the distribution of mobile water and the porosity are determined using the low-field (20 MHz) CPMG relaxation technique, and the pore size distributions are determined by a computational inversion relaxation method. It is also known that the total proton intensity of magnetization from the NMR free induction decay (FID) signal is due to the water present inside the pores (mobile water), the water that has undergone hydration with the bone (bound water), and the protons in the collagen and mineral matter (solid-like protons). Therefore, the components of total mobile and bound water within bone can be determined by the low-field NMR free induction decay technique. Furthermore, the bound water in the solid phase (mineral and organic constituents), especially the dominant component of calcium hydroxyapatite (Ca₁₀(OH)₂(PO₄)₆), can be determined using high-field (400 MHz) magic angle spinning (MAS) NMR. With the MAS technique reducing the inhomogeneous and susceptibility broadening of the NMR spectral linewidth in the liquid-solid mix, we can further investigate the ¹H and ³¹P environments of bone materials to identify the locations of bound water, such as OH⁻ groups within the minerals and the bone architecture.
We hypothesize that combined low-field and high-field magic angle spinning NMR can provide a more complete interpretation of water distribution, particularly of bound water, and these data are important to assess bone quality and predict the mechanical behavior of bone.
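The inversion step described above — recovering a T₂ distribution (and hence a pore size distribution) from the CPMG echo-train amplitudes — can be sketched as a non-negative least-squares fit on a log-spaced T₂ grid. The echo times and the synthetic two-pool decay below are hypothetical illustrations, not the study's data:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical echo times (s) and a synthetic two-pool CPMG decay:
# 70% of the signal relaxes with T2 = 1 ms, 30% with T2 = 50 ms
t = np.linspace(0.1e-3, 200e-3, 400)
signal = 0.7 * np.exp(-t / 1e-3) + 0.3 * np.exp(-t / 50e-3)

# Inversion kernel: each column is a unit-amplitude decay at one grid T2
T2_grid = np.logspace(-4, 0, 100)
kernel = np.exp(-t[:, None] / T2_grid[None, :])

# Non-negative least squares recovers the T2 amplitude distribution;
# the total amplitude is proportional to the mobile-water volume
amplitudes, _ = nnls(kernel, signal)
total_water = amplitudes.sum()
```

In practice the inversion is regularized and the total amplitude is calibrated against the CPMG signal of a known water volume, as the abstract describes.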

Keywords: bone, mice bone, NMR, water in bone

Procedia PDF Downloads 175
3611 Application to Monitor the Citizens for Corona and Get Medical Aids or Assistance from Hospitals

Authors: Vathsala Kaluarachchi, Oshani Wimalarathna, Charith Vandebona, Gayani Chandrarathna, Lakmal Rupasinghe, Windhya Rankothge

Abstract:

It is the fundamental function of a monitoring system to allow users to collect and process data. A worldwide threat, the corona outbreak has wreaked havoc in Sri Lanka, and the situation has gotten out of hand. Since the start of the epidemic, the Sri Lankan government has been unable to establish a systematic system for monitoring corona patients and providing emergency care in the event of an outbreak. Because of the high number of patients reported in the nation, most patients have been kept at home, but they do not yet have access to a functioning medical system. This has resulted in an increase in the number of patients left untreated because of a lack of medical care. According to our survey, the absence of competent medical monitoring is currently the biggest cause of mortality for many people. As a result, a smartphone app for analyzing a patient's state and determining whether they should be hospitalized will be developed. Using the data supplied, we aim to send an alert letter or SMS to the hospital once the system recognizes a critical case. Since we know what those patients need and when they need it, we will set up a desktop program at the hospital to monitor their progress. Deep learning, image processing and application development, natural language processing, and blockchain management are some of the components of the research solution. The purpose of this research paper is to introduce a mechanism to connect hospitals and patients even when they are physically apart. Furthermore, data security and user-friendliness are enhanced through blockchain and NLP.

Keywords: blockchain, deep learning, NLP, monitoring system

Procedia PDF Downloads 132
3610 The Role of Hypothalamus Mediators in Energy Imbalance

Authors: Maftunakhon Latipova, Feruza Khaydarova

Abstract:

Obesity is considered a chronic metabolic disease that occurs at any age. Regulation of body weight is carried out through the complex interaction of interrelated systems that control the body's energy balance. Energy imbalance, in which the supply of energy from food exceeds the energy needs of the body, is the cause of obesity and overweight. Obesity is closely related to impaired appetite regulation, and the hypothalamus is a key site for the neural regulation of food consumption. The hypothalamic nuclei are interconnected and interdependent in receiving, integrating, and sending hunger signals to regulate appetite. Purpose of the study: to identify markers of eating behavior. Materials and methods: Screening was carried out to identify eating disorders in 200 men and women aged 18 to 35 years with overweight and obesity and to measure the markers orexin A and neuropeptide Y. Questionnaires on eating disorders and hidden depression (the Zung scale) were administered to the 200 participants. Anthropometry was measured by waist and hip circumference (OT, OB), BMI, weight, and height. Based on the collected data, participants were divided into three groups: people with obesity, people with overweight, and a control group of healthy people. Results: Of the 200 analysed persons, 86% had eating disorders, and in 60% of these the eating disorder was associated with childhood. According to the Zung test results, about 37% were in a normal condition, 20% had mild depressive disorder, 25% moderate depressive disorder, and 18% suffered from severe depressive disorder without knowing it. The first group, people with obesity, had eating disorders with moderate or severe depressive disorder, while the second, overweight group had mild depressive disorder. According to the laboratory data, the first group had the lowest serum concentrations of orexin A and neuropeptide Y.
Conclusions: Overweight and obesity are the first signals of many diseases, and prevention and detection of these disorders will prevent various diseases, including type 2 diabetes. The etiology of obesity is associated with eating disorders and with signal transmission in the orexinergic system of the hypothalamus.

Keywords: obesity, endocrinology, hypothalamus, overweight

Procedia PDF Downloads 74
3609 Electrochemical Biosensor Based on Chitosan-Gold Nanoparticles, Carbon Nanotubes for Detection of Ovarian Cancer Biomarker

Authors: Parvin Samadi Pakchin, Reza Saber, Hossein Ghanbari, Yadollah Omidi

Abstract:

Ovarian cancer is one of the leading causes of mortality among gynecological malignancies, and it remains one of the most prevalent cancers in females worldwide. Tumor markers are biochemical molecules in blood or tissue that can indicate the occurrence of cancer in the human body, so the sensitive and specific detection of cancer markers is typically required for diagnosing and evaluating cancers. Extensive research efforts are currently underway to achieve a simple, inexpensive, and accurate device for the detection of cancer biomarkers. Compared with conventional immunoassay techniques, electrochemical immunosensors are of great interest because they are specific, simple, inexpensive, and easy to handle and miniaturize. Moreover, in the past decade nanotechnology has played a crucial role in the development of biosensors. In this study, a signal-off electrochemical immunosensor for the detection of the CA125 antigen has been developed using chitosan-gold nanoparticle (CS-AuNP) and multi-wall carbon nanotube (MWCNT) composites. Toluidine blue (TB), immobilized on the electrode surface, is used as the redox probe. CS-AuNP is synthesized by a simple one-step method in which HAuCl₄ is reduced by the NH₂ groups of chitosan. The CS-AuNP-MWCNT modified electrode shows excellent electrochemical performance compared with a bare Au electrode: MWCNTs and AuNPs increase electrochemical conductivity and accelerate electron transfer between the solution and the electrode surface, while the excess amine groups on chitosan lead to effective loading of the biological material (CA125 antibody) and TB on the electrode surface. The electrochemical, immobilization, and sensing properties of the CS-AuNP-MWCNT-TB modified electrodes are characterized by cyclic voltammetry, electrochemical impedance spectroscopy, differential pulse voltammetry, and square wave voltammetry with Fe(CN)₆³⁻/⁴⁻ as an electrochemical redox indicator.

Keywords: signal-off electrochemical biosensor, CA125, ovarian cancer, chitosan-gold nanoparticles

Procedia PDF Downloads 286
3608 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles

Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo

Abstract:

Non-Cooperative Target Identification has become a key research domain in the defense industry, since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. Accordingly, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, the test set, with the profiles included in a pre-loaded database, the training set. The classification is improved by using Singular Value Decomposition, since it allows each aircraft to be modeled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, thereby reducing unwanted information such as noise. Singular Value Decomposition permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. In this way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics based on Singular Value Decomposition, F1 and F2, are employed in the identification process. In the case of F2 the angle is weighted, since the top vectors set the importance of the contribution to the formation of a target signal, whereas F1 simply tracks the unweighted angle.
In order to build a wide database of radar signatures and evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft on defined trajectories taken from an actual measurement. Given the nature of the datasets, the main drawback of using simulated rather than measured profiles is that the former imply an ideal identification scenario: measured profiles suffer from noise, clutter, and other unwanted information, while simulated profiles do not. In this case the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, the addition of noise was considered before the creation of the test set. The identification results obtained with the unweighted and weighted metrics are analysed to determine which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments with profiles coming from electromagnetic simulations were conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance improved when weighting was applied. Future experiments with larger sets are expected, with the aim of finally using actual profiles as test sets in a real hostile situation.
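The subspace-angle classifier described above can be sketched with NumPy: each target's training profiles form a matrix, the top singular vectors define its signal subspace, and a test profile is assigned to the target whose subspace it makes the smallest angle with. The two synthetic "targets" below are illustrative stand-ins, not the paper's simulated aircraft:

```python
import numpy as np

rng = np.random.default_rng(0)
n_range_bins, n_profiles = 64, 30

# Hypothetical targets: each has a characteristic range-profile shape,
# observed with random amplitude scaling and additive noise
base = {"targetA": np.sin(np.linspace(0, 4 * np.pi, n_range_bins)),
        "targetB": np.cos(np.linspace(0, 6 * np.pi, n_range_bins))}
train = {name: shape[:, None] * rng.uniform(0.5, 1.5, (1, n_profiles))
         + 0.1 * rng.normal(size=(n_range_bins, n_profiles))
         for name, shape in base.items()}

def signal_subspace(X, k=5):
    # Top-k left singular vectors span the signal subspace;
    # the remaining directions (the noise subspace) are discarded
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

subspaces = {name: signal_subspace(X) for name, X in train.items()}

def classify(profile):
    # Unweighted (F1-style) rule: pick the target whose subspace
    # makes the smallest angle with the test profile
    def angle(U):
        cos = np.linalg.norm(U.T @ profile) / np.linalg.norm(profile)
        return np.arccos(np.clip(cos, 0.0, 1.0))
    return min(subspaces, key=lambda name: angle(subspaces[name]))
```

Under these assumptions, a noisy profile drawn from one of the base shapes should be assigned to the matching target; the F2 metric would additionally weight each coefficient by its singular value.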

Keywords: HRRP, NCTI, simulated/synthetic database, SVD

Procedia PDF Downloads 353
3607 Learner's Difficulties Acquiring English: The Case of Native Speakers of Rio de La Plata Spanish Towards Justifying the Need for Corpora

Authors: Maria Zinnia Bardas Hoffmann

Abstract:

Contrastive Analysis (CA) is the systematic comparison between two languages. It stems from the notion that errors are caused by interference of the L1 system in the acquisition process of an L2. CA represents a useful tool for understanding the nature of learning and acquisition. This particular method also promises a path to understanding the nature of underlying cognitive processes, even when other factors such as intrinsic motivation and teaching strategies were found to best explain students' problems in acquisition. CA study is justified not only by the need to gain a deeper understanding of the nature of SLA, but also as an invaluable source of clues, at a cognitive level, about the general processes involved in rule formation and abstract thought. It is relevant for cross-disciplinary studies and the fields of computational thought, natural language processing, applied linguistics, cognitive linguistics, and math theory. That being said, this paper also addresses the method's own set of constraints and limitations. Finally, this paper (a) aims at identifying some of the difficulties students may find in their learning process due to the nature of their specific variety of L1, Rio de la Plata Spanish (RPS), and (b) represents an attempt to discuss the necessity of specific models to approach CA.

Keywords: second language acquisition, applied linguistics, contrastive analysis, applied contrastive analysis, English language department, meta-linguistic rules, cross-linguistic studies, computational thought, natural language processing

Procedia PDF Downloads 150
3606 Control of Sensors in Metering System of Fluid

Authors: A. Harrouz, O. Harrouz, A. Benatiallah

Abstract:

This paper reviews the essential definitions, roles, and characteristics of communication in metering systems. We discuss measurement, data acquisition, and metrological control of the signal from a sensor in a dynamic metering system. We then present the control of instruments in a fluid metering system, with more detailed discussion of the reference standards.

Keywords: data acquisition, dynamic metering system, reference standards, metrological control

Procedia PDF Downloads 491
3605 Distant Speech Recognition Using Laser Doppler Vibrometer

Authors: Yunbin Deng

Abstract:

Most existing applications of automatic speech recognition rely on cooperative subjects at a short distance from a microphone. Standoff speech recognition using microphone arrays can extend the subject-to-sensor distance somewhat, but it is still limited to only a few feet. As such, most deployed applications of standoff speech recognition are limited to indoor use at short range. Moreover, these applications require an air passage between the subject and the sensor to achieve a reasonable signal-to-noise ratio. This study reports long-range (50 feet) automatic speech recognition experiments using a Laser Doppler Vibrometer (LDV) sensor, and shows that the LDV sensor modality can extend the speech acquisition standoff distance far beyond microphone arrays, to hundreds of feet. In addition, LDV enables 'listening' through windows for uncooperative subjects. This enables new capabilities in automatic audio and speech intelligence, surveillance, and reconnaissance (ISR) for law enforcement, homeland security, and counter-terrorism applications. The Polytec LDV model OFV-505 is used in this study. To investigate the impact of different vibrating materials, five parallel LDV speech corpora, each consisting of 630 speakers, are collected from the vibrations of a glass window, a metal plate, a plastic box, a wood slat, and a concrete wall — common materials the application could encounter in daily life. These data were compared with their microphone counterparts to show the impact of the various materials on the spectrum of the LDV speech signal. State-of-the-art deep neural network modeling approaches are used to conduct continuous speaker-independent speech recognition on these LDV speech datasets. Preliminary phoneme recognition results using time-delay neural networks, bi-directional long short-term memory, and model fusion show great promise for using LDV in long-range speech recognition.
To the author's best knowledge, this is the first time an LDV has been reported for a long-distance speech recognition application.
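The spectral comparison between the LDV and microphone channels mentioned above amounts to comparing windowed FFT magnitudes of the two recordings. The 440 Hz test tone, attenuation factor, and noise level below are hypothetical stand-ins for real speech data:

```python
import numpy as np

fs = 16000                      # sample rate (Hz)
t = np.arange(fs) / fs          # one second of samples
rng = np.random.default_rng(0)

# Hypothetical channels: the LDV pickup is attenuated and noisier
mic = np.sin(2 * np.pi * 440 * t)
ldv = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.05 * rng.normal(size=fs)

def magnitude_spectrum(x):
    # Hann-windowed FFT magnitude; with a 1 s window, bin index == Hz
    return np.abs(np.fft.rfft(x * np.hanning(len(x))))

# Both spectra should peak at the tone frequency, with the LDV channel
# showing a raised noise floor relative to the microphone
peak_mic = int(np.argmax(magnitude_spectrum(mic)))
peak_ldv = int(np.argmax(magnitude_spectrum(ldv)))
```

Real comparisons would use short-time spectra of speech frames rather than a single tone, but the mechanics are the same.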

Keywords: covert speech acquisition, distant speech recognition, DSR, laser Doppler vibrometer, LDV, speech intelligence surveillance and reconnaissance, ISR

Procedia PDF Downloads 177
3604 Analyzing Competition in Public Construction Projects

Authors: Khaled Hesham Hyari, Amjad Almani

Abstract:

Construction projects in the public sector are commonly awarded through competitive bidding. In the last decade, the construction project environment in the Middle East went through many changes, caused by different factors including the economic crisis, delays in monthly payments, international competition, and a reduced number of projects. These factors had a great impact on contractors' bidding behavior and pricing strategies. This paper examines the characteristics of competition in public construction projects through an analysis of contractors' bidding results in public construction projects over a period of six years (2006-2011) in Jordan. The analyzed projects include all categories, such as infrastructure, buildings, transportation, and engineering services (design and supervision contracts). Data for the projects were obtained from the General Tender's Directorate in Jordan and cover 462 projects. The analysis includes studying the bid spread in all projects, as an indication of the level of competition in the analyzed bids, together with the factors that affect it, such as the number of bidders, the value of the project, the project category, and the year. It also studies the signal-to-noise ratio in all projects, as an indication of the accuracy of cost estimating performed by competing bidders and of the bidders' evaluation of project risks, including the relationship between the signal-to-noise ratio and parameters such as project category, number of bidders, and changes over the years. Moreover, the analysis includes determining bidders' aggressiveness in bidding, as an indication of the level of competition in such projects.
This was performed by determining the pack price, which can be considered the true value of the project, and comparing it with the lowest bid submitted for each project to determine the level of aggressiveness in the submitted bids. The analysis performed in this project should prove useful to owners in understanding the bidding behavior of contractors and in pointing out areas that need improvement when preparing bidding documents. It should also be useful to contractors in understanding the competitive bidding environment and should help them improve their bidding strategies to maximize their success rate in obtaining contracts.
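As a minimal illustration of the two statistics studied above, one common definition of each can be computed for a single hypothetical tender (the bid values are invented, not the Jordanian data; the paper may use different formulas):

```python
import numpy as np

# Hypothetical bids (in thousands) submitted for one public project
bids = np.array([412.0, 436.5, 449.0, 471.2, 498.3])

# Bid spread: gap between the two lowest bids, relative to the low bid;
# a small spread suggests tight competition
low, second = np.sort(bids)[:2]
spread = (second - low) / low

# Signal-to-noise ratio: mean bid over the standard deviation of bids;
# a high ratio suggests bidders agree on the project's cost and risks
snr = bids.mean() / bids.std(ddof=1)
```

Across a database like the 462 projects analyzed, these per-project statistics would then be grouped by category, number of bidders, and year.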

Keywords: construction projects, competitive bidding, public construction, competition

Procedia PDF Downloads 332
3603 Theorizing Optimal Use of Numbers and Anecdotes: The Science of Storytelling in Newsrooms

Authors: Hai L. Tran

Abstract:

When covering events and issues, the news media often employ both personal accounts and facts and figures. However, the process of using numbers and narratives in the newsroom mostly operates through trial and error. There is a demonstrated need for the news industry to better understand the specific effects of storytelling and data-driven reporting on the audience, as well as the explanatory factors driving such effects. In the academic world, anecdotal evidence and statistical evidence have been studied in a mutually exclusive manner. Existing research tends to treat the pertinent effects as though the use of one form precludes the other and as if a tradeoff is required. Meanwhile, narratives and statistical facts are often combined in various communication contexts, especially in news presentations. There is value in reconceptualizing and theorizing about both the relative and the collective impacts of numbers and narratives, as well as the mechanisms underlying such effects. The current undertaking seeks to link theory to practice by providing a complete picture of how and why people are influenced by information conveyed through quantitative and qualitative accounts. Specifically, cognitive-experiential theory is invoked to argue that humans employ two distinct systems to process information. The rational system requires the processing of logical evidence through effortful analytical cognitions, which are affect-free. Meanwhile, the experiential system is intuitive, rapid, automatic, and holistic, thereby demanding minimal cognitive resources and relating to the experience of affect. In certain situations one system may dominate the other, but the rational and experiential modes of processing operate in parallel and at the same time. As such, anecdotes and quantified facts affect audience response differently, and a combination of data and narratives is more effective than either form of evidence alone.
In addition, the present study identifies several media variables and human factors driving the effects of statistics and anecdotes. An integrative model is proposed to explain how message characteristics (modality, vividness, salience, congruency, position) and individual differences (involvement, numeracy skills, cognitive resources, cultural orientation) affect selective exposure, which in turn activates the pertinent modes of processing and thereby induces corresponding responses. The present study represents a step toward bridging theoretical frameworks from various disciplines to better understand the specific effects of anecdotal and/or statistical evidence and the conditions under which each enhances or undermines information processing. In addition to its theoretical contributions, this research helps inform news professionals about the benefits and pitfalls of incorporating quantitative and qualitative accounts in reporting. It proposes a typology of possible scenarios and appropriate strategies for journalists to use when presenting news with anecdotes and numbers.

Keywords: data, narrative, number, anecdote, storytelling, news

Procedia PDF Downloads 78
3602 A Palmprint Identification System Based on Multi-Layer Perceptron

Authors: David P. Tantua, Abdulkader Helwan

Abstract:

Biometrics has recently been used in human identification systems that rely on biological traits such as fingerprints and iris scans. Biometrics-based identification systems show great efficiency and accuracy in such human identification applications. However, these systems have so far been based on image processing techniques alone, which may decrease the efficiency of such applications. Thus, this paper aims to develop a human palmprint identification system using a multi-layer perceptron neural network, which has the capability to learn using backpropagation learning algorithms. The developed system uses images obtained from a public database available on the internet (CASIA). The processing pipeline is as follows: image filtering using a median filter, image adjustment, image skeletonizing, edge detection using the Canny operator to extract features, and clearing unwanted components from the image. The second phase feeds the processed images into a neural network classifier, which adaptively learns and creates a class for each different image. 100 different images are used for training the system. Since this is an identification system, it is tested with the same images: the same 100 images are used for testing, and any image outside the training set should be unrecognized. The experimental results show that the developed system achieves 100% accuracy and can be implemented in real-life applications.
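The preprocessing stages listed above (median filtering, intensity adjustment, edge detection, removal of unwanted components) can be sketched with SciPy. The synthetic 64×64 "palm image", edge threshold, and minimum component size below are illustrative assumptions, and a Sobel gradient magnitude stands in for the Canny operator:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Hypothetical 64x64 grayscale palm image: one bright principal line + noise
img = rng.normal(0.1, 0.05, size=(64, 64))
img[30:34, :] += 0.8

# 1) Denoise with a median filter
denoised = ndimage.median_filter(img, size=3)

# 2) Intensity adjustment: rescale to [0, 1]
adjusted = (denoised - denoised.min()) / (denoised.max() - denoised.min())

# 3) Edge map from the Sobel gradient magnitude (stand-in for Canny)
gx = ndimage.sobel(adjusted, axis=0)
gy = ndimage.sobel(adjusted, axis=1)
edges = np.hypot(gx, gy) > 0.5

# 4) Clear unwanted components: keep only connected regions >= 20 pixels
labels, n_components = ndimage.label(edges)
sizes = np.asarray(ndimage.sum(edges, labels, index=range(1, n_components + 1)))
features = np.isin(labels, 1 + np.flatnonzero(sizes >= 20))
```

The resulting binary `features` map (here, the edges of the synthetic principal line) is what would be flattened and fed to the perceptron classifier.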

Keywords: biometrics, biological traits, multi-layer perceptron neural network, image skeletonizing, edge detection using canny operator

Procedia PDF Downloads 371
3601 Low Field Microwave Absorption and Magnetic Anisotropy in TM Co-Doped ZnO System

Authors: J. Das, T. S. Mahule, V. V. Srinivasu

Abstract:

Electron spin resonance (ESR) study at 9.45 GHz and a field modulation frequency of 100 Hz was performed on bulk polycrystalline samples of Mn:TM (Fe/Ni) and Mn:RE (Gd/Sm) co-doped ZnO with composition Zn₁₋ₓ(Mn:TM/RE)ₓO, synthesised by the solid-state reaction route and sintered at 500 °C. The room-temperature microwave absorption data, collected by sweeping the DC magnetic field from -500 to 9500 G for the Mn:Fe and Mn:Ni co-doped ZnO samples, exhibit a rarely reported non-resonant low-field absorption (NRLFA) in addition to a strong absorption at around 3350 G, usually associated with ferromagnetic resonance (FMR) satisfying Larmor's relation due to absorption in the fully saturated state. The observed low-field absorption is distinct from ferromagnetic resonance even at low temperature and shows hysteresis. Interestingly, it is in opposite phase with respect to the main ESR signal of the samples, which indicates that the low-field absorption has a minimum at zero magnetic field whereas the ESR signal has a maximum. Both the major resonance peak and the low-field absorption peak are asymmetric, indicating magnetic anisotropy in the sample, normally associated with intrinsic ferromagnetism. The anisotropy parameter for the Mn:Ni co-doped ZnO sample is noticeably higher. The g values also support the presence of oxygen vacancies and clusters in the samples. These samples have shown room-temperature ferromagnetism in SQUID measurements. However, in the rare earth (RE) co-doped samples (Zn₁₋ₓ(Mn:Gd/Sm)ₓO), which show paramagnetic behavior at room temperature, the low-field microwave signals are not observed.
Since microwave currents due to itinerant electrons can lead to ohmic losses inside the sample, we speculate that the more delocalized 3d electrons contributed by the TM dopants facilitate such microwave currents, leading to the loss and hence to the absorption at low field, which is also supported by the increase in this absorption with increased microwave power. Besides, since Fe and Ni have intrinsic spin polarization with a polarizability of around 45%, doping with Fe and Ni is expected to enhance spin-polarization-related effects in ZnO. We emphasize that in this case the Fe and Ni dopants contribute a polarized current that interacts with the magnetization (spin) vector and is scattered, giving rise to the absorption loss.
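As a consistency check on the numbers above, the ESR resonance condition hν = gμ_B·B can be evaluated for the 3350 G absorption at 9.45 GHz; the result is a g-factor close to the free-electron value of about 2, as expected for the main resonance line (the constants are standard CODATA values):

```python
# ESR resonance condition: h*nu = g * mu_B * B  =>  g = h*nu / (mu_B * B)
h = 6.62607015e-34       # Planck constant (J s)
mu_B = 9.2740100783e-24  # Bohr magneton (J/T)
nu = 9.45e9              # microwave frequency (Hz)
B = 0.335                # resonance field: 3350 G = 0.335 T

g = h * nu / (mu_B * B)
print(round(g, 3))       # -> 2.015
```

Deviations of the measured g values from this free-electron-like value are what the abstract links to oxygen vacancies and clusters.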

Keywords: co-doping, electron spin resonance, hysteresis, non-resonant microwave absorption

Procedia PDF Downloads 313
3600 Thiosulfate Leaching of the Auriferous Ore from Castromil Deposit: A Case Study

Authors: Rui Sousa, Aurora Futuro, António Fiúza

Abstract:

The exploitation of gold ore deposits is highly dependent on efficient mineral processing methods, although current perspectives based on life-cycle assessment introduce difficulties that were unforeseen until very recently. Cyanidation is the most widely applied gold processing method, but the potential environmental problems arising from the use of cyanide as a leaching reagent have led to a demand for alternative methods. Ammoniacal thiosulfate leaching is one of the most important alternatives to cyanidation. In this article, experimental studies carried out to assess the feasibility of thiosulfate as a leaching agent for the ore from the unexploited Portuguese gold mine of Castromil are described. It became clear that the process depends on the concentrations of ammonia, thiosulfate, and copper. Based on this, leaching tests were performed to determine the best reagent prescription and the effects of different combinations of these concentrations. Higher thiosulfate concentrations decrease gold dissolution; lower ammonia concentrations require higher thiosulfate concentrations, and higher ammonia concentrations require lower thiosulfate concentrations. The addition of copper increases the gold dissolution ratio. Subsequently, alternative operating conditions were tested, such as variations in temperature and in the solid/liquid ratio, as well as the application of a pre-treatment before the leaching stage. Finally, thiosulfate leaching was compared to cyanidation. Thiosulfate leaching proved to be an important alternative, although a pre-treatment is required to increase the yield of gold dissolution.

Keywords: gold, leaching, pre-treatment, thiosulfate

Procedia PDF Downloads 309
3599 Synthesis and Thermoluminescence Investigations of Doped LiF Nanophosphor

Authors: Pooja Seth, Shruti Aggarwal

Abstract:

Thermoluminescence dosimetry (TLD) is one of the most effective methods for the assessment of dose during diagnostic radiology and radiotherapy applications, where monitoring of the absorbed dose is essential to protect patients from undue exposure and to evaluate the risks that may arise from exposure. LiF-based thermoluminescence (TL) dosimeters are promising materials for the estimation, calibration, and monitoring of dose because of their favourable dosimetric characteristics, such as tissue equivalence, high sensitivity, energy independence, and dose linearity. As the TL efficiency of a phosphor strongly depends on the preparation route, it is interesting to investigate the TL properties of a LiF-based phosphor in nanocrystalline form. LiF doped with magnesium (Mg), copper (Cu), sodium (Na), and silicon (Si) in nanocrystalline form has been prepared using the chemical co-precipitation method, forming cube-shaped LiF nanostructures. TL dosimetry properties have been investigated by exposing the phosphor to gamma rays. The TL glow curve of the nanocrystalline form consists of a single peak at 419 K, as compared to the multiple peaks observed in the microcrystalline form. A consistent glow curve structure, with maximum TL intensity at an annealing temperature of 573 K, and a linear dose response from 0.1 to 1000 Gy are observed, which is advantageous for radiotherapy applications. Good reusability, low fading (5% over a month), and negligible residual signal (0.0019%) are also observed. Photoluminescence measurements show a wide emission band at 360-550 nm in undoped LiF, whereas an intense peak at 488 nm is observed in the doped LiF nanophosphor. The phosphor also exhibits intense optically stimulated luminescence. The nanocrystalline LiF:Mg,Cu,Na,Si phosphor prepared by the co-precipitation method thus shows a simple glow curve structure, linear dose response, reproducibility, negligible residual signal, good thermal stability, and low fading.
The LiF:Mg,Cu,Na,Si phosphor in nanocrystalline form has tremendous potential in diagnostic radiology, radiotherapy, and high-energy radiation applications.

Keywords: thermoluminescence, nanophosphor, optically stimulated luminescence, co-precipitation method

Procedia PDF Downloads 403
3598 Enhancing Project Management Performance in Prefabricated Building Construction under Uncertainty: A Comprehensive Approach

Authors: Niyongabo Elyse

Abstract:

Prefabricated building construction is a pioneering approach that combines design, production, and assembly to attain energy efficiency, environmental sustainability, and economic feasibility. Despite continuous development of the industry in China, the low technical maturity of standardized design, factory production, and construction assembly introduces uncertainties affecting prefabricated component production and on-site assembly processes. This research focuses on enhancing project management performance under uncertainty to help enterprises navigate these challenges and optimize project resources. The study introduces a perspective on how uncertain factors influence the implementation of prefabricated building construction projects. It proposes a theoretical model considering project process management ability, adaptability to uncertain environments, and the collaboration ability of project participants. The impact of uncertain factors is demonstrated through case studies and quantitative analysis, revealing constraints on implementation time, cost, quality, and safety. To address uncertainties in prefabricated component production scheduling, a fuzzy model is presented that expresses processing times as interval values. The model uses a cooperative co-evolutionary algorithm (CCEA) to optimize scheduling, demonstrated through a real case study showing reduced project duration and minimized effects of processing time disturbances. Additionally, the research addresses on-site assembly construction scheduling, considering the relationship between task processing times and assigned resources. A multi-objective model with fuzzy activity durations is proposed, employing a hybrid cooperative co-evolutionary algorithm (HCCEA) to optimize project scheduling. Results from real case studies indicate improved project performance in terms of duration, cost, and resilience to processing time delays and resource changes.
The study also introduces a multi-stage dynamic process control model that uses IoT technology for real-time monitoring during component production and construction assembly. This approach dynamically adjusts schedules when constraints arise, leading to enhanced project management performance, as demonstrated in a real prefabricated housing project. Key contributions include a fuzzy prefabricated component production scheduling model, a multi-objective, multi-mode, resource-constrained construction project scheduling model with fuzzy activity durations, a multi-stage dynamic process control model, and a cooperative co-evolutionary algorithm. The integrated mathematical model addresses the complexity of prefabricated building construction project management, providing a theoretical foundation for practical decision-making in the field.
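A minimal sketch of the production-scheduling idea above: processing times are intervals [lo, hi], and a simple single-flip local search (a stand-in for the paper's co-evolutionary algorithm, with invented job data) assigns components to two production lines so as to minimize the worst-case (upper-bound) makespan:

```python
import random

random.seed(0)

# Hypothetical interval processing times [lo, hi] for 8 prefab components
jobs = [(3, 5), (2, 4), (6, 9), (4, 6), (5, 7), (1, 2), (7, 10), (3, 4)]

def worst_makespan(assignment, n_lines=2):
    # Upper-bound makespan: each line's worst case is the sum of its hi times
    loads = [0] * n_lines
    for (_, hi), line in zip(jobs, assignment):
        loads[line] += hi
    return max(loads)

# Single-flip local search: repeatedly try moving one job to the other line
best = [random.randrange(2) for _ in jobs]
for _ in range(2000):
    candidate = best[:]
    candidate[random.randrange(len(jobs))] ^= 1
    if worst_makespan(candidate) < worst_makespan(best):
        best = candidate
```

The paper's CCEA and HCCEA tackle far richer versions of this problem (precedence constraints, resources, multiple objectives), but the core move — searching assignments while scoring them through interval-valued durations — is the same.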

Keywords: prefabricated construction, project management performance, uncertainty, fuzzy scheduling

Procedia PDF Downloads 49