Search results for: manual areal profiling technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7374

5274 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India

Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit

Abstract:

Landslide is a geomorphic process that plays an essential role in the evolution of hill-slopes and in long-term landscape evolution. However, its abrupt nature and the associated catastrophic forces can have undesirable socio-economic impacts, such as substantial economic losses, fatalities, and ecosystem, geomorphologic and infrastructure disturbances. The estimated fatality rate is approximately 1 person/100 sq. km, and the average economic loss is more than 550 crores/year in the Himalayan belt due to landslides. This study presents a comparative performance of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final landslide susceptibility maps (LSMs), produced with better accuracy, could be used for land-use planning to prevent future losses. Dharamshala, a part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM’s Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution Linear Imaging Self-Scanning (LISS IV) data. The thematic maps of parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in a GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use land cover, and geology. LSMs were prepared using the information value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method in which information values are calculated as the ratio of the landslide pixels per factor class (Si/Ni) to the total landslide pixels per parameter (S/N). Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a ‘mean and neighbour’ strategy for the construction of the fuzzifier (input) and defuzzifier (output) membership function (MF) structure, and the FR method is used for formulating if-then rules. Two types of membership structures were utilized for the membership functions: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). LSIs for BG and TT were obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were spatially and statistically validated. The validation results showed that in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate one.
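
As a rough illustration of the Info Val step described above, the sketch below computes information values for hypothetical reclassified parameter rasters and sums them into an LSI. The rasters, class counts and random landslide inventory are assumptions for illustration, not the paper's data; many implementations also take the natural log of the ratio, which is noted in a comment.

```python
import numpy as np

def information_value(factor_class, landslide, n_classes):
    """Information value per factor class: ratio of landslide density in the
    class (Si/Ni) to landslide density over the whole area (S/N).
    (The natural log of this ratio is often used instead of the raw ratio.)"""
    S = landslide.sum()                  # total landslide pixels
    N = landslide.size                   # total pixels
    iv = np.zeros(n_classes)
    for c in range(n_classes):
        in_class = factor_class == c
        Ni = in_class.sum()              # pixels in this class
        Si = landslide[in_class].sum()   # landslide pixels in this class
        if Ni > 0:
            iv[c] = (Si / Ni) / (S / N)
    return iv

# Hypothetical reclassified rasters (values = class indices) and a landslide mask
rng = np.random.default_rng(0)
slope_cls = rng.integers(0, 5, size=(100, 100))
lulc_cls = rng.integers(0, 4, size=(100, 100))
landslide = rng.random((100, 100)) < 0.02        # 1 = mapped landslide pixel

# Landslide susceptibility index = sum of the per-parameter information values
lsi = information_value(slope_cls, landslide, 5)[slope_cls] \
    + information_value(lulc_cls, landslide, 4)[lulc_cls]
print(lsi.shape, float(lsi.min()), float(lsi.max()))
```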

Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique

Procedia PDF Downloads 116
5273 Validation of the Female Sexual Function Index and the Female Sexual Distress Scale-Desire/Arousal/Orgasm in Chinese Women

Authors: Lan Luo, Jingjing Huang, Huafang Li

Abstract:

Introduction: Distressing low sexual desire is common in China, while the lack of reliable and valid instruments to evaluate symptoms of hypoactive sexual desire disorder (HSDD) impedes related research and clinical services. Aim: This study aimed to validate the reliability and validity of the Female Sexual Function Index (FSFI) and the Female Sexual Distress Scale-Desire/Arousal/Orgasm (FSDS-DAO) in Chinese female HSDD patients. Methods: We administered the FSFI and FSDS-DAO in a convenience sample of Chinese adult women. Participants were diagnosed by a psychiatrist according to the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition, Text Revision (DSM-IV-TR). Results: We had a valid analysis sample of 279 Chinese women, of which 107 were HSDD patients. The Cronbach's α of the FSFI and FSDS-DAO were 0.947 and 0.956, respectively, and their intraclass correlation coefficients were 0.86 and 0.89, respectively (the test-retest interval was 13-15 days). The correlation coefficient between the Revised Adult Attachment Scale (RAAS) and the FSFI (or FSDS-DAO) did not exceed 0.4. The area under the receiver operating characteristic (ROC) curve was 0.83 when FSFI-d (the desire domain of the FSFI) and FSDS-DAO were combined to diagnose HSDD, which was significantly different from that of using these scales individually. FSFI-d of less than 2.7 (range 1.2-6) and FSDS-DAO of no less than 15 (range 0-60) (sensitivity 65%, specificity 83%), or FSFI-d of no more than 3.0 (range 1.2-6) and FSDS-DAO of no less than 14 (range 0-60) (sensitivity 74%, specificity 77%), can be used as cutoff scores in clinical research or outpatient screening. Clinical implications: The FSFI (including FSFI-d) and FSDS-DAO are suitable for the screening and evaluation of Chinese female HSDD patients of childbearing age. Strengths and limitations: Strengths include a thorough validation of the FSFI and FSDS-DAO and the exploration of cutoff scores combining FSFI-d and FSDS-DAO. Limitations include a small convenience sample and the requirement of being sexually active for HSDD patients. Conclusion: The FSFI (including FSFI-d) and FSDS-DAO have good internal consistency, test-retest reliability, construct validity, and criterion validity in Chinese female HSDD patients of childbearing age.
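
For readers unfamiliar with the internal-consistency statistic reported above, the sketch below computes Cronbach's α from an item-level score matrix. The 19-item structure and the synthetic scores are illustrative assumptions standing in for the questionnaire data, not the study's dataset.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Synthetic 19-item questionnaire scores for 279 respondents (illustration only)
rng = np.random.default_rng(1)
latent = rng.normal(size=(279, 1))                          # shared latent trait
scores = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(279, 19))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
```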

Keywords: sexual desire, sexual distress, hypoactive sexual desire disorder, scale

Procedia PDF Downloads 66
5272 Field Emission Scanning Microscope Image Analysis for Porosity Characterization of Autoclaved Aerated Concrete

Authors: Venuka Kuruwita Arachchige Don, Mohamed Shaheen, Chris Goodier

Abstract:

Autoclaved aerated concrete (AAC) is known for its light weight, easy handling, high thermal insulation, and extremely porous structure. Investigation of pore behaviour in AAC is crucial for characterizing the material, standardizing design and production techniques, enhancing mechanical, durability, and thermal performance, studying the effectiveness of protective measures, and analyzing the effects of weather conditions. The finer details of pores are difficult to observe with acceptable accuracy. High-resolution field emission scanning electron microscope (FESEM) image analysis is a promising technique for investigating the pore behaviour and density of AAC, which is adopted in this study. A mercury intrusion porosimeter and a gas pycnometer were employed to characterize the porosity distribution and density parameters. The analysis considered three different densities of AAC blocks and three layers in the altitude direction within each block. A set of procedures was presented to extract and analyze the details of pore shape, pore size, pore connectivity, and pore percentages from FESEM images of AAC. Average pore behaviour outcomes per unit area were presented. Comparison of the porosity distribution and density parameters revealed significant variations. FESEM imaging offered unparalleled insights into porosity behaviour, surpassing the capabilities of the other techniques. The multi-stage analysis provides the porosity percentage occupied by various pore categories, total porosity, the variation of pore distribution compared with AAC densities and layers, the number of two-dimensional and three-dimensional pores, the variation of apparent and matrix densities with respect to pore behaviour, the variation of pore behaviour with respect to aluminium content, and the relationship among shape, diameter, connectivity, and percentage in each pore classification.
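
As a simplified illustration of how pore percentages and sizes can be pulled from a segmented micrograph, the sketch below thresholds a grayscale image and labels connected pore regions. The threshold value, the assumption that pores appear darker than the solid matrix, and the synthetic image are illustrative only and do not reproduce the authors' workflow.

```python
import numpy as np
from scipy import ndimage

def pore_statistics(image, threshold):
    """Segment pores (assumed darker than the matrix), then report porosity
    percentage, pore count and circle-equivalent pore diameters in pixels."""
    pores = image < threshold                       # binary pore mask
    porosity = 100.0 * pores.sum() / pores.size     # area % occupied by pores
    labels, n_pores = ndimage.label(pores)          # connected pore regions
    areas = ndimage.sum(pores, labels, index=np.arange(1, n_pores + 1))
    eq_diameters = 2.0 * np.sqrt(areas / np.pi)     # circle-equivalent diameters
    return porosity, n_pores, eq_diameters

# Synthetic stand-in for a grayscale FESEM image (0-255)
rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(512, 512))
porosity, n_pores, diam = pore_statistics(img, threshold=60)
print(f"porosity = {porosity:.1f}%, pores = {n_pores}, mean diameter = {diam.mean():.1f} px")
```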

Keywords: autoclaved aerated concrete, density, imaging technique, microstructure, porosity behavior

Procedia PDF Downloads 42
5271 H.263 Based Video Transceiver for Wireless Camera System

Authors: Won-Ho Kim

Abstract:

In this paper, the design of an H.263-based wireless video transceiver is presented for a wireless camera system. It uses a standard Wi-Fi transceiver, and the coverage area is up to 100 m. Furthermore, the standard H.263 video encoding technique is used for video compression, since the wireless video transmitter is unable to transmit high-capacity raw data in real time; the implemented system is capable of streaming NTSC 720x480 video at less than 1 Mbps.
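
To give a sense of why compression is essential here, the back-of-the-envelope calculation below compares the raw bit rate of NTSC 720x480 video with the 1 Mbps streaming target. The 30 fps frame rate and 8-bit 4:2:2 sampling are assumptions, since the abstract does not state them.

```python
# Rough bit-rate estimate for NTSC 720x480 video (assuming 8-bit 4:2:2
# sampling at ~30 fps; these parameters are not given in the paper).
width, height, fps = 720, 480, 30
bits_per_pixel = 16                      # 4:2:2 -> 8 bits luma + 8 bits chroma on average
raw_bps = width * height * bits_per_pixel * fps
target_bps = 1_000_000                   # < 1 Mbps streaming target from the abstract
print(f"raw   : {raw_bps / 1e6:.0f} Mbps")
print(f"target: {target_bps / 1e6:.0f} Mbps")
print(f"required compression ratio = {raw_bps / target_bps:.0f}:1")
```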

Keywords: wireless video transceiver, video surveillance camera, H.263 video encoding, digital signal processing

Procedia PDF Downloads 354
5270 Ill-Posed Inverse Problems in Molecular Imaging

Authors: Ranadhir Roy

Abstract:

Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large size in three dimensions and by the diffusion equation which models the physical phenomena within the media. The inverse problems are posed as a nonlinear optimization in which the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions to obtain stable solutions are established in Tikhonov’s regularization method, where the a priori information is introduced via a stabilizing functional, which may be designed to incorporate some relevant information about an inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution, or in the case of optical imaging, the true image. Yet, in clinically based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the constrained optimization technique, which is based on simple bounds on the optical parameter properties of the tissue, can easily be implemented in the PMBF method. Imposing the constraints on the optical properties of the tissue explicitly restricts the solution sets and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the given objective function. The accuracy and the rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers. To obtain the initial guess of the multipliers, we use a least-squares unconstrained minimization problem. Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and noncontact experimentally measured data.
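
For orientation, the two formulations contrasted in the abstract can be written schematically as below. The notation (forward model F, data y, bounds a and b, multipliers λ and μ, barrier parameter k, operator L) is generic rather than the paper's own, and is a sketch of the standard forms only.

```latex
% Generic notation (not the paper's own): F is the forward (diffusion) model,
% y the measured data, x the unknown optical parameters.
\begin{align}
  % Tikhonov regularization: data misfit plus a stabilizing functional
  \min_{x}\; \Phi_{\mathrm{Tik}}(x) &= \|F(x)-y\|^{2} + \alpha\,\|L(x-x_{0})\|^{2},\\
  % Penalty/modified-barrier alternative: simple bounds a_i <= x_i <= b_i
  % handled through modified-barrier terms with multiplier estimates \lambda, \mu
  \min_{x}\; \Phi_{\mathrm{PMBF}}(x) &= \|F(x)-y\|^{2}
      - \frac{1}{k}\sum_{i}\Big[\lambda_{i}\ln\!\big(1+k\,(x_{i}-a_{i})\big)
      + \mu_{i}\ln\!\big(1+k\,(b_{i}-x_{i})\big)\Big].
\end{align}
```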

Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method

Procedia PDF Downloads 261
5269 Analytic Hierarchy Process

Authors: Hadia Rafi

Abstract:

Making any decision in a work task or project involves many factors that need to be considered. The Analytic Hierarchy Process (AHP) is based on the judgments of experts to derive the required results. This technique measures intangibles, and then, with the help of judgment and software analysis, pairwise comparisons are made which show how much a certain element or unit leads another. AHP also addresses how an inconsistent judgment should be made consistent and how the judgment should be improved when possible. The priority scales are obtained by multiplying local priorities by the priority of their parent node and then adding them.
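
A minimal sketch of this procedure is given below: local priorities are derived from a pairwise comparison matrix via its principal eigenvector, checked for consistency, and then scaled by a parent-node priority. The judgment values and the parent weight are hypothetical numbers chosen only to show the mechanics.

```python
import numpy as np

# Reciprocal pairwise judgements for three sub-criteria (hypothetical values)
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                               # local priority vector (sums to 1)

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
print("local priorities:", np.round(w, 3), " CR =", round(ci / ri, 3))

# Global priorities: multiply each local priority by its parent's priority and add.
parent_priority = 0.6                      # hypothetical weight of the parent criterion
print("global contribution:", np.round(parent_priority * w, 3))
```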

Keywords: AHP, priority scales, parent node, software analysis

Procedia PDF Downloads 394
5268 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years, it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take into consideration their own preferences, which might lead to many other problems like shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that would help in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. There are various systems of career guidance that work based on the same logic, such as the classification of applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to the collected data to help predict the right careers. Besides helping users with their career choice, these systems provide numerous opportunities which are very useful while making this hard decision. They help the candidate to recognize where he or she specifically lacks sufficient skills so that the candidate can improve those skills. They are also capable of offering an e-learning platform, taking into account the user's lack of knowledge. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
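
As a toy example of one of the classification approaches named above (KNN in this case), the sketch below maps synthetic skill and personality scores to career clusters. The feature names, the labelling rule, and all data are made up for illustration and are not part of the described system.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Hypothetical features: coding, math, writing, teamwork, creativity, empathy
X = rng.random((500, 6))
y = X.argmax(axis=1)                     # toy rule: strongest trait decides the career cluster

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("hold-out accuracy:", round(model.score(X_te, y_te), 2))

new_user = [[0.9, 0.7, 0.3, 0.5, 0.4, 0.2]]   # hypothetical questionnaire scores
print("suggested career cluster:", int(model.predict(new_user)[0]))
```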

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 69
5267 Using the Bootstrap for Problems Statistics

Authors: Brahim Boukabcha, Amar Rebbouh

Abstract:

The bootstrap method, based on the idea of exploiting all the information provided by the initial sample, allows us to study the properties of estimators. In this article, we present a theoretical study of the different bootstrap methods, using the resampling technique in statistical inference to calculate the standard error of the mean estimator and to determine a confidence interval for an estimated parameter. We apply and test these methods on regression models and the Pareto model, giving the best approximations.
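
The sketch below shows the basic resampling loop described here: drawing with replacement from the initial sample to estimate the standard error of a statistic and a percentile confidence interval. The heavy-tailed (Pareto-like) sample is synthetic and only meant to echo the kind of model the authors mention.

```python
import numpy as np

def bootstrap(sample, stat=np.mean, n_boot=10_000, alpha=0.05, seed=0):
    """Resample with replacement to estimate the standard error of a statistic
    and a percentile confidence interval."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    reps = np.array([stat(rng.choice(sample, size=sample.size, replace=True))
                     for _ in range(n_boot)])
    se = reps.std(ddof=1)
    lo, hi = np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return se, (lo, hi)

# Illustrative heavy-tailed sample (Pareto-like), not the article's data
data = np.random.default_rng(4).pareto(3.0, size=200) + 1.0
se, ci = bootstrap(data, stat=np.mean)
print(f"bootstrap SE of the mean = {se:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```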

Keywords: bootstrap, standard error, bias, jackknife, mean, median, variance, confidence interval, regression models

Procedia PDF Downloads 371
5266 Development of Ketorolac Tromethamine Encapsulated Stealth Liposomes: Pharmacokinetics and Bio Distribution

Authors: Yasmin Begum Mohammed

Abstract:

Ketorolac tromethamine (KTM) is a non-steroidal anti-inflammatory drug with potent analgesic and anti-inflammatory activity due to the prostaglandin-related inhibitory effect of the drug. It is a non-selective cyclo-oxygenase inhibitor. The drug is currently used orally and intramuscularly in multiple divided doses, clinically for the management of arthritis, cancer pain, post-surgical pain, and the treatment of migraine pain. KTM has a short biological half-life of 4 to 6 hours, which necessitates frequent dosing to retain the action. The frequent occurrence of gastrointestinal bleeding, perforation, peptic ulceration, and renal failure has led to the development of other drug delivery strategies for the appropriate delivery of KTM. The ideal solution would be to target the drug only to the cells or tissues affected by the disease. Drug targeting could be achieved effectively by liposomes, which are biocompatible and biodegradable. The aim of the study was to develop a parenteral liposome formulation of KTM with improved efficacy and reduced side effects by targeting the inflammation due to arthritis. PEG-anchored (stealth) and non-PEG-anchored liposomes were prepared by the thin film hydration technique followed by an extrusion cycle and characterized in vitro and in vivo. Stealth liposomes (SLs) exhibited an increased encapsulation efficiency (94%) and 52% drug retention during release studies over 24 h, with good stability for a period of 1 month at -20°C and 4°C. SLs showed a maximum of about 55% edema inhibition with a significant analgesic effect. SLs produced marked differences over the non-SL formulations, with an increase in the area under the plasma concentration-time curve, t₁/₂, and mean residence time, and reduced clearance. 0.3% of the drug was detected in the arthritis-induced paw, with significantly reduced drug localization in the liver, spleen, and kidney for SLs when compared to the other conventional liposomes. Thus, SLs help to increase the therapeutic efficacy of KTM by increasing the targeting potential at the inflammatory region.

Keywords: biodistribution, ketorolac tromethamine, stealth liposomes, thin film hydration technique

Procedia PDF Downloads 280
5265 A Multi Criteria Approach for Prioritization of Low Volume Rural Roads for Maintenance and Improvement

Authors: L. V. S. S. Phaneendra Bolem, S. Shankar

Abstract:

Low Volume Rural Roads (LVRRs) constitute an integral component of the road system in all countries. They encompass all aspects of the social and economic development of rural communities. It is known that, on a worldwide basis, the length of low-traffic roads far exceeds that of high-volume roads. Across India, 90% of the roads are LVRRs, and they often form the most important link in terms of providing access to educational, medical, recreational, and commercial activities in local and regional areas. In the recent past, the Government of India (GoI), with the initiation of the ambitious programme 'Pradhan Mantri Gram Sadak Yojana' (PMGSY), gave greater importance to LVRRs, realizing their role in the economic development of rural communities. The vast expansion of the road network has brought connectivity to the rural areas of the country. Further, it is noticed that increasing axle loads and a lack of timely maintenance have accelerated the process of deterioration of LVRRs. In addition, the limited budget for maintenance of these roads necessitates a systematic and scientific approach to utilizing the available resources. This would enable better prioritization and ranking for maintenance and help keep the roads 'all-weather'. Taking this into account, the present study has adopted a multi-criteria approach. The multi-criteria approach includes parameters such as social, economic, environmental, and pavement condition as the main criteria, and some sub-criteria, to find the most suitable parameters and their weights. For this purpose, an expert opinion survey was carried out using the Delphi Technique (DT), considering Likert-scale, pairwise-comparison, and ranking methods, and the entire dataset was analyzed. Finally, this study developed the maintenance criteria considering the socio-economic, environmental, and pavement condition parameters for effective maintenance of low volume roads based on engineering judgment.
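
To make the prioritization idea concrete, the sketch below applies hypothetical criterion weights (as might come out of a Delphi or pairwise-comparison survey) to normalized scores for a few candidate road sections and ranks them. All weights and scores are illustrative assumptions, not the study's values.

```python
import numpy as np

criteria = ["social", "economic", "environmental", "pavement condition"]
weights = np.array([0.20, 0.30, 0.15, 0.35])          # assumed expert-derived weights

# Rows = candidate road sections, columns = raw criterion scores (higher = more need)
scores = np.array([[60.0, 45.0, 30.0, 80.0],
                   [40.0, 70.0, 55.0, 65.0],
                   [75.0, 50.0, 20.0, 40.0]])

norm = scores / scores.max(axis=0)                     # simple max normalization
priority = norm @ weights                              # weighted-sum priority index
ranking = np.argsort(priority)[::-1]
for rank, road in enumerate(ranking, start=1):
    print(f"rank {rank}: road section {road}, priority index {priority[road]:.3f}")
```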

Keywords: Delphi technique, experts opinion survey, low volume rural road maintenance, multi criteria analysis

Procedia PDF Downloads 154
5264 Acrylamide-Induced Thoracic Spinal Cord Axonopathy

Authors: Afshin Zahedi, Keivan Jamshidi

Abstract:

This study was conducted to determine the neurotoxic effects of different doses of ACR on the thoracic axons of the rat spinal cord. To evaluate this hypothesis in the thoracic axons, the amino-cupric silver staining technique of de Olmos was used to define the histopathologic characteristic (argyrophilia) of axonal damage following ACR exposure. For this purpose, 60 adult male rats (Wistar, approximately 250 g) were selected. Rats were housed in polycarbonate boxes, two per box. Randomly assigned groups of rats (10 rats per exposure group, with 5 exposure groups labelled A, B, C, D and E) were exposed to 0.5, 5, 50, 100 and 500 mg/kg per day for 11 days by intraperitoneal (IP) injection, respectively. The remaining 10 rats were housed in group F as the control group. Control rats received daily injections of 0.9% saline (3 ml/kg). As indices of developing neurotoxicity, weight gain, gait scores and landing hindlimb foot splay (LHF) were determined. Weight gains were measured daily prior to injection. Gait scoring involved observation of spontaneous open-field locomotion and included evaluations of ataxia, hopping, rearing and hind foot placement; hindlimb foot splay was determined 3-4 times per week. Gait scores were assigned from 1 to 4. After 11 days, two rats were randomly selected for silver staining and dissected, and proper samples were collected from the thoracic portion of the spinal cord. Results showed no neurological signs in groups A, B and F, whereas severe neurotoxicity was observed in groups C and D. Rats in group E died within 1-2 hours due to severe toxemia. In histopathological studies based on the de Olmos technique, no argyrophilic neurons or processes were observed in stained sections obtained from the thoracic portion of the spinal cord of rats belonging to groups A, B and F, while moderate to severe argyrophilic changes were observed in different stained sections obtained from the thoracic portion of the spinal cord of rats belonging to groups C and D.

Keywords: acrylamide, rat, axonopathy, argyrophily, de Olmos

Procedia PDF Downloads 332
5263 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years, it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take into consideration their own preferences, which might lead to many other problems like shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that would help in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. There are various systems of career guidance that work based on the same logic, such as the classification of applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to the collected data to help predict the right careers. Besides helping users with their career choice, these systems provide numerous opportunities which are very useful while making this hard decision. They help the candidate to recognize where he or she specifically lacks sufficient skills so that the candidate can improve those skills. They are also capable of offering an e-learning platform, taking into account the user's lack of knowledge. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 57
5262 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals

Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor

Abstract:

This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification.
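
As a small illustration of the ensemble idea discussed above, the sketch below combines three base classifiers with soft voting on synthetic feature vectors standing in for extracted brain-signal features. The choice of base learners, their hyperparameters, and the data are assumptions for illustration, not the article's specific pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for feature vectors extracted from brain signals
X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           n_classes=2, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("knn", KNeighborsClassifier(n_neighbors=7)),
                ("tree", DecisionTreeClassifier(max_depth=5))],
    voting="soft")                       # average the predicted class probabilities

print("5-fold accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```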

Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers

Procedia PDF Downloads 64
5261 Enhanced Efficiency for Propagation of Phalaenopsis cornu-cervi (Breda) Blume & Rchb. F. Using Trimmed Leaf Technique

Authors: Suphat Rittirat, Sutha Klaocheed, Kanchit Thammasiri

Abstract:

The effects of thidiazuron (TDZ) and benzyladenine (BA) on protocorm-like body (PLB) induction from leaf explants were investigated. It was found that TDZ was superior to BA. The highest percentage and number of PLBs per leaf explant, 30 and 5.3 respectively, were obtained on ½ MS medium supplemented with 9 µM TDZ. The regenerated plantlets were potted and acclimatized in the greenhouse. These plants grew well and developed into normal plants after 3 months of transplantation. A 100% survival rate of plantlets was achieved when they were planted in pots containing sphagnum moss.

Keywords: orchid, PLBs, sphagnum moss, thidiazuron

Procedia PDF Downloads 312
5260 Utilizing Spatial Uncertainty of On-The-Go Measurements to Design Adaptive Sampling of Soil Electrical Conductivity in a Rice Field

Authors: Ismaila Olabisi Ogundiji, Hakeem Mayowa Olujide, Qasim Usamot

Abstract:

The main reasons for site-specific management of agricultural inputs are to increase the profitability of crop production, to protect the environment, and to improve product quality. Information about the variability of different soil attributes within a field is highly essential for the decision-making process. The lack of fast and accurate acquisition of soil characteristics remains one of the biggest limitations of precision agriculture, as such acquisition is expensive and time-consuming. Adaptive sampling has been proven to be an accurate and affordable sampling technique for planning within a field for site-specific management of agricultural inputs. This study employed the spatial uncertainty of soil apparent electrical conductivity (ECa) estimates to identify adaptive re-survey areas in the field. The original dataset was grouped into validation and calibration groups, where the calibration group was sub-grouped into three sets of different measurement pass intervals. A conditional simulation was performed on the field ECa to evaluate the ECa spatial uncertainty estimates using a geostatistical technique. The grouping of high-uncertainty areas for each set was done using image segmentation in MATLAB; then, areas of high and low values were separated. Finally, an adaptive re-survey was carried out on those areas of high uncertainty. Adding adaptive re-surveying significantly reduced the time required for resampling the whole field and resulted in ECa estimates with minimal error. For the most spacious transect, the root mean square error (RMSE) yielded by an initial crude sampling survey was minimized after an adaptive re-survey, and was close to the ECa value yielded by an all-field re-survey. The estimated sampling time for the adaptive re-survey was found to be 45% less than that of the all-field re-survey. The results indicate that designing adaptive sampling through spatial uncertainty models significantly mitigates sampling cost while the accuracy of the observations is still maintained.
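
A minimal sketch of the uncertainty-driven re-survey idea follows: the per-cell spread of conditional-simulation realizations is used to flag where ECa should be re-measured. The realizations here are synthetic stand-ins with an artificially varying noise level, and the 85th-percentile threshold is an arbitrary illustrative choice, not the study's segmentation rule.

```python
import numpy as np

rng = np.random.default_rng(5)
n_real, ny, nx = 100, 50, 80

# Synthetic conditional-simulation ensemble: spatially varying uncertainty
noise_scale = 0.5 + 2.0 * rng.random((ny, nx))            # mS/m, made up
realizations = 40.0 + rng.normal(size=(n_real, ny, nx)) * noise_scale

uncertainty = realizations.std(axis=0)                    # per-cell spread across realizations
threshold = np.quantile(uncertainty, 0.85)                # keep the most uncertain 15% of cells
resurvey_mask = uncertainty > threshold

print(f"cells flagged for adaptive re-survey: {resurvey_mask.mean():.0%} of the field")
```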

Keywords: soil electrical conductivity, adaptive sampling, conditional simulation, spatial uncertainty, site-specific management

Procedia PDF Downloads 118
5259 Improving Public Sectors’ Policy Direction on Large Infrastructure Investment Projects: A Developmental Approach

Authors: Ncedo Cameron Xhala

Abstract:

Several public sector institutions lack policy direction on how to successfully implement their large infrastructure investment projects. It is important to improve strategic policy direction in public sector institutions in order to improve the planning, management and implementation of large infrastructure investment projects. It is also important to improve the understanding of the internal and external pressures exerted on large infrastructure projects. The significance lies in fulfilling the public sector’s mandate, aligning the sector’s scarce resources and stakeholders, and improving project management processes. The study used a case study approach underpinned by a constructionist approach. The study used a theoretical sampling technique when selecting study participants, followed by a snowball sampling technique to purposefully select an identified case study project. The study was qualitative in nature; it collected and analyzed qualitative empirical data from five purposefully selected subject matter experts and analyzed the case study documents qualitatively. The study used a semi-structured interview approach, with face-to-face interviews guided by an interview guide with focused questions. The study used a three-step coding process when analysing the qualitative empirical data. Findings reveal that an improvement of strategic policy direction in public sector institutions improves integration in the planning, management and implementation of large infrastructure investment projects. Findings show the importance of understanding the external and internal pressures when implementing the public sector’s large infrastructure investment projects. The study concludes that strategic policy direction in public sector institutions results in improved planning, financing, delivery, monitoring and evaluation, and successful implementation of the public sector’s large infrastructure investment projects.

Keywords: implementation, infrastructure, investment, management

Procedia PDF Downloads 140
5258 Decommissioning of Nuclear Power Plants: The Current Position and Requirements

Authors: A. Stifi, S. Gentes

Abstract:

Undoubtedly, from a construction perspective, the use of explosives can remove a large facility such as a 40-storey building, which took almost 3 to 4 years to construct, in a few minutes. Usually, reconstruction or decommissioning, the last phase of the life cycle of any facility, is considered to be the shortest. However, this proves to be wrong in the case of nuclear power plants. Statistics show that in the last 30 years, the construction of a nuclear power plant took an average time of 6 years, whereas it is estimated that decommissioning of such plants may take even a decade or more. This paper addresses the decommissioning phase of a nuclear power plant, which needs to be given more attention and encouragement from research institutes as well as the nuclear industry. Currently, there are 437 nuclear power reactors in operation and 70 reactors under construction. Around 139 nuclear facilities have already been shut down and are in different decommissioning stages, and approximately 347 nuclear reactors will be in the decommissioning phase in the next 20 years (assuming an operation time of 40 years per reactor). This fact raises the following two questions: (1) How ready are the nuclear and construction industries to face the challenges of decommissioning projects? (2) What is required for safe and reliable decommissioning project delivery? The decommissioning of nuclear facilities across the globe suffers severe time and budget overruns. The decommissioning processes are largely executed by manual labour, while changes in regulations are observed accordingly. In terms of research and development, some research projects and activities are being carried out in this area, but much more is required. The near future of decommissioning could be improved through a sustainable development strategy in which all stakeholders agree to implement innovative technologies, especially for dismantling and decontamination processes, and to deliver reliable and safe decommissioning. The scope of technology transfer from other industries should be explored. For example, remotely operated robotic technologies used in the automobile and production industries to reduce time and improve efficiency and safety could be tried here. However, while innovative technologies are highly requested, they alone are not enough; the implementation of creative and innovative management methodologies should also be investigated and applied. Lean management, with its main concept of 'elimination of waste within a process', is a suitable example here. Thus, cooperation between international organisations and related industries, together with knowledge sharing, may serve as a key factor for successful decommissioning projects.

Keywords: decommissioning of nuclear facilities, innovative technology, innovative management, sustainable development

Procedia PDF Downloads 459
5257 Microfluidic Device for Real-Time Electrical Impedance Measurements of Biological Cells

Authors: Anil Koklu, Amin Mansoorifar, Ali Beskok

Abstract:

Dielectric spectroscopy (DS) is a noninvasive, label-free technique for long-term, real-time measurements of the impedance spectra of biological cells. DS enables characterization of cellular dielectric properties such as membrane capacitance and cytoplasmic conductivity. We have developed a lab-on-a-chip device that uses an electro-activated microwell array for loading, DS measurements, and unloading of biological cells. We utilized dielectrophoresis (DEP) to capture target cells inside the wells and release them after the DS measurement. DEP is a label-free technique that exploits differences among the dielectric properties of particles. In detail, DEP is the motion of polarizable particles suspended in an ionic solution and subjected to a spatially non-uniform external electric field. To the best of our knowledge, this is the first microfluidic chip that combines DEP and DS to analyze biological cells using electro-activated wells. Device performance was tested using two different prostate cancer cell lines (RV122, PC-3). Impedance measurements were conducted at 0.2 V in the 10 kHz to 40 MHz range with 6 s time resolution. An equivalent circuit model was developed to extract the cell membrane capacitance and cell cytoplasmic conductivity from the impedance spectra. We report the time course of the variations in the dielectric properties of PC-3 and RV122 cells suspended in low conductivity medium (LCB), which enhances the dielectrophoretic and impedance responses, and their response to a sudden pH change from a pH of 7.3 to a pH of 5.8. It is shown that the microfluidic chip allowed online measurements of the dielectric properties of prostate cancer cells and the assessment of cellular-level variations under external stimuli such as different buffer conductivities and pH. Based on these data, we intend to deploy the current device for single cell measurements by fabricating separately addressable N × N electrode platforms. Such a device will allow time-dependent dielectric response measurements for individual cells with the ability to selectively release them using negative DEP and pressure-driven flow.
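
To illustrate the equivalent-circuit idea in general terms, the sketch below evaluates the impedance of a simplified circuit (suspending-medium resistance in parallel with a branch of membrane capacitance in series with cytoplasm resistance) over the 10 kHz to 40 MHz sweep mentioned above. This circuit topology and the component values are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def cell_suspension_impedance(f, r_medium, r_cytoplasm, c_membrane):
    """Impedance of a simplified equivalent circuit: the medium resistance in
    parallel with (membrane capacitance in series with cytoplasm resistance)."""
    w = 2 * np.pi * f
    z_cell_branch = r_cytoplasm + 1.0 / (1j * w * c_membrane)
    return 1.0 / (1.0 / r_medium + 1.0 / z_cell_branch)

freqs = np.logspace(4, np.log10(40e6), 200)       # 10 kHz - 40 MHz sweep as in the abstract
z = cell_suspension_impedance(freqs, r_medium=50e3, r_cytoplasm=10e3, c_membrane=5e-12)
print(f"|Z| at 10 kHz = {abs(z[0])/1e3:.1f} kOhm, at 40 MHz = {abs(z[-1])/1e3:.1f} kOhm")
```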

Keywords: microfluidic, microfabrication, lab on a chip, AC electrokinetics, dielectric spectroscopy

Procedia PDF Downloads 136
5256 Combining Experiments and Surveys to Understand the Pinterest User Experience

Authors: Jolie M. Martin

Abstract:

Running experiments while logging detailed user actions has become the standard way of testing product features at Pinterest, as at many other Internet companies. While this technique offers plenty of statistical power to assess the effects of product changes on behavioral metrics, it does not often give us much insight into why users respond the way they do. By combining at-scale experiments with smaller surveys of users in each experimental condition, we have developed a unique approach for measuring the impact of our product and communication treatments on user sentiment, attitudes, and comprehension.

Keywords: experiments, methodology, surveys, user experience

Procedia PDF Downloads 304
5255 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Human motion recognition has received extensive attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. Such a modification helps avoid the misclassification that can happen when recognizing similar motions. Two experiments were conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we built a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most of the existing methods that used the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
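
The sketch below shows the scoring step such a system might use: a scaled forward algorithm computes each discrete HMM's log-likelihood, and each class is scored by its forward-direction and backward-direction models, as described above. The two-state toy models, their parameters, and the way the two scores are averaged are invented for illustration.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM: log P(observations | model).
    pi: initial probs (N,), A: transitions (N x N), B: emissions (N x M)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()                  # scaling avoids numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_p += np.log(s)
        alpha /= s
    return log_p

def classify(obs, models):
    """models: {label: (forward_hmm, backward_hmm)} with hmm = (pi, A, B).
    Each class is scored by its forward model on the sequence and its
    backward model on the time-reversed sequence."""
    scores = {}
    for label, (fwd, bwd) in models.items():
        scores[label] = 0.5 * (log_likelihood(obs, *fwd) +
                               log_likelihood(obs[::-1], *bwd))
    return max(scores, key=scores.get)

# Toy two-state models for two hypothetical gesture classes (parameters made up)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B_wave = np.array([[0.8, 0.1, 0.1], [0.2, 0.6, 0.2]])
B_stop = np.array([[0.1, 0.2, 0.7], [0.3, 0.3, 0.4]])
models = {"wave": ((pi, A, B_wave), (pi, A, B_wave)),
          "stop": ((pi, A, B_stop), (pi, A, B_stop))}
print(classify(np.array([0, 1, 0, 0, 1]), models))
```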

Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model

Procedia PDF Downloads 194
5254 Design of DNA Origami Structures Using LAMP Products as a Combined System for the Detection of Extended Spectrum B-Lactamases

Authors: Kalaumari Mayoral-Peña, Ana I. Montejano-Montelongo, Josué Reyes-Muñoz, Gonzalo A. Ortiz-Mancilla, Mayrin Rodríguez-Cruz, Víctor Hernández-Villalobos, Jesús A. Guzmán-López, Santiago García-Jacobo, Iván Licona-Vázquez, Grisel Fierros-Romero, Rosario Flores-Vallejo

Abstract:

The group of β-lactam antibiotics includes some of the most frequently used small drug molecules against bacterial infections. Nevertheless, an alarming decrease in their efficacy has been reported due to the emergence of antibiotic-resistant bacteria. Infections caused by bacteria expressing extended spectrum β-lactamases (ESBLs) are difficult to treat and account for higher morbidity and mortality rates, delayed recovery, and a high economic burden. According to the Global Report on Antimicrobial Resistance Surveillance, it is estimated that mortality due to resistant bacteria will reach 10 million cases per year worldwide. These facts highlight the importance of developing low-cost and readily accessible detection methods for drug-resistant ESBL bacteria to prevent their spread and promote accurate and fast diagnosis. Bacterial detection is commonly done using molecular diagnostic techniques, where PCR stands out for its high performance. However, this technique requires specialized equipment not available everywhere, is time-consuming, and has a high cost. Loop-Mediated Isothermal Amplification (LAMP) is an alternative technique that works at a constant temperature, significantly decreasing the equipment cost. It yields double-stranded DNA of several lengths with repetitions of the target DNA sequence as a product. Although positive and negative results from LAMP can be discriminated by colorimetry, fluorescence, and turbidity, there is still much room for improvement in point-of-care implementation. DNA origami is a technique that allows the formation of 3D nanometric structures by folding a large single-stranded DNA (scaffold) into a determined shape with the help of short DNA sequences (staples), which hybridize with the scaffold. This research aimed to generate DNA origami structures using LAMP products as scaffolds to improve the sensitivity of ESBL detection in point-of-care diagnosis. For this study, the coding sequence of the CTM-X-15 ESBL of E. coli was used to generate the LAMP products. The set of LAMP primers was designed using PrimerExplorerV5. As a result, a target sequence of 200 nucleotides from the CTM-X-15 ESBL was obtained. Afterward, eight different DNA origami structures were designed using the target sequence in SDCadnano and analyzed with CanDo to evaluate the stability of the 3D structures. The designs were constructed minimizing the total number of staples to reduce cost and complexity for point-of-care applications. After analyzing the DNA origami designs, two structures were selected. The first one was a flat zig-zag structure, while the second one was a wall-like shape. Given the sequence repetitions in the scaffold sequence, both could be assembled with only six different staples each, ranging between 18 and 80 nucleotides. Simulations of both structures were performed using scaffolds of different sizes, yielding stable structures in all cases. The generation of the LAMP products was tested by colorimetry and electrophoresis. The formation of the DNA structures was analyzed using electrophoresis and colorimetry. The modeling of novel detection methods through bioinformatics tools allows reliable control and prediction of results. To our knowledge, this is the first study that uses LAMP products and DNA origami in combination to detect ESBL-producing bacterial strains, which represents a promising methodology for diagnosis at the point-of-care.

Keywords: beta-lactamases, antibiotic resistance, DNA origami, isothermal amplification, LAMP technique, molecular diagnosis

Procedia PDF Downloads 205
5253 Structural Health Monitoring using Fibre Bragg Grating Sensors in Slab and Beams

Authors: Pierre van Tonder, Dinesh Muthoo, Kim Twiname

Abstract:

Many existing and newly built structures are constructed on the basis of the engineer's design and the workmanship of the construction company. However, when considering larger structures where more people are exposed to the building, its structural integrity is of great importance for the safety of its occupants (Raghu, 2013). But how can the structural integrity of a building be monitored efficiently and effectively? This is where the fourth industrial revolution steps in: with minimal human interaction, data can be collected, analysed, and stored, and can also give an indication of any inconsistencies found in the data collected. This is where the fibre Bragg grating (FBG) monitoring system is introduced. This paper illustrates how data can be collected and converted to develop stress-strain behaviour and to produce bending moment diagrams for the utilisation and prediction of the structure's integrity. Embedded fibre optic sensors, fibre Bragg grating sensors in particular, were used in this study. The procedure made use of the wavelength-shift demodulation technique and an inscription process based on the phase mask technique. The fibre optic sensors considered in this report were photosensitive and embedded in the slab and beams for data collection and analysis. Two sets of fibre cables were inserted, one purposely to collect temperature recordings and the other to collect strain and temperature. The data were collected over a period of time and analysed to produce bending moment diagrams and make predictions of the structure's integrity. The data indicated that the fibre Bragg grating sensing system is useful and can be used for structural health monitoring in any environment. From the experimental data for the slab and beams, the moments were found to be 64.33 kN.m, 64.35 kN.m and 45.20 kN.m (from the experimental bending moment diagram), whereas as per the idealistic (Ultimate Limit State) case, values of 133 kN.m and 226.2 kN.m were obtained. The difference in values gave room for an early warning system, in other words, a reserve capacity of approximately 50% to failure.
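
A minimal sketch of the wavelength-to-strain conversion behind such a system is given below, including the temperature compensation that the dedicated temperature-only fibre makes possible. The photo-elastic and thermal coefficients are typical silica-fibre textbook values, not the paper's calibration, and the numbers in the example are invented.

```python
# Typical silica-fibre constants (illustrative, not the study's calibration)
P_E = 0.22          # effective photo-elastic coefficient
K_T = 6.7e-6        # combined thermo-optic + thermal-expansion coefficient (1/degC)

def fbg_strain(d_lambda_nm, lambda0_nm, d_temp_c):
    """Return mechanical strain (microstrain) from a Bragg wavelength shift,
    after removing the thermal contribution measured on the temperature fibre."""
    total = d_lambda_nm / lambda0_nm                 # relative wavelength shift
    thermal = K_T * d_temp_c                         # temperature-induced part
    return (total - thermal) / (1.0 - P_E) * 1e6     # microstrain

# e.g. a 0.12 nm shift at 1550 nm with a 2 degC rise on the temperature fibre
print(f"{fbg_strain(0.12, 1550.0, 2.0):.0f} microstrain")
```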

Keywords: fibre bragg grating, structural health monitoring, fibre optic sensors, beams

Procedia PDF Downloads 124
5252 Efficacy of Gamma Radiation on the Productivity of Bactrocera oleae Gmelin (Diptera: Tephritidae)

Authors: Mehrdad Ahmadi, Mohamad Babaie, Shiva Osouli, Bahareh Salehi, Nadia Kalantaraian

Abstract:

The olive fruit fly, Bactrocera oleae Gmelin (Diptera: Tephritidae), is one of the most serious pests in olive orchards in the olive-growing provinces of Iran. The females lay eggs in green olive fruit, and the larvae hatch inside the fruit, where they feed upon the fruit matter. One of the main ecologically friendly and species-specific systems of pest control is the sterile insect technique (SIT), which is based on the release of large numbers of sterilized insects. The objective of our work was to develop SIT against B. oleae by using gamma radiation for laboratory and field trials in Iran. Oviposition by females mated with irradiated males is one of the main parameters used to determine the success of SIT. To determine the sterilizing dose, pupae were exposed to 0 to 160 Gy of gamma radiation. The main factor in SIT is the productivity of females that are mated by irradiated males. Adults emerging from irradiated pupae were mated with untreated adults of the same age by confining them inside transparent cages. The fecundity of the irradiated males mated with non-irradiated females decreased with increasing radiation dose level. It was observed that the number of eggs and the percentage of egg hatching were significantly (P < 0.05) affected in IM x NF crosses compared with NM x NF crosses in the F1 generation at all doses. Also, the statistical analysis showed a significant difference (P < 0.05) in the mean number of eggs laid between irradiated and non-irradiated females crossed with irradiated males, which suggests that the males were susceptible to gamma radiation. The egg hatching percentage declined markedly with the increase of the radiation dose of the treated males in mating trials, which demonstrates that the egg hatch rate was dose dependent. Our results indicated that gamma radiation affects the longevity of irradiated B. oleae larvae (established from irradiated pupae) and significantly increased their larval duration. The results show that gamma radiation and SIT can be used successfully against the olive fruit fly.

Keywords: fertility, olive fruit fly, radiation, sterile insect technique

Procedia PDF Downloads 188
5251 CFD Simulation of the Pressure Distribution in the Upper Airway of an Obstructive Sleep Apnea Patient

Authors: Christina Hagen, Pragathi Kamale Gurmurthy, Thorsten M. Buzug

Abstract:

CFD simulations are performed in the upper airway of a patient suffering from obstructive sleep apnea (OSA), a sleep-related breathing disorder characterized by repetitive partial or complete closures of the upper airways. The simulations are aimed at getting a better understanding of the pathophysiological flow patterns in an OSA patient. The simulation is compared to medical data from a sleep endoscopic examination under sedation. A digital model consisting of surface triangles of the upper airway is extracted from the MR images by a region-growing segmentation process, followed by careful manual refinement. The computational domain includes the nasal cavity, with the nostrils as the inlet areas, and the pharyngeal volume, with an outlet underneath the larynx. At the nostrils, a flat inflow velocity profile is prescribed by choosing the velocity such that a volume flow rate of 150 ml/s is reached. Behind the larynx, at the outlet, a pressure of -10 Pa is prescribed. The stationary incompressible Navier-Stokes equations are numerically solved using finite elements. A grid convergence study has been performed. The results show an amplification of the maximal velocity to about 2.5 times the inlet velocity at a constriction of the pharyngeal volume in the area of the tongue. The same region also shows the highest pressure drop, of about 5 Pa. This is in agreement with the sleep endoscopic examinations of the same patient under sedation, which show complete contractions in the area of the tongue. CFD simulations can become a useful tool in the diagnosis and therapy of obstructive sleep apnea by giving insight into the patient's individual fluid dynamical situation in the upper airways, providing a better understanding of the disease where experimental measurements are not feasible. Within this study, it could be shown, on the one hand, that constriction areas within the upper airway lead to a significant pressure drop and, on the other hand, that the area of pressure drop agrees well with the area of contraction.
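
For reference, the governing equations and boundary conditions described above can be summarized as follows. The no-slip condition on the airway walls is a standard assumption added here for completeness, as the abstract only states the inlet and outlet conditions.

```latex
% Stationary incompressible Navier-Stokes equations solved in the airway volume
% (standard form; rho and mu denote the density and dynamic viscosity of air):
\begin{align}
  \rho\,(\mathbf{u}\cdot\nabla)\,\mathbf{u} &= -\nabla p + \mu\,\nabla^{2}\mathbf{u},
  & \nabla\cdot\mathbf{u} &= 0,
\end{align}
% with the boundary conditions used in the study: a flat velocity profile at the
% nostrils chosen so that the total inflow is 150 ml/s, p = -10 Pa at the outlet
% below the larynx, and (assumed here) no-slip, u = 0, on the airway walls.
```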

Keywords: biomedical engineering, obstructive sleep apnea, pharynx, upper airways

Procedia PDF Downloads 294
5250 Modelling and Simulation of Aero-Elastic Vibrations Using System Dynamic Approach

Authors: Cosmas Pandit Pagwiwoko, Ammar Khaled Abdelaziz Abdelsamia

Abstract:

Flutter, as a phenomenon of flow-induced and self-excited vibration, has to be recognized considering its harmful effect on the structure, especially at the aircraft design stage. This phenomenon is also important for wind energy harvesters based on fluttering surfaces because of their effective operational velocity range. This multi-physics occurrence can be represented by two governing equations, in the fluid and the structure simultaneously, respecting certain boundary conditions on the surface of the body. In this work, the equations are resolved separately by two distinct solvers, one time step for each domain. The modelling and simulation of this flow-structure interaction in ANSYS show the effectiveness of this loosely coupled method in representing the flutter phenomenon; however, the process is time-consuming for design purposes. Therefore, another technique using the same weakly coupled aero-structural model is proposed, based on a system dynamics approach. In this technique, the aerodynamic forces are calculated using a singularity function for a range of frequencies and certain natural mode shapes, and are transformed into the time domain by employing an approximation model of rational fractions in the Laplace variable. The multi-degree-of-freedom representation of the structure, coupled with a transfer function of the aerodynamic forces, can then be simulated in the time domain on a block-diagram platform such as MATLAB Simulink. The dynamic response of flutter at a certain velocity can be evaluated against another established flutter calculation in the frequency domain, the k-method. In this method, a parameter of artificial structural damping is inserted into the equation of motion to assure the energy balance of the flow and the vibrating structure. The simulation in the time domain is of particular interest as it enables the structural non-linear factors to be applied accurately. Experimental tests on a fluttering airfoil in the wind tunnel are also conducted to validate the method.
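
A compact statement of the k-method referred to above, in generic notation rather than the paper's own, is sketched below: the artificial structural damping g is introduced so that the aeroelastic system admits a harmonic solution, and flutter is found where the required g matches the actual structural damping (often g = 0).

```latex
\begin{equation}
  \left[\, -\omega^{2}\mathbf{M} + (1 + i g)\,\mathbf{K}
  - \tfrac{1}{2}\rho V^{2}\,\mathbf{Q}(k) \,\right]\mathbf{q} = \mathbf{0},
  \qquad k = \frac{\omega b}{V},
\end{equation}
% M, K: modal mass and stiffness matrices; Q(k): aerodynamic force matrix at
% reduced frequency k; q: modal coordinates; b: reference semi-chord; V: airspeed.
```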

Keywords: flutter, flow-induced vibration, flow-structure interaction, non-linear structure

Procedia PDF Downloads 298
5249 Characterization of Ethanol-Air Combustion in a Constant Volume Combustion Bomb Under Cellularity Conditions

Authors: M. Reyes, R. Sastre, P. Gabana, F. V. Tinaut

Abstract:

In this work, an optical characterization of ethanol-air laminar combustion is presented in order to investigate the origin of the instabilities developed during combustion, the onset of the cellular structure, and the laminar burning velocity. Experimental tests of ethanol-air mixtures have been performed in an optical cylindrical constant volume combustion bomb equipped with a Schlieren technique to record the flame development and the flame front surface wrinkling. With this procedure, it is possible to obtain the flame radius and characterize the time at which the instabilities become visible through the appearance of cells and the development of the cellular structure. Ethanol is an aliphatic alcohol with interesting characteristics for use as a fuel in internal combustion engines and can be biologically synthesized from biomass. The laminar burning velocity is an important parameter used in simulations to obtain the turbulent flame speed, whereas the flame front structure and the instabilities developed during combustion are important to understand the transition to turbulent combustion and to characterize the increment in the flame propagation speed in premixed flames. The cellular structure is spontaneously generated by volume forces, and by diffusional-thermal and hydrodynamic instabilities. Many authors have studied the combustion of ethanol-air and mixtures of ethanol with other fuels. However, there is a lack of works that investigate the instabilities and the development of a cellular structure in ethanol flames; only a few works have characterized the ethanol-air combustion instabilities in spherical flames. In the present work, a parametric study is made by varying the fuel/air equivalence ratio (0.8-1.4), initial pressure (0.15-0.3 MPa) and initial temperature (343-373 K), using an I-optimal design of experiments. In rich mixtures, it is possible to distinguish the cellular structure formed by the hydrodynamic effect from that formed by the thermo-diffusive effect. Results show that ethanol-air flames tend to stabilize as the equivalence ratio decreases in lean mixtures and develop a cellular structure with increasing initial pressure and temperature.

Keywords: ethanol, instabilities, premixed combustion, schlieren technique, cellularity

Procedia PDF Downloads 54
5248 Adaptive Routing in NoC-Based Heterogeneous MPSoCs

Authors: M. K. Benhaoua, A. E. H. Benyamina, T. Djeradi, P. Boulet

Abstract:

In this paper, we propose an adaptive routing approach that takes the routing of communications into account in order to optimize overall performance. The technique uses a newly proposed algorithm to route communications between tasks. The proposed routing of communications leads to better optimization of several performance metrics (time and energy consumption). Experimental results show that the proposed routing approach provides significant performance improvements compared with static routing.
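Since the abstract does not reproduce the algorithm itself, the snippet below only illustrates the general idea of congestion-aware minimal adaptive routing on a 2D-mesh NoC; it is a generic sketch, not the routing algorithm proposed by the authors.

```python
# Generic congestion-aware minimal adaptive routing on a 2D-mesh NoC (sketch).

def route(current, dest, congestion):
    """Pick the next hop among minimal directions with the lightest load.

    current, dest : (x, y) router coordinates
    congestion    : dict mapping direction ('E','W','N','S') to output load
    """
    cx, cy = current
    dx, dy = dest
    candidates = []
    if dx > cx:
        candidates.append('E')
    elif dx < cx:
        candidates.append('W')
    if dy > cy:
        candidates.append('N')
    elif dy < cy:
        candidates.append('S')
    if not candidates:          # packet already at its destination
        return None
    # Adaptive choice: take the least congested of the minimal directions
    return min(candidates, key=lambda d: congestion[d])

# Example: packet at (1, 1) heading to (3, 2); the east port is busier,
# so the router sends the packet north first while staying on a minimal path.
print(route((1, 1), (3, 2), {'E': 5, 'W': 0, 'N': 1, 'S': 2}))   # -> 'N'
```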

Keywords: multi-processor systems-on-chip (mpsocs), network-on-chip (noc), heterogeneous architectures, adaptive routing

Procedia PDF Downloads 364
5247 Smart Automated Furrow Irrigation: A Preliminary Evaluation

Authors: Jasim Uddin, Rod Smith, Malcolm Gillies

Abstract:

Surface irrigation is the most popular irrigation method worldwide. However, two issues concern irrigators, particularly given increasing water scarcity in recent years: low efficiency and a large labour requirement. To address these issues, a smart automated furrow irrigation system that can be operated from digital devices such as a smartphone, iPad or computer was conceptualised, and a preliminary evaluation was conducted in this study. The smart automated system integrates commercially available software and hardware: the real-time surface irrigation optimisation software SISCO and Rubicon Water's surface irrigation automation hardware and software. The automated system consists of an automatic water delivery system with 300 mm flexible pipes attached to both sides of a remotely controlled valve to operate the irrigation; a water level sensor to obtain the real-time inflow rate from the measured head in the channel; advance sensors to measure the advance time to particular points of the irrigated field; and a solar-powered telemetry system, including a base station, to connect all field sensors to the main server. On the basis of the field data, SISCO optimises the ongoing irrigation, determines the optimum cut-off time for that irrigation and sends this information to the control valve to stop the irrigation at the cut-off time. The preliminary evaluation shows that the automated surface irrigation worked reasonably well without manual intervention. The evaluation of farmer-managed irrigation events shows the potential to save a significant amount of water and labour. Substantial economic and social benefits are expected in rural industries from adopting this system. The future outcome of this work would be a fully tested commercial adaptive real-time furrow irrigation system able to compete with the pressurised alternatives of centre pivot or lateral move machines on capital cost and on water and labour savings, but without their massive energy costs.
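As an illustration of the kind of real-time decision such a system has to make (and emphatically not the SISCO optimisation itself), the sketch below fits a power-law advance curve to advance-sensor readings, predicts when the wetting front reaches the end of the furrow, and derives a simple cut-off time; the field length, sensor readings and cut-off fraction are all assumed values.

```python
# Simplified real-time cut-off estimate from furrow advance data (illustrative).
import numpy as np

field_length = 300.0                          # m (assumed)
t_obs = np.array([10.0, 25.0, 45.0])          # min, times reported by advance sensors
x_obs = np.array([80.0, 160.0, 230.0])        # m, positions of those sensors (assumed)

# Fit the power-law advance curve x = p * t**r by regression in log-log space
r, log_p = np.polyfit(np.log(t_obs), np.log(x_obs), 1)
p = np.exp(log_p)

# Predicted time for the wetting front to reach the downstream end of the furrow
t_end = (field_length / p) ** (1.0 / r)
print(f"advance curve: x = {p:.2f} * t^{r:.2f}; predicted completion at {t_end:.0f} min")

# A naive cut-off rule: command the remote valve at a fixed fraction of that time
cutoff_fraction = 0.9                         # assumed management choice
print(f"send cut-off command at {cutoff_fraction * t_end:.0f} min")
```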

Keywords: furrow irrigation, smart automation, infiltration, SISCO, real-time irrigation, adaptive control

Procedia PDF Downloads 435
5246 A Case Study on the Effect of a Mobility Focused Exercise Training in Rehabilitation of an Elite Weightlifter with Shoulder Pain and Weakness

Authors: Lingling Li, Peng Zhao, Runze Guan, Alice Jones, Tao Yu

Abstract:

Background: Shoulder pain and weakness are associated with complex pathologies and often preclude weightlifters from participation in training. The role and mode of exercise training in weightlifters with shoulder pathology remain unclear. Objectives: This case report describes an exercise program in the management of an elite weightlifter with a primary complaint of right shoulder pain and weakness. Methods: A 22-year-old weightlifter presented with a 2-year history of right shoulder pain and weakness that was worsened by routine weightlifting training; the symptoms were not relieved by steroid injection, manual therapy or usual physiotherapy. There was a limitation in all active ranges of motion, especially horizontal extension (13°) and external rotation (41°), with pain intensities of 4/10 and 10/10 (numeric pain rating score), respectively. Muscle weakness was most significant at supraspinatus and teres minor (38% and 27%, respectively, compared with his left shoulder; hand-held dynamometry, Micro FET2). An exercise training program focusing on improving mobility was designed for this athlete following a comprehensive physical assessment. Exercises included specific stretching, muscle activation and scapular stability training, performed once per day for 60 minutes per session. All exercises were completed under instruction, as pain allowed. Quantitative assessment was conducted at the end of each week for 3 weeks. Outcomes: After the program, the athlete was pain-free in all movements except the O'Brien active compression internal rotation test, in which pain was nevertheless reduced from 10/10 to 3/10. Horizontal extension and external rotation range increased to 79° and 120°, respectively, and the strength of all rotator cuff muscles returned to normal. At the 1-month follow-up, the athlete was completely pain-free and had returned to normal function and weightlifting training activities. The outcomes were sustained at the 6-month and one-year follow-ups. Conclusion: This case report supports the use of a mobility-focused exercise program for the management of shoulder pain and weakness in an elite weightlifting athlete.

Keywords: exercise training, mobility, rehabilitation, shoulder pain, weightlifting

Procedia PDF Downloads 165
5245 Lipid Extraction from Microbial Cell by Electroporation Technique and Its Influence on Direct Transesterification for Biodiesel Synthesis

Authors: Abu Yousuf, Maksudur Rahman Khan, Ahasanul Karim, Amirul Islam, Minhaj Uddin Monir, Sharmin Sultana, Domenico Pirozzi

Abstract:

Traditional biodiesel feedstocks such as edible and plant oils, animal fats and waste cooking oil have been replaced by microbial oil in recent biodiesel research. The well-known community of microbial oil producers includes microalgae, oleaginous yeasts and seaweeds. Conventional transesterification of microbial oil to produce biodiesel is slow, energy-consuming, cost-ineffective and environmentally harmful. The process involves several steps: microbial biomass drying, cell disruption, oil extraction, solvent recovery, oil separation and transesterification. Therefore, direct transesterification for biodiesel synthesis has been studied over the last few years. It combines all the steps in a single reactor and eliminates the steps of biomass drying, oil extraction and separation from solvent. It appears to be a cheaper and faster process, but a number of difficulties need to be solved before it can be applied at large scale. The main challenges are disrupting microbial cells in bulk volume and accelerating the esterification reaction, because the water content of the medium slows the reaction rate. Several methods have been proposed, but none of them is ready for large-scale implementation. It is still a great challenge to extract the maximum lipid from microbial cells (yeast, fungi, algae) with minimum energy input. Electroporation produces a significant increase in cell conductivity and permeability through the application of an external electric field. It alters the size and structure of the cells to increase their porosity and disrupts the microbial cell walls within a few seconds, so that the intracellular lipid leaks out into the solution. Therefore, incorporating the electroporation technique contributes to the direct transesterification of microbial lipids by increasing the rate and efficiency of biodiesel production.
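To give a sense of the field strengths involved, the short sketch below uses the commonly quoted Schwan estimate of the induced transmembrane potential of a spherical cell, dV = 1.5 · E · r · cos(theta), to back-calculate the external field needed to reach a permeabilisation threshold; the cell radius and the roughly 1 V threshold are assumed, generic values rather than measurements from this work.

```python
# Back-of-the-envelope estimate of the external field needed for electroporation,
# based on Schwan's steady-state relation for a spherical cell (assumed values).

r_cell      = 2.5e-6   # m, assumed radius of an oleaginous yeast cell
dV_critical = 1.0      # V, commonly cited permeabilisation threshold

# Field strength needed to reach the threshold at the cell poles (cos(theta) = 1)
E_required = dV_critical / (1.5 * r_cell)      # V/m

print(f"required external field: {E_required / 1e5:.1f} kV/cm")
```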

Keywords: biodiesel, electroporation, microbial lipids, transesterification

Procedia PDF Downloads 265