Search results for: Challenge
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 546


126 An Intelligent Scheme Switching for MIMO Systems Using Fuzzy Logic Technique

Authors: Robert O. Abolade, Olumide O. Ajayi, Zacheaus K. Adeyemo, Solomon A. Adeniran

Abstract:

Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality of service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation, and it involves selecting among different MIMO transmission schemes or modes so as to adapt to the varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for the MIMO system consisting of two schemes - transmit diversity (TD) and spatial multiplexing (SM) - using fuzzy logic technique. In this method, two channel quality indicators (CQI) namely average received signal-to-noise ratio (RSNR) and received signal strength indicator (RSSI) are measured and are passed as inputs to the fuzzy logic system which then gives a decision – an inference. The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy logic – based switching technique outperforms conventional static switching technique in terms of bit error rate and spectral efficiency.
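
As an illustration of the kind of switching logic described in this abstract, the following minimal sketch implements a Mamdani-style fuzzy decision between transmit diversity (TD) and spatial multiplexing (SM) from the two channel quality indicators. It is not the authors' implementation; the membership breakpoints, rules and thresholds are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' system): fuzzy TD/SM switching from RSNR and RSSI.
# All universes, membership breakpoints and rules below are hypothetical placeholders.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with breakpoints a < b < c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def switch_decision(rsnr_db, rssi_dbm):
    # Fuzzify the two CQIs (low / high grades only, for brevity).
    rsnr_low, rsnr_high = tri(rsnr_db, -5, 5, 15), tri(rsnr_db, 10, 20, 30)
    rssi_weak, rssi_strong = tri(rssi_dbm, -100, -85, -70), tri(rssi_dbm, -80, -65, -50)

    # Mamdani rules: poor channel -> TD (robustness), good channel -> SM (throughput).
    fire_td = max(min(rsnr_low, rssi_weak), rsnr_low)
    fire_sm = min(rsnr_high, rssi_strong)

    # Defuzzify on a 0..1 "mode" axis (0 = TD, 1 = SM) by a weighted centroid.
    centroid = (fire_td * 0.0 + fire_sm * 1.0) / (fire_td + fire_sm + 1e-9)
    return "SM" if centroid > 0.5 else "TD"

if __name__ == "__main__":
    print(switch_decision(rsnr_db=22.0, rssi_dbm=-60.0))  # strong channel -> "SM"
    print(switch_decision(rsnr_db=3.0, rssi_dbm=-95.0))   # weak channel -> "TD"
```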

Keywords: Channel quality indicator, fuzzy logic, link adaptation, MIMO, spatial multiplexing, transmit diversity.

PDF Downloads 691
125 To Cloudify or Not to Cloudify

Authors: Laila Yasir Al-Harthy, Ali H. Al-Badi

Abstract:

As an emerging business model, cloud computing has been initiated to satisfy the needs of organizations and to push Information Technology as a utility. The shift to the cloud has changed the way Information Technology departments are traditionally managed and has raised many concerns for both public and private sectors.

The purpose of this study is to investigate the possibility of cloud computing services replacing services provided traditionally by IT departments. Therefore, it aims to 1) explore whether organizations in Oman are ready to move to the cloud; 2) identify the deciding factors leading to the adoption or rejection of cloud computing services in Oman; and 3) provide two case studies, one for a successful Cloud provider and another for a successful adopter.

This paper is based on multiple research methods including conducting a set of interviews with cloud service providers and current cloud users in Oman; and collecting data using questionnaires from experts in the field and potential users of cloud services.

Despite the limitations of bandwidth capacity and Internet coverage in Oman, which create a challenge in adopting the cloud, it was found that many information technology professionals are encouraged to move to the cloud while few are resistant to change.

The recent launch of a new Omani cloud service provider and the entrance of other international cloud service providers in the Omani market make this research extremely valuable as it aims to provide real-life experience as well as two case studies on the successful provision of cloud services and the successful adoption of these services.

Keywords: Cloud computing, cloud deployment models, cloud service models and deciding factors.

PDF Downloads 2263
124 Current Status and Future Trends of Mechanized Fruit Thinning Devices and Sensor Technology

Authors: Marco Lopes, Pedro D. Gaspar, Maria P. Simões

Abstract:

This paper reviews the different concepts that have been investigated concerning the mechanization of fruit thinning, as well as multiple working principles and solutions that have been developed for feature extraction of horticultural products, both in field and industrial environments. Research should be directed towards selective methods, which inevitably need to incorporate some kind of sensor technology. Computer vision often emerges as an obvious solution for unstructured detection problems, although fruits are frequently occluded by leaves regardless of the chosen point of view. Further research on non-traditional sensors that are capable of object differentiation is needed. Ultrasonic and Near Infrared (NIR) technologies have been investigated for applications related to horticultural produce and show a potential to satisfy this need while simultaneously providing spatial information as time-of-flight sensors. Light Detection and Ranging (LIDAR) technology also shows huge potential, but it implies much greater costs and the related equipment is usually much larger, making it less suitable for portable devices, which may serve a purpose on smaller unstructured orchards. Concerning sensor-based methods for on-tree fruit detection, the major challenge is to overcome the occlusion of fruits by leaves and branches. Hence, non-traditional sensors capable of providing some type of differentiation should be investigated.

Keywords: Fruit thinning, horticultural field, portable devices, sensor technologies.

PDF Downloads 940
123 Optimal Duty-Cycle Modulation Scheme for Analog-To-Digital Conversion Systems

Authors: G. Sonfack, J. Mbihi, B. Lonla Moffo

Abstract:

This paper presents an optimal duty-cycle modulation (ODCM) scheme for analog-to-digital conversion (ADC) systems. The overall ODCM-based ADC problem is decoupled into optimal DCM and digital filtering sub-problems, while taking into account the constraints of mutual design parameters between the two. Using a set of three lemmas and four morphological theorems, the ODCM sub-problem is modelled as a nonlinear cost function with nonlinear constraints. Then, a weighted least pth norm of the error between the ideal and predicted frequency responses is used as the cost function for the digital filtering sub-problem. In addition, the MATLAB fmincon and iirlnorm tools are used as the optimal DCM and least pth norm solvers, respectively. Furthermore, a virtual simulation of the overall prototype ODCM-based ADC system is implemented and tested using the Simulink tool according to a relevant set of design data, i.e., 3 kHz of modulating bandwidth, 172 kHz of maximum modulation frequency and 25 MHz of sampling frequency. Finally, the results obtained show that the ODCM-based ADC achieves, within the 3 kHz modulating bandwidth: 57 dBc of SINAD (signal-to-noise and distortion ratio), 58 dB of SFDR (spurious-free dynamic range), -80 dBc of THD (total harmonic distortion), and 10 bits of minimum resolution. These performance levels appear to be highly competitive within the class of oversampling ADC topologies using a 2nd-order IIR (infinite impulse response) decimation filter.
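
The abstract describes the digital-filtering sub-problem as minimizing a weighted least pth norm of the error between ideal and predicted frequency responses. The sketch below shows one way such a cost could be set up and minimized in Python; it is not the authors' MATLAB fmincon/iirlnorm workflow, and the filter parameterization, weights and value of p are hypothetical placeholders around the stated 3 kHz bandwidth and 25 MHz sampling frequency.

```python
# Illustrative sketch: weighted least-pth-norm fit of a stable 2nd-order IIR low-pass
# to an ideal decimation response. Weights, p and the parameterization are placeholders.
import numpy as np
from scipy.signal import freqz
from scipy.optimize import minimize

fs = 25e6                                   # sampling frequency (25 MHz, from the abstract)
f = np.linspace(0, fs / 2, 512)
w = 2 * np.pi * f / fs
ideal = (f <= 3e3).astype(float)            # ideal low-pass over the 3 kHz modulating band
weight = np.where(f <= 3e3, 10.0, 1.0)      # emphasise the passband error (hypothetical)
p = 4                                       # pth-norm order (hypothetical)

def unpack(theta):
    """Stable 2nd-order IIR: complex poles at r*exp(+/-j*phi) with r squashed into (0, 1)."""
    gain, r_raw, phi = theta
    r = 1.0 / (1.0 + np.exp(-r_raw))
    a = np.array([1.0, -2.0 * r * np.cos(phi), r * r])
    b = np.array([gain, 2.0 * gain, gain])  # simple low-pass numerator shape
    return b, a

def cost(theta):
    b, a = unpack(theta)
    _, h = freqz(b, a, worN=w)
    err = np.abs(np.abs(h) - ideal)
    return np.sum(weight * err ** p) ** (1.0 / p)   # weighted least pth norm

res = minimize(cost, x0=np.array([1e-3, 4.0, 0.01]), method="Nelder-Mead",
               options={"maxiter": 2000})
b_opt, a_opt = unpack(res.x)
print("optimised cost:", res.fun)
print("b =", b_opt, "a =", a_opt)
```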

Keywords: Digital IIR filter, morphological lemmas and theorems, optimal DCM-based ADC, virtual simulation, weighted least pth norm.

PDF Downloads 879
122 Managing Iterations in Product Design and Development

Authors: K. Aravindhan, Trishit Bandyopadhyay, Mahesh Mehendale, Supriya Kumar De

Abstract:

The inherent iterative nature of product design and development (PD) poses a significant challenge to reducing the PD time. In order to shorten the time to market, organizations have adopted concurrent development, where multiple specialized tasks and design activities are carried out in parallel. The iterative nature of the work, coupled with the overlap of activities, can result in unpredictable time to completion and significant rework. Many products have missed the time-to-market window due to unanticipated, or rather unplanned, iteration and rework. The iterative and often overlapped processes introduce greater amounts of ambiguity in design and development, where the traditional methods and tools of project management provide less value. In this context, identifying critical metrics to understand the iteration probability is an open research area where a significant contribution can be made, given that iteration has been the key driver of cost and schedule risk in PD projects. Two important questions that the proposed study attempts to address are: Can we predict and identify the number of iterations in a product development flow? Can we provide managerial insights for better control over iteration? The proposal introduces the concept of decision points and, using this concept, intends to develop metrics that can provide managerial insights into iteration predictability. By characterizing the product development flow as a network of decision points, the proposed research intends to delve further into iteration probability and attempts to provide more clarity.
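
Purely as an illustration of how a decision-point view of a development flow can be turned into a schedule estimate (this is not the metric proposed in the study), the sketch below treats each decision point as triggering rework of the preceding activity with some probability, so the expected number of passes through an activity with rework probability p is 1/(1-p). Stage names, durations and probabilities are hypothetical placeholders.

```python
# Illustrative sketch: expected duration of a flow of decision points under a simple
# geometric rework assumption. All figures below are hypothetical placeholders.
stages = {
    "concept_review": {"duration_weeks": 2, "rework_prob": 0.30},
    "design_review":  {"duration_weeks": 4, "rework_prob": 0.20},
    "prototype_test": {"duration_weeks": 3, "rework_prob": 0.40},
}

total = 0.0
for name, s in stages.items():
    expected_passes = 1.0 / (1.0 - s["rework_prob"])      # geometric expectation
    expected_time = expected_passes * s["duration_weeks"]
    total += expected_time
    print(f"{name}: expected passes = {expected_passes:.2f}, "
          f"expected duration = {expected_time:.1f} weeks")

print(f"expected total duration = {total:.1f} weeks")
```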

Keywords: Decision Points, Iteration, Product Design, Rework.

PDF Downloads 2154
121 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs and the steep computational cost of evaluating these integrals poses a major numerical challenge in efficient implementation of quantum chemical software. This work presents a moment-based machine learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest approach was used to identify promising features using a recursive feature elimination approach, which performed best for learning the sign of each coefficient, but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.
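
The sketch below mirrors the pipeline described in this abstract at a toy scale: a random forest behind recursive feature elimination learns the sign of each coefficient, and a small ensemble of two-hidden-layer neural networks fused by the median learns the magnitude. It is not the authors' code; the synthetic features and coefficients stand in for real orbital data.

```python
# Illustrative sketch: sign via RFE + random forest, magnitude via an MLP ensemble (median fusion).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                                       # placeholder orbital features
coef = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=500)    # placeholder coefficients

# 1) Sign of each coefficient: recursive feature elimination around a random forest.
sign_model = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                 n_features_to_select=10)
sign_model.fit(X, (coef > 0).astype(int))

# 2) Magnitude: small ensemble of two-hidden-layer MLPs, fused with the median rule.
mlps = [MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=s)
        for s in range(5)]
for m in mlps:
    m.fit(X, np.abs(coef))

def predict_coefficient(x_row):
    x_row = x_row.reshape(1, -1)
    sign = 1.0 if sign_model.predict(x_row)[0] == 1 else -1.0
    magnitude = np.median([m.predict(x_row)[0] for m in mlps])
    return sign * magnitude

print(predict_coefficient(X[0]), "vs true", coef[0])
```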

Keywords: Quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction.

PDF Downloads 69
120 An Assessment of Software Process Optimization Compared to International Best Practice in Bangladesh

Authors: Mohammad Shahadat Hossain Chowdhury, Tania Taharima Chowdhary, Hasan Sarwar

Abstract:

The challenge for software development houses in Bangladesh is to find a path that uses a minimal set of processes rather than the extensive practice and process areas of CMMI or ISO. Small and medium-sized organizations in Bangladesh want to ensure a minimum of basic Software Process Improvement (SPI) in day-to-day operational activities, in the expectation that these basic practices will help them realize their companies' improvement goals. This paper focuses on the key issues in basic software practices for small and medium-sized software organizations that cannot afford CMMI, ISO, ITIL and similar compliance certifications. This research also suggests a basic software process practice model for Bangladesh and maps these suggestions to international best practice. In today's competitive IT environment, small and medium-sized software companies require collaboration and strengthening to become an inseparable part of the global IT scenario. This research performed investigations and analyses of several projects' life cycles, current good practices, effective approaches, and the realities and pain areas of practitioners. We carried out reasoning, root cause analysis, and comparative analysis of various approaches, methods and practices, and weighed the justifications of CMMI against real-life conditions. We avoided reinventing the wheel; our focus is on a minimal set of practices that ensures satisfaction for both organizations and software customers.

Keywords: Compare with CMMI practices, key success factors, small and medium software house, software process improvement, software process optimization.

PDF Downloads 1820
119 Blood Lymphocyte and Neutrophil Response of Cultured Rainbow Trout, Oncorhynchus mykiss, Administered Varying Dosages of an Oral Immunomodulator – ‘Fin-Immune™’

Authors: Duane Barker, John Holliday

Abstract:

In a 10-week (May-August 2008) Phase I trial, 840 1+ rainbow trout, Oncorhynchus mykiss, received a commercial oral immunomodulator, Fin-Immune™, at four different dosages (0, 10, 20 and 30 mg g-1) to evaluate immune response and growth. The overall objective was to determine an optimal dosage of this product for rainbow trout that provides enhanced immunity with maximal growth and health. Biweekly blood samples were taken from 10 randomly selected fish in each tank (30 samples per treatment) to evaluate the duration of enhanced immunity conferred by Fin-Immune™. The immunological assessment included serum white blood cell (lymphocyte, neutrophil) densities and blood hematocrit (packed cell volume %). Of these three variables, only lymphocyte density increased significantly among trout fed Fin-Immune™ at 20 and 30 mg g-1, peaking at week 6. At week 7, all trout were switched to regular feed (lacking Fin-Immune™), and by week 10 lymphocyte levels had decreased in all treatments but were still greater than at week 0. There was growth impairment at the highest dose of Fin-Immune™ tested (30 mg g-1), which can be associated with a physiological compensatory mechanism due to a dose-specific threshold level. Thus, the main objective of this Phase I study was achieved: the 20 mg g-1 dose of Fin-Immune™ should be the most efficacious (of those we tested) to use for a Phase II disease challenge trial.

Keywords: Blood Lymphocyte, Neutrophil Response of Cultured Rainbow Trout, Oncorhynchus mykiss, Oral Immunomodulator – 'Fin-ImmuneTM'.

PDF Downloads 1481
118 Co-Administration Effects of Conjugated Linoleic Acid and L-Carnitine on Weight Gain and Biochemical Profile in Diet Induced Obese Rats

Authors: Maryam Nazari, Majid Karandish, Alihossein Saberi

Abstract:

Obesity, as a global health challenge, motivates pharmaceutical industries to produce anti-obesity drugs. However, the effectiveness of these agents remains unclear. Because of the popularity of dietary supplements, the aim of this study was to investigate the effects of Conjugated Linoleic Acid (CLA) and L-carnitine (LC) on serum glucose, triglyceride, cholesterol and weight changes in diet-induced obese rats. 48 male Wistar rats were randomly divided into two groups: normal fat diet (NFD, n=8) and high fat diet (HFD, n=32). After eight weeks, the second group, which was maintained on the HFD until the end of the study, was subdivided into four categories: a) 500 mg corn oil (control group), b) 500 mg CLA, c) 200 mg LC, d) 500 mg CLA + 200 mg LC. All doses were given per kg body weight and administered by oral gavage for four weeks. Body weights were measured and recorded weekly by means of a digital scale. At the end of the study, blood samples were collected for the measurement of biochemical markers. SPSS Version 16 was used for statistical analysis. At the end of the 8th week, a significant difference in weight was observed between the HFD and NFD groups. After 12 weeks, LC significantly reduced weight gain by 4.2%. The trend of weight gain in the CLA and CLA+LC groups decelerated, but not significantly. CLA+LC reduced the triglyceride level significantly, but only CLA had a significant influence on total cholesterol and a non-significant decreasing effect on fasting blood sugar (FBS). Our results showed that an obesogenic diet over a relatively short time led to obesity and dyslipidemia, which can be modified by LC and CLA to some extent.

Keywords: Conjugated linoleic acid, high fat diet, L-carnitine, obesity.

PDF Downloads 856
117 Mix Proportioning and Strength Prediction of High Performance Concrete Including Waste Using Artificial Neural Network

Authors: D. G. Badagha, C. D. Modhera, S. A. Vasanwala

Abstract:

It is a great challenge for the civil engineering field to contribute to environmental protection by finding alternatives to cement and natural aggregates. Cement utilization in concrete contributes to global warming, so it is necessary to provide a sustainable solution by producing concrete containing waste. It is very difficult to produce a designated grade of concrete containing different ingredients and water-cement ratios, including waste, that achieves the desired fresh and hardened properties as per requirements and specifications. To achieve the desired grade of concrete, a number of trials have to be carried out, and only after evaluating the different parameters of long-term performance can the concrete be finalized for different purposes. This research work is carried out to solve the problems of time, cost and serviceability in the field of construction. In this research work, an artificial neural network is introduced to fix the proportions of concrete ingredients with 50% waste replacement for M20, M25, M30, M35, M40, M45, M50, M55 and M60 grades of concrete. Using the neural network, the mix design of high performance concrete was finalized, and the main basic mechanical properties were predicted at 3 days, 7 days and 28 days. The predicted strength was compared with the actual experimental mix design and concrete cube strength after 3 days, 7 days and 28 days. This experimentally validated, neural network-based mix design can be used practically in the field to give cost-effective, time-saving, feasible and sustainable high performance concrete for different types of structures.
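
To make the approach concrete, the following sketch trains a small feed-forward network that maps mix proportions (including a waste-replacement fraction and curing age) to compressive strength. It is not the authors' network; the feature set and the synthetic training data are hypothetical placeholders.

```python
# Illustrative sketch: ANN regression from mix proportions to compressive strength.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# columns: cement, water, fine aggregate, coarse aggregate, waste fraction, age (days)
X = rng.uniform([300, 140, 600, 1000, 0.0, 3],
                [550, 200, 800, 1200, 0.5, 28], size=(400, 6))
# crude synthetic strength surrogate: rises with cement content and age, falls with w/c ratio
y = 0.08 * X[:, 0] - 120 * (X[:, 1] / X[:, 0]) + 0.6 * X[:, 5] + rng.normal(0, 2, 400)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                                   random_state=0))
model.fit(X, y)

trial_mix = np.array([[420, 170, 700, 1100, 0.5, 28]])   # 50% waste replacement, 28 days
print("predicted 28-day strength (MPa):", model.predict(trial_mix)[0])
```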

Keywords: Artificial neural network, ANN, high performance concrete, rebound hammer, strength prediction.

PDF Downloads 1158
116 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration

Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith

Abstract:

Multimodal image registration is a profoundly complex task, which is why deep learning has been widely used to address it in recent years. However, two main challenges remain: firstly, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB and G_BA and two discrimination networks D_A and D_B connected by spatial transformation layers. G_AB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator D_B, which is learning to judge the quality of alignment of the registered image B. G_BA and D_A learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, resulting in images Â and B̂, which were registered to B̃ and Ã, which in turn were registered to the initial image pair A and B. Thus, the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning-based and non-learning-based registration algorithms. Our approach leads to Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms like SimpleElastix and VoxelMorph.
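
The cycle-consistency idea can be read end to end from the minimal NumPy sketch below: each registration network is applied twice so that the reconstructed image can be compared with the original of the same modality. This is not the NANCY implementation; the "registration networks" and the warp are trivial stand-ins so that only the loss structure is shown.

```python
# Illustrative sketch: cycle-consistency loss structure with placeholder registration networks.
import numpy as np

def warp(image, deformation):
    # Placeholder spatial transformation layer: here simply a shifted copy of the image.
    return np.roll(image, shift=deformation, axis=(0, 1))

def G_AB(moving_b, fixed_a):   # stand-in for the B -> A registration network
    return (1, 0)              # "deformation field" summarised as an integer shift

def G_BA(moving_a, fixed_b):   # stand-in for the A -> B registration network
    return (-1, 0)

A = np.random.rand(64, 64)     # modality A image (e.g. a CT slice)
B = np.random.rand(64, 64)     # modality B image (e.g. an MR slice)

# Forward registrations: B registered to A, and A registered to B.
B_tilde = warp(B, G_AB(B, A))
A_tilde = warp(A, G_BA(A, B))

# Apply each registration network a second time to return to the starting modality space.
A_hat = warp(A_tilde, G_AB(A_tilde, A))
B_hat = warp(B_tilde, G_BA(B_tilde, B))

# Cycle-consistency loss: images of the same modality can be compared directly (L1 here).
cycle_loss = np.mean(np.abs(A - A_hat)) + np.mean(np.abs(B - B_hat))
print("cycle-consistency loss:", cycle_loss)
```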

Keywords: Multimodal image registration, GAN, cycle consistency, deep learning.

PDF Downloads 748
115 Economic Assessment of Green House for Cultivation of Float Based Seedling Production in India

Authors: Srinath Ramakkrushnan, Aswathaman Vijayan

Abstract:

In conventional seedling production, seedlings are grown in the open field under natural conditions, where they are susceptible to sudden changes in climate and their quality and yield are affected. Quality seedlings are essential for good growth and performance of crops in the main field; they serve as a foundation for the economic returns to the farmer. Producing quality seedlings demands the use of hybrid seeds, as they have the ability to result in better yield, greater uniformity, improved color, disease resistance, and so forth. Hybrid seed production poses a major operational challenge, and its seed use efficiency plays an important role. Thus, in order to overcome the difficulties currently present in conventional seedling production and to use hybrid seeds efficiently, the Sustainability Cell of ITC Limited's Agri Business Division has conceptualized a novel seedling production unit for farmers in the West Godavari District of Andhra Pradesh. The "Green House based Float Seedling" methodology is a protected cultivation technique wherein the microclimate surrounding the plant/seedling body is controlled partially or fully as per the requirement of the species. This paper reports on the techno-economic evaluation of a greenhouse for the cultivation of float-based seedling production, with experimental results obtained from the pilot implementation in the West Godavari District, Rajahmundry region of India.

Keywords: Economic Assessment, Float Seedling, Green House, ITC Limited, Payback period.

PDF Downloads 4166
114 Modeling of PZ in Haunch Connections Systems

Authors: Peyman Shadman Heidari, Roohollah Ahmady Jazany, Mahmood Reza Mehran, Pouya Shadman Heidari, Mohammad khorasani

Abstract:

Modeling the seismic behavior of the panel zone (PZ), because of its role in the overall ductility and lateral stiffness of steel moment frames, has been considered a challenge for years. There are some studies regarding the effects of different doubler plate thicknesses and geometric properties of the PZ on its seismic behavior. However, there is not much investigation of the effects of the number of continuity plates provided in the presence of one triangular haunch, two triangular haunches, or a rectangular haunch (T-shaped haunch) for exterior columns. In this research, detailed finite element models of 12 tested connections of the SAC joint venture were first created and analyzed; the cyclic-behavior backbone curves obtained from these models, along with other FE models for similar tests, were then used for neural network training. The seismic behavior of these data is then categorized according to continuity plate arrangements and differences in haunch type. PZs with one-sided haunches show little plastic rotation. As the number of continuity plates increases due to the presence of two triangular haunches (four continuity plates), there is no plastic rotation; in other words, the PZ behaves in its elastic range. In the case of a rectangular haunch, the PZ shows more plastic rotation in comparison with a one-sided triangular haunch and especially double-sided triangular haunches. Moreover, the models presented in this study for one-sided and double-sided triangular haunches and rectangular haunches seem to provide a proper estimation of PZ seismic behavior.

Keywords: Continuity plate, FE models, Neural network, Panel zone, Plastic rotation, Rectangular haunch, Seismic behavior

PDF Downloads 1967
113 A Programming Assessment Software Artefact Enhanced with the Help of Learners

Authors: Romeo A. Botes, Imelda Smit

Abstract:

The demands of an ever changing and complex higher education environment, along with the profile of modern learners challenge current approaches to assessment and feedback. More learners enter the education system every year. The younger generation expects immediate feedback. At the same time, feedback should be meaningful. The assessment of practical activities in programming poses a particular problem, since both lecturers and learners in the information and computer science discipline acknowledge that paper-based assessment for programming subjects lacks meaningful real-life testing. At the same time, feedback lacks promptness, consistency, comprehensiveness and individualisation. Most of these aspects may be addressed by modern, technology-assisted assessment. The focus of this paper is the continuous development of an artefact that is used to assist the lecturer in the assessment and feedback of practical programming activities in a senior database programming class. The artefact was developed using three Design Science Research cycles. The first implementation allowed one programming activity submission per assessment intervention. This pilot provided valuable insight into the obstacles regarding the implementation of this type of assessment tool. A second implementation improved the initial version to allow multiple programming activity submissions per assessment. The focus of this version is on providing scaffold feedback to the learner – allowing improvement with each subsequent submission. It also has a built-in capability to provide the lecturer with information regarding the key problem areas of each assessment intervention.

Keywords: Programming, computer-aided assessment, technology-assisted assessment, programming assessment software, design science research, mixed-method.

PDF Downloads 955
112 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods are unable to indicate the continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period of time, assuming there is no considerable change during that period, and then compares it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for estimating pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect a changed pixel from the probabilistically estimated model of the corresponding pixel. The changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. The first and foremost challenge is to obtain a sufficiently large number of datasets for multivariate distribution modelling; a large number of images are always discarded due to cloud coverage. Due to imperfect modelling there will be a high probability of false alarm. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
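
The following sketch shows the core of such a per-pixel test under a multivariate Gaussian model: the mean and covariance are estimated from the "no-change" time series, and a later observation is flagged when its squared Mahalanobis distance (the GLRT statistic in this Gaussian case) exceeds a chi-square threshold. The data are synthetic placeholders for co-registered Sentinel-2/Landsat-8 pixel stacks, and the 8-neighborhood smoothing mentioned in the abstract is omitted for brevity.

```python
# Illustrative sketch: pixel-wise Gaussian GLRT change detection on a synthetic stack.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
T, H, W, BANDS = 24, 50, 50, 4                            # 24 cloud-free acquisitions
stack = rng.normal(0.2, 0.02, size=(T, H, W, BANDS))      # "no-change" training period
new_image = rng.normal(0.2, 0.02, size=(H, W, BANDS))
new_image[20:30, 20:30] += 0.15                           # simulated new infrastructure

alpha = 1e-3
threshold = chi2.ppf(1 - alpha, df=BANDS)

change_map = np.zeros((H, W), dtype=bool)
for i in range(H):
    for j in range(W):
        samples = stack[:, i, j, :]                        # (T, BANDS) history of this pixel
        mu = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False) + 1e-9 * np.eye(BANDS)
        d = new_image[i, j] - mu
        stat = d @ np.linalg.solve(cov, d)                 # squared Mahalanobis distance
        change_map[i, j] = stat > threshold

print("changed pixels detected:", int(change_map.sum()))
```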

Keywords: Co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection.

PDF Downloads 664
111 Conceptual Synthesis of Multi-Source Renewable Energy Based Microgrid

Authors: Bakari M. M. Mwinyiwiwa, Mighanda J. Manyahi, Nicodemu Gregory, Alex L. Kyaruzi

Abstract:

Microgrids are increasingly being considered to provide electricity for the expanding energy demand in the grid distribution network and in grid-isolated areas. However, the technical challenges associated with their operation and control are immense. Management of dynamic power balances, power flow, and network voltage profiles imposes unique challenges in the context of microgrids. Stability of the microgrid during both grid-connected and islanded modes is considered the major challenge during its operation. The traditional control methods that have been employed are based on the assumption of linear loads. For instance, PQ, voltage, and frequency control through decoupled PQ are very useful when considering linear loads, but they fall short when considering nonlinear loads. The deficiency of traditional microgrid control methods suggests that more research on the control of microgrids should be done. This research aims at introducing the dq technique into decoupled PQ for dynamic load demand control in an inverter-interfaced DG system operating as an isolated LV microgrid. Decoupled PQ in an exact mathematical formulation in the dq frame is expected to accommodate all variations of the line parameters (resistance and inductance), to relinquish the forced relationship between DG variables such as power, voltage and frequency in LV microgrids, and to allow for individual parameter control (frequency and line voltages). This concept is expected to achieve accurate control and to improve microgrid stability and power quality at all load conditions.
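
The sketch below shows the starting point of any dq-frame formulation: the Park (abc to dq) transform and the decoupled P/Q expressions it enables when the voltage vector is aligned with the d-axis. It is not the authors' controller; the amplitudes, phase angle and current lag are hypothetical placeholders.

```python
# Illustrative sketch: amplitude-invariant Park transform and decoupled P/Q calculation.
import numpy as np

def abc_to_dq(xa, xb, xc, theta):
    """Amplitude-invariant Park transform of a three-phase quantity at grid angle theta."""
    d = (2 / 3) * (xa * np.cos(theta)
                   + xb * np.cos(theta - 2 * np.pi / 3)
                   + xc * np.cos(theta + 2 * np.pi / 3))
    q = -(2 / 3) * (xa * np.sin(theta)
                    + xb * np.sin(theta - 2 * np.pi / 3)
                    + xc * np.sin(theta + 2 * np.pi / 3))
    return d, q

theta = 0.3                                    # grid angle, e.g. from a PLL (placeholder)
Vm, Im, phi = 325.0, 10.0, np.pi / 6           # peak voltage/current and current lag (placeholders)

va, vb, vc = (Vm * np.cos(theta + k) for k in (0, -2 * np.pi / 3, 2 * np.pi / 3))
ia, ib, ic = (Im * np.cos(theta - phi + k) for k in (0, -2 * np.pi / 3, 2 * np.pi / 3))

vd, vq = abc_to_dq(va, vb, vc, theta)
id_, iq = abc_to_dq(ia, ib, ic, theta)

# With the voltage vector aligned to the d-axis (vq ~ 0), P and Q decouple:
P = 1.5 * (vd * id_ + vq * iq)
Q = 1.5 * (vq * id_ - vd * iq)
print(f"vd={vd:.1f} V, vq={vq:.1f} V, P={P:.0f} W, Q={Q:.0f} var")
```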

Keywords: Decoupled PQ, microgrid, multisource, renewable energy, dq control.

PDF Downloads 2502
110 Assessment of Drought Tolerance Maize Hybrids at Grain Growth Stage in Mediterranean Area

Authors: Ayman El Sabagh, Celaleddin Barutçular, Hirofumi Saneoka

Abstract:

Drought is one of the most serious problems posing a grave threat to cereal production, including maize. Maize improvement in drought-stress tolerance poses a great challenge as the global need for food and bio-energy increases. Thus, the current study was planned to explore the variations in, and determine the performance of, target traits of maize hybrids at the grain growth stage under drought conditions during 2014 in Adana, Turkey, under Mediterranean climate conditions. Maize hybrids (Sancia, Indaco, 71May69, Aaccel, Calgary, 70May82 and 72May80) were evaluated under two regimes (irrigated and water stress). Results revealed that grain yield and yield traits were negatively affected by water stress conditions compared with normal irrigation, and the maximum biological yield and harvest index were recorded under normal irrigation. Significant differences were observed among hybrids with respect to yield and yield traits in the current research. Based on the results, grain weight had a greater effect on grain yield than grain number during the grain-filling growth stage under water stress conditions. In this regard, owing to its low drought susceptibility index (smaller grain yield losses), the hybrid Indaco was more stable in grain number and grain weight. Consequently, it may be concluded that this hybrid could be recommended for use in future breeding programs for the production of drought-tolerant hybrids.

Keywords: Drought susceptibility index, grain filling, grain yield, maize, water stress.

PDF Downloads 2410
109 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially those that are longitudinal, is the problem of missing measurements which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's Disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN) which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of using the eXtreme Gradient Boosting (XGBoost) algorithm in handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes of AD, NC, EMCI, and LMCI. Using 10-fold cross validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and recall of 80.51%, supporting the more natural and promising multiclass classification.
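
Because XGBoost routes missing values natively, no imputation step is needed before the multiclass classification described above. The sketch below shows that workflow with 10-fold cross-validation on synthetic data with roughly 28% of entries removed; the hyperparameters and data are placeholders, not the authors' configuration on ADNI.

```python
# Illustrative sketch: multiclass XGBoost on a feature matrix containing NaNs, 10-fold CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1600, n_features=60, n_informative=20,
                           n_classes=4, n_clusters_per_class=2, random_state=0)

# Knock out roughly 28% of the entries to mimic missing measurements.
rng = np.random.default_rng(0)
mask = rng.random(X.shape) < 0.28
X = X.copy()
X[mask] = np.nan

clf = XGBClassifier(objective="multi:softprob", n_estimators=300, max_depth=4,
                    learning_rate=0.1, eval_metric="mlogloss")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```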

Keywords: eXtreme Gradient Boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest.

PDF Downloads 902
108 Infection in the Sentence: The Castration of a Black Woman's Dream of Authorship as Manifested in Buchi Emecheta's Second Class Citizen

Authors: Aseel Hatif Jassam, Hadeel Hatif Jassam

Abstract:

The paper discusses the phallocentric discourse that is challenged by women in general and women of color in particular, in spite of the simultaneity of oppression due to race, class, and gender in the diaspora. Therefore, the paper gives a brief account of women's experience in the light of postcolonial feminist theory. The paper also casts light on the theories of Luce Irigaray and Hélène Cixous, two feminist theorists who support and advise women to have their own discourse to challenge the infectious patriarchal sentence advocated by Sigmund Freud and Harold Bloom's model of literary history. Black women authors like Buchi Emecheta, as well as her alter ego Adah, a Nigerian-born girl and the protagonist of her semi-autobiographical novel Second Class Citizen, suffer from this phallocentric and oppressive sentence and from displacement as they migrate from Nigeria, a former British colony where they feel marginalized, to North London with the hope of realizing their dreams. Yet in the British diaspora they experience culture shock and continue to suffer further marginalization due to class and race, and are insulted and, ironically, inferiorized by their patriarchal husbands, who try to put an end to their dreams of authorship. With the phallocentric belief that women are incapable of self-representation in the background of their mindsets, the violent Sylvester Onwordi and Francis Obi, the husbands of Emecheta and Adah respectively, oppress them by burning their authoritative voices, represented by the novels the women write while struggling with their economically atrocious living conditions in the British diaspora.

Keywords: Authorship, British diaspora, discourse, phallocentric, patriarchy.

PDF Downloads 297
107 Quality of Life Assessment across the Cancer Continuum: Understanding the Role of an Exercise Rehabilitation Programme

Authors: Bernat-Carles Serdà Ferrer, Arantza Del Valle Gómez

Abstract:

The Quality of Life (QoL) paradigm is multidimensional, dynamic and modular, and its definition differs across the cancer continuum. The challenge in the interpretation of QoL data in clinical research is that QoL is influenced by psychological phenomena such as adaptation to illness. This research aims to obtain a valid and sensitive assessment of QoL change over the disease continuum, and to evaluate a rehabilitation programme aimed at reversing the decrease in QoL observed when patients return to daily living activities. The sample comprised 66 men. Patients were first assessed to establish a baseline (P1-diagnosis). This was followed by a post-test (P2-discharge) and a then-test measurement (P3-retrospective evaluation), and after returning home patients were randomized into experimental and control groups. The experimental group attended a rehabilitation programme over 24 weeks (P4). Results show that QoL decreased significantly from baseline to post-test. The recalibration then-test confirmed a low QoL in all periods evaluated. Significant differences between the experimental and control groups prove the positive effect of the Exercise Rehabilitation Programme (ERP) on QoL. Understanding the real dynamics of QoL over time would help to adapt rehabilitation programmes by improving sensitivity and efficacy, and would provide professionals with a more accurate perception of the impact of treatment and side effects on patients' QoL. Our results underline the importance of changing the approach adopted by health professionals towards one of watchful waiting on patients' QoL until their complete recovery in daily life.

Keywords: Prostate cancer, quality of life, rehabilitation programme, response shift.

PDF Downloads 1091
106 Supplier Selection Using Sustainable Criteria in Sustainable Supply Chain Management

Authors: Richa Grover, Rahul Grover, V. Balaji Rao, Kavish Kejriwal

Abstract:

The selection of suppliers is a crucial problem in supply chain management. On top of that, sustainable supplier selection is the biggest challenge for organizations. Environmental protection and social problems have been of concern to society in recent years, and traditional supplier selection does not consider these factors; therefore, this research work focuses on introducing sustainable criteria into the structure of supplier selection criteria. Sustainable Supply Chain Management (SSCM) is the management and administration of material, information, and money flows, as well as coordination among businesses along the supply chain. All three dimensions of sustainable development - economic, environmental, and social - need to be taken care of. The purpose of this research is to maximize supply chain profitability, maximize the social wellbeing of the supply chain and minimize environmental impacts. The problem statement is the selection of suppliers in a sustainable supply chain network by ranking them against the identified sustainable criteria. The aim of this research is twofold: to find out what sustainable parameters can be applied to the supply chain, and to determine how these parameters can effectively be used in supplier selection. Multi-criteria decision making (MCDM) tools will be used to rank both criteria and suppliers. AHP analysis, a technique used for efficient decision making, will be used to find the ratings for the identified criteria. TOPSIS will be used to rate the suppliers and then rank them. TOPSIS is an MCDM problem-solving method based on the principle that the chosen option should have the maximum distance from the negative ideal solution (NIS) and the minimum distance from the ideal solution.
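
The following sketch shows how a TOPSIS ranking of suppliers could look once criterion weights are available (here assumed to come from an AHP priority vector). It is not the authors' study; the decision matrix, weights and criterion directions are hypothetical placeholders.

```python
# Illustrative sketch: TOPSIS ranking of suppliers against sustainable criteria.
import numpy as np

# rows = suppliers, columns = (cost, CO2 emissions, social score, quality)
decision = np.array([
    [620.0, 3.1, 7.5, 0.92],
    [580.0, 4.0, 6.8, 0.88],
    [700.0, 2.4, 8.2, 0.95],
])
weights = np.array([0.35, 0.25, 0.20, 0.20])      # e.g. an AHP priority vector (placeholder)
benefit = np.array([False, False, True, True])    # cost and emissions are "smaller is better"

# 1) Vector-normalise and weight the decision matrix.
norm = decision / np.linalg.norm(decision, axis=0)
v = norm * weights

# 2) Ideal and negative-ideal solutions per criterion direction.
ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
nadir = np.where(benefit, v.min(axis=0), v.max(axis=0))

# 3) Distances to both and relative closeness (higher = better supplier).
d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - nadir, axis=1)
closeness = d_minus / (d_plus + d_minus)

for rank, idx in enumerate(np.argsort(-closeness), start=1):
    print(f"rank {rank}: supplier {idx + 1} (closeness {closeness[idx]:.3f})")
```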

Keywords: Sustainable supply chain management, supplier selection, MCDM tools, AHP analysis, TOPSIS method.

PDF Downloads 3435
105 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, the size of the networks has become increasingly large due to the demands of practical applications, which poses a significant challenge to constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for the hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators for different devices and network models are reviewed and compared with Graphics Processing Unit (GPU), Application-Specific Integrated Circuit (ASIC) and Digital Signal Processor (DSP) versions to present our own critical analysis and comments. Finally, we discuss different perspectives on these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research, and we give an outlook on the future development of FPGA-based accelerators.

Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.

PDF Downloads 829
104 Possible Number of Dwelling Units Using Waste Plastic Bottle for Construction

Authors: Dibya Jivan Pati, Kazuhisa Iki, Riken Homma

Abstract:

Unlike other metro cities of India, Bhubaneswar, the capital city of Odisha, is expected to have reached the one-million population mark by now. The demand for dwelling units, mostly among the urban poor belonging to the Economically Weaker Section (EWS) and Low Income Group (LIG), is becoming a challenge due to high housing costs and rents. It is also noted that, with the increase in population, solid waste generation increases as well, affecting the environment due to inefficient waste collection by local government bodies. Methods of utilizing solid waste, especially in the form of plastic bottles, glass bottles and metal cans (PGM), are now widely used as alternative materials for the construction of low-cost buildings by Non-Governmental Organizations (NGOs) in developing countries like India to help the urban poor afford a shelter. The use of discarded plastic bottles in the construction of a single dwelling significantly reduces the overall cost of construction, by as much as 14% compared to traditional construction materials. Therefore, considering this cost-benefit result, it is possible to provide housing to EWS and LIG households at an affordable price. In this paper, we estimated the quantity of plastic bottles generated in Bhubaneswar, which further helped to estimate the possible number of single dwelling units that can be constructed on a yearly basis so as to relieve further housing shortage. The estimation results can be used practically for planning and managing low-cost housing programmes by the local government and NGOs.
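
A back-of-the-envelope version of the kind of estimate described in the abstract is sketched below. The paper's actual figures are not reproduced here; every number in the sketch is a hypothetical placeholder used only to show the arithmetic.

```python
# Illustrative sketch: estimating yearly bottle-wall dwelling units from city bottle waste.
population = 1_000_000                    # assumed city population
bottles_per_person_per_year = 50          # assumed disposal rate (placeholder)
collection_efficiency = 0.60              # share actually recovered by collection (placeholder)
bottles_per_dwelling_unit = 16_000        # assumed requirement for one EWS/LIG unit (placeholder)

usable_bottles = population * bottles_per_person_per_year * collection_efficiency
units_per_year = usable_bottles // bottles_per_dwelling_unit
print(f"possible dwelling units per year: {int(units_per_year)}")
```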

Keywords: Construction, dwelling unit, plastic bottle, solid waste generation, groups.

PDF Downloads 1030
103 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four kinds of Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four kinds of Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves the disease classification accuracy with a clear margin.
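
The feature-enrichment step (concatenating a lexical-semantic vector with each token's word2vec embedding before the convolutional layers) can be illustrated with the toy sketch below. It is not the authors' LSV-SDG-CNN; the vocabularies, dimensions and random vectors are hypothetical placeholders.

```python
# Illustrative sketch: building a CNN input matrix from word2vec embeddings concatenated with LSVs.
import numpy as np

EMB_DIM, LSV_DIM = 100, 20
rng = np.random.default_rng(0)

word2vec = {w: rng.normal(size=EMB_DIM) for w in ["fever", "cough", "three", "days"]}
lexical_semantic = {                       # LSVs attached to recognised medical terms
    "fever": rng.normal(size=LSV_DIM),
    "cough": rng.normal(size=LSV_DIM),
}
zero_lsv = np.zeros(LSV_DIM)               # non-medical tokens get an all-zero LSV

def encode(tokens):
    rows = [np.concatenate([word2vec[t], lexical_semantic.get(t, zero_lsv)])
            for t in tokens]
    return np.stack(rows)                  # shape: (sentence length, EMB_DIM + LSV_DIM)

sentence = ["fever", "cough", "three", "days"]
matrix = encode(sentence)
print(matrix.shape)                        # (4, 120) -> input to the convolutional layers
```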

Keywords: lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record

PDF Downloads 525
102 Social Security Reform and Management: The Case of Three Member Territories of the Organisation of Eastern Caribbean States

Authors: Cleopatra Gittens

Abstract:

It has been recognized that some social security and national insurance systems in the Eastern Caribbean are experiencing ageing populations and economic and other crises that will present a financial challenge of being unable to pay pension benefits in fifteen to twenty years. This has implications for the fiscal and economic positions of the countries themselves. Hence, organizations would need to address the issue urgently. The study adds to the body of knowledge on social security systems and social security reforms in Small Island Developing States (SIDS). It also makes recommendations for the types of reforms that social security systems in other SIDS can implement given their special circumstances. Secondary research is used to gather financial and other related information on three social security schemes in the Eastern Caribbean. Actuarial and financial reports and other documents of the social security systems are analysed to obtain financial and static data on each of the schemes. The findings show that the three schemes studied are experiencing steady increases in benefit expenditure versus contributions and increasing pensioner to insured ratios. The schemes will deplete their reserves between 2038 and 2050. Two of the schemes have increased their retirement age while the other has not embarked on any reforms. One scheme has made changes to its contribution percentages. Due to their small size, small populations and other unique circumstances, the social security schemes in the identified territories are not likely to be able to take advantage of all of the reform initiatives that the developed world embarked on when faced with similar problems. These schemes will need to make incremental changes that align with the timeframes recommended by the actuarial studies.

Keywords: Pension benefits, pension, Small Island Developing States, Social Security Reform.

PDF Downloads 91
101 Markov Chain Based QoS Support for Wireless Body Area Network Communication in Health Monitoring Services

Authors: R. A. Isabel, E. Baburaj

Abstract:

Wireless Body Area Networks (WBANs) are essential for the real-time health monitoring of patients and in the diagnosis of many diseases. WBANs comprise many sensors that monitor a large range of ambient conditions. Quality of Service (QoS) is a key challenge in WBANs, because the different state information of the neighboring nodes has to be monitored in an accurate manner. However, energy consumption increases while predicting and maintaining the exact information in highly dynamic environments. In order to reduce energy consumption and end-to-end delay, a Markov Chain Based Quality of Service Support (MC-QoSS) method is designed for the health monitoring services of WBAN communication. Energy consumption is reduced by forming a Markov chain of high-energy nodes in the sensor network communication path. Low-energy sensor nodes are removed using the transition probability in order to reduce end-to-end delay, and high-energy nodes form the chain structure of the corresponding path to enhance communication. After choosing the communication path through high-energy nodes, packets are sent from the source node to the sink node with a higher packet delivery ratio. The simulation results show that the MC-QoSS method improves the packet delivery ratio and reduces energy consumption with minimum end-to-end delay, compared to existing methods.
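
One simple way to read the chain-building idea described above is sketched below: forwarding is treated as a Markov chain whose transition probabilities are proportional to the residual energy of candidate next hops, low-probability (low-energy) hops are pruned, and the path with the highest probability product is kept. This is only an interpretation for illustration, not the MC-QoSS protocol; the topology, energies and threshold are hypothetical placeholders.

```python
# Illustrative sketch: energy-weighted Markov-chain path selection toward the sink.
energy = {"S": 1.0, "A": 0.9, "B": 0.2, "C": 0.7, "SINK": 1.0}   # residual energy (placeholder)
links = {"S": ["A", "B"], "A": ["C", "SINK"], "B": ["SINK"], "C": ["SINK"], "SINK": []}
MIN_TRANSITION_PROB = 0.3                  # prune low-energy next hops (placeholder)

def transition_probs(node):
    """Transitions from `node`, proportional to the residual energy of each next hop."""
    nbrs = links[node]
    total = sum(energy[n] for n in nbrs)
    probs = {n: energy[n] / total for n in nbrs}
    return {n: p for n, p in probs.items() if p >= MIN_TRANSITION_PROB}

def best_path(node, prob=1.0, path=None):
    """Depth-first search for the path to the sink with the largest probability product."""
    path = (path or []) + [node]
    if node == "SINK":
        return prob, path
    best = (0.0, [])
    for nbr, p in transition_probs(node).items():
        best = max(best, best_path(nbr, prob * p, path), key=lambda x: x[0])
    return best

prob, path = best_path("S")
print("selected high-energy path:", " -> ".join(path), f"(probability product {prob:.2f})")
```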

Keywords: Wireless body area networks, quality of service, Markov chain, health monitoring services.

PDF Downloads 1395
100 Online Information Seeking: A Review of the Literature in the Health Domain

Authors: Sharifah Sumayyah Engku Alwi, Masrah Azrifah Azmi Murad

Abstract:

The development of information technology and the Internet has been transforming the healthcare industry. The Internet is continuously accessed to seek health information, and there is a variety of sources, including search engines, health websites, and social networking sites. Providing more and better information on health may empower individuals; however, ensuring high-quality and trusted health information can pose a challenge. Moreover, there is an ever-increasing amount of information available, but it is not necessarily accurate and up to date. Thus, this paper aims to provide an insight into the models and frameworks related to the online health information seeking of consumers. It begins by exploring the definitions of information behavior and information seeking to provide a better understanding of the concept of information seeking. In this study, critical factors such as performance expectancy, effort expectancy, and social influence will be studied in relation to the value of seeking health information. It also aims to analyze the effect of age, gender, and health status as moderators on the factors that influence online health information seeking, i.e. trust and information quality. A preliminary survey will be carried out among health professionals to clarify the research problems which exist in the real world, at the same time producing a conceptual framework. A final survey will be distributed in five states of Malaysia to solicit feedback on the framework. Data will be analyzed using the SPSS and SmartPLS 3.0 analysis tools. It is hoped that, at the end of this study, a novel framework that can improve online health information seeking will be developed. Finally, this paper concludes with some suggestions on the models and frameworks that could improve online health information seeking.

Keywords: Information behavior, information seeking, online health information, technology acceptance model, the theory of planned behavior, UTAUT.

PDF Downloads 1624
99 A Challenge to Acquire Serious Victims’ Locations during Acute Period of Giant Disasters

Authors: Keiko Shimazu, Yasuhiro Maida, Tetsuya Sugata, Daisuke Tamakoshi, Kenji Makabe, Haruki Suzuki

Abstract:

In this paper, we report how to acquire the locations of seriously injured victims in the acute stage of large-scale disasters, using an Emergency Information Network System designed by us. The background of our concept is the Great East Japan Earthquake that occurred on March 11, 2011. Through many experiences of national crises caused by earthquakes and tsunamis, we have established advanced communication systems and advanced disaster medical response systems. However, Japan was devastated by huge tsunamis that swept a vast area of Tohoku, causing a complete breakdown of all infrastructure, including telecommunications. Therefore, we noticed that we need interdisciplinary collaboration between experts in disaster medicine, regional administrative sociology, satellite communication technology, and systems engineering. Communication of emergency information was limited, causing a serious delay in the initial rescue and medical operations. For emergency rescue and medical operations, the most important thing is to identify the number of casualties, their locations and status, and to dispatch doctors and rescue workers from multiple organizations. In the case of the Tohoku earthquake, no dispatching mechanism or decision support system existed to allocate the appropriate number of doctors and locate disaster victims. Even though the doctors and rescue workers from multiple government organizations have their own dedicated communication systems, the systems are not interoperable.

Keywords: Crisis management, disaster mitigation, messing, MGRS, Satellite communication system.

PDF Downloads 784
98 Hybrid Methods for Optimisation of Weights in Spatial Multi-Criteria Evaluation Decision for Fire Risk and Hazard

Authors: I. Yakubu, D. Mireku-Gyimah, D. Asafo-Adjei

Abstract:

The challenge for everyone involved in preserving the ecosystem is to find creative ways to protect and restore the remaining ecosystems while accommodating and enhancing the country's social and economic well-being. Frequent fires of anthropogenic origin have been affecting the ecosystems in many countries adversely. Hence, adopting decision-making approaches such as Multi-Criteria Decision Making (MCDM) is appropriate, since it will enhance the evaluation and analysis of the fire risk and hazard of the ecosystem. In this paper, fire risk and hazard data from the West Gonja area of Ghana were used in several methods (Analytical Hierarchy Process (AHP), Compromise Programming (CP), and Grey Relational Analysis (GRA)) for MCDM evaluation and analysis to determine the optimal weighting method for fire risk and hazard. Ranking of the land cover types was carried out using fire hazard, fire-fighting capacity and response risk criteria. Pairwise comparison under AHP was used to determine the weights of the various criteria, and weights for the sub-criteria were also obtained by the pairwise comparison method. The results were optimised using GRA and CP. The results from each method, hybrid GRA and hybrid CP, were compared, and it was established that all methods were satisfactory in terms of weight optimisation. The most optimal method for spatial multi-criteria evaluation was the hybrid GRA method. Thus, a hybrid AHP and GRA method is a more effective method for ranking alternatives in MCDM than the hybrid AHP and CP method.
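
The sketch below shows the two building blocks named in the abstract at a toy scale: criterion weights derived from an AHP pairwise comparison matrix (geometric-mean approximation of the principal eigenvector) and a weighted grey relational grade for each land-cover alternative. The pairwise judgements and alternative scores are hypothetical placeholders, not the paper's data.

```python
# Illustrative sketch: AHP weight derivation followed by grey relational analysis.
import numpy as np

criteria = ["fire hazard", "fire-fighting capacity", "response risk"]
# AHP pairwise comparison matrix (Saaty 1-9 scale, reciprocal; hypothetical judgements).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
geo_mean = np.prod(A, axis=1) ** (1.0 / A.shape[1])
weights = geo_mean / geo_mean.sum()
print(dict(zip(criteria, np.round(weights, 3))))

# Grey relational analysis over normalised alternative scores (rows = land-cover types).
scores = np.array([
    [0.90, 0.30, 0.80],     # e.g. savanna woodland (placeholder)
    [0.40, 0.70, 0.50],     # e.g. riparian forest (placeholder)
    [0.60, 0.50, 0.60],     # e.g. farmland mosaic (placeholder)
])
reference = scores.max(axis=0)                       # ideal reference sequence
delta = np.abs(scores - reference)
rho = 0.5                                            # distinguishing coefficient
coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grey_grade = coeff @ weights                         # weighted grey relational grade
print("grey relational grades:", np.round(grey_grade, 3))
```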

Keywords: Compromise programming, grey relational analysis, spatial multi-criteria, weight optimisation.

PDF Downloads 609
97 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images

Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn

Abstract:

The detection and segmentation of mitochondria from fluorescence microscopy are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make the task of detection and segmentation challenging. Although there exist a number of open-source software tools and artificial intelligence (AI) methods designed for analyzing mitochondrial images, the scarcity of the combined medical and AI expertise required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using the low-cost, open-source Python and OpenCV libraries, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated using a fluorescence mitochondrial dataset. Ground truth labels generated using Labkit were used to evaluate the performance of our detection and segmentation model using precision, recall and the Rand index. The study produces good detection and segmentation results and reports the challenges encountered during the image analysis of mitochondrial morphology from the fluorescence mitochondrial dataset. A discussion on the methods and future perspectives of fully automated frameworks concludes the paper.
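
The three stages named in the abstract (pre-processing, binarization, coarse-to-fine segmentation) can be sketched with OpenCV as shown below. The file name, CLAHE settings and area thresholds are hypothetical placeholders, not the study's actual parameters.

```python
# Illustrative sketch: CLAHE pre-processing, Otsu binarization, and component filtering with OpenCV.
import cv2
import numpy as np

image = cv2.imread("mitochondria_frame.tif", cv2.IMREAD_GRAYSCALE)  # placeholder path

# 1) Pre-processing: contrast-limited adaptive histogram equalisation plus denoising.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(image)
blurred = cv2.GaussianBlur(enhanced, (5, 5), 0)

# 2) Binarization: Otsu threshold on the enhanced image.
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3) Coarse-to-fine segmentation: remove noise, then keep plausibly sized components.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(opened, connectivity=8)

MIN_AREA, MAX_AREA = 30, 5000          # pixel-area bounds for a single mitochondrion (placeholder)
mask = np.zeros_like(opened)
for label in range(1, n_labels):       # label 0 is the background
    if MIN_AREA <= stats[label, cv2.CC_STAT_AREA] <= MAX_AREA:
        mask[labels == label] = 255

print(f"kept {int((mask > 0).sum())} foreground pixels "
      f"from {n_labels - 1} candidate components")
cv2.imwrite("mitochondria_mask.png", mask)
```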

Keywords: 2D, Binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation.

PDF Downloads 369