Search results for: energy performance gap
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18793

10333 Computer Aided Diagnosis Bringing Changes in Breast Cancer Detection

Authors: Devadrita Dey Sarkar

Abstract:

Despite the many technological advances of the past decade, increased training and experience, and the obvious benefits of uniform standards, the false-negative rate in screening mammography remains unacceptably high. This abstract presents a computer-aided neural network classification of regions of suspicion (ROS) on digitized mammograms, employing features extracted by a new technique based on independent component analysis. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance of computers does not have to be comparable to or better than that of physicians, but needs to be complementary to it. In fact, a large number of CAD systems have been employed to assist physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral breast images has the potential to improve the overall performance in the detection of breast lumps. Because breast lumps can be detected reliably by computer on lateral breast mammograms, radiologists’ accuracy in the detection of breast lumps would be improved by the use of CAD, and thus early diagnosis of breast cancer would become possible. In the future, many CAD schemes could be assembled as packages and implemented as part of PACS (picture archiving and communication systems). For example, the package for breast CAD may include the computerized detection of breast nodules as well as the computerized classification of benign and malignant nodules. To assist in differential diagnosis, these CAD systems could also search for and retrieve similar images (or lesions), which would be a reliable and useful method for quantifying the similarity of a pair of images for visual comparison by radiologists.
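As an illustrative sketch only (not the paper's network or data), a minimal classifier of ROS feature vectors can be trained with plain gradient descent on the logistic loss; the synthetic four-dimensional "ICA features" below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic 4-dimensional "ROS feature vectors" (stand-ins for ICA-derived
# features): class 0 = normal tissue, class 1 = suspicious region
X = np.vstack([rng.normal(-1.0, 0.5, (100, 4)),
               rng.normal(+1.0, 0.5, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

# single-layer network (logistic regression) trained by gradient descent
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of ROS
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == y).mean()
```

In a real CAD pipeline, the feature extraction step (here faked with Gaussian clusters) is where the independent component analysis described above would sit.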

Keywords: CAD (computer-aided diagnosis), lesions, neural network, ROS (region of suspicion)

Procedia PDF Downloads 452
10332 Discriminant Analysis of Pacing Behavior on Mass Start Speed Skating

Authors: Feng Li, Qian Peng

Abstract:

The mass start speed skating (MSSS) was a new event at the 2018 PyeongChang Winter Olympics and will be an official race at the 2022 Beijing Winter Olympics. Considering that event rankings are based on points gained on laps, it is worthwhile to investigate the pacing behavior on each lap, which directly influences the ranking of the race. The aim of this study was to examine pacing behavior and performance in MSSS with regard to skaters' level (SL), competition stage (semi-final/final) (CS), and gender (G). All men's and women's races in the World Cup and World Championships in the 2018-2019 and 2019-2020 seasons were analyzed; in total, 601 skaters from 36 races were observed. ANOVA for repeated measures was applied to compare pacing behavior on each lap, and three-way ANOVA for repeated measures was used to identify the influence of SL, CS, and G on pacing behavior and total time spent. In general, the results showed that the lap clusters, ordered from fast to slow, were cluster 1 (laps 4, 8, 12, 15, 16), cluster 2 (laps 5, 9, 13, 14), cluster 3 (laps 3, 6, 7, 10, 11), and cluster 4 (laps 1 and 2) (p<0.001). For CS, the total time spent in the final was less than in the semi-final (p<0.001). For SL, top-level skaters spent less total time than middle-level and low-level skaters (p≤0.002), while there was no significant difference between middle-level and low-level skaters (p=0.214). For G, men's skaters spent less total time than women on all laps (p≤0.048). This study could help coaching staff better understand pacing behavior with regard to SL, CS, and G, providing references for improving pacing strategy and decision-making before and during the race.

Keywords: performance analysis, pacing strategy, winning strategy, winter Olympics

Procedia PDF Downloads 187
10331 Study of Radiological and Chemical Effects of Uranium in Ground Water of SW and NE Punjab, India

Authors: Komal Saini, S. K. Sahoo, B. S. Bajwa

Abstract:

The laser fluorimetry technique has been used for the microanalysis of uranium content in water samples collected from different sources, such as hand pumps and tube wells, in the drinking water of SW and NE Punjab, India. The study region in NE Punjab lies between latitudes 31.21º-32.05º N and longitudes 75.60º-76.14º E, and in SW Punjab between latitudes 29.66º-30.48º N and longitudes 74.69º-75.54º E. The purpose of this study was mainly to investigate the uranium concentration levels of ground water used for drinking and to determine its health effects, if any, on the local population of these regions. In the present study, 131 drinking water samples collected from different villages of SW Punjab and 95 samples from NE Punjab, India, were analyzed for chemical and radiological toxicity. Uranium content in the water samples of SW Punjab ranges from 0.13 to 908 μgL−1 with an average of 82.1 μgL−1, whereas in samples collected from NE Punjab it ranges from 0 to 28.2 μgL−1 with an average of 4.84 μgL−1. This reveals that in SW Punjab 54% of drinking water samples have uranium concentrations higher than the international recommended limit of 30 μgL−1 (WHO, 2011), while 35% of samples exceed the threshold of 60 μgL−1 recommended by the national regulatory authority, the Atomic Energy Regulatory Board (AERB), Department of Atomic Energy, India, 2004. In the NE Punjab region, on the other hand, none of the observed water samples has uranium content above the national/international recommendations. The observed radiological risk in terms of excess cancer risk ranges from 3.64x10-7 to 2.54x10-3 for SW Punjab, whereas for the NE region it ranges from 0 to 7.89x10-5. The chemical toxic effect in terms of lifetime average daily dose (LDD) and hazard quotient (HQ) has also been calculated.
The LDD for SW Punjab varies from 0.0098 to 68.46 µg/kg/day with an average of 6.18 µg/kg/day, whereas for the NE region it varies from 0 to 2.13 µg/kg/day with an average of 0.365 µg/kg/day, indicating chemical toxicity in SW Punjab, as 35% of the observed samples there are above the limit of 4.53 µg/kg/day recommended by the AERB for 60 μgL−1 of uranium. The minimum and maximum values of the hazard quotient for SW Punjab are 0.002 and 15.11, with an average of 1.36, which is considerably high compared to the safe limit of 1. For NE Punjab, the HQ varies from 0 to 0.47. The possible sources of the high uranium observed in SW Punjab are also discussed.
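The chemical toxicity indices above follow standard dose formulas; as an illustrative sketch (not code from the paper), the LDD and HQ can be computed from a measured concentration. The daily water intake (about 4.05 L) and reference body weight (about 53.6 kg) are assumptions chosen here only so that 60 µgL−1 reproduces the AERB limit of 4.53 µg/kg/day quoted in the abstract:

```python
def lifetime_daily_dose(c_ug_per_l, intake_l_per_day=4.05, body_weight_kg=53.6):
    """LDD in ug/kg/day: daily uranium intake normalized by body weight.
    Intake and body weight are assumed reference values, not from the paper."""
    return c_ug_per_l * intake_l_per_day / body_weight_kg

def hazard_quotient(ldd_ug_per_kg_day, reference_dose=4.53):
    """HQ > 1 indicates potential chemical toxicity (AERB reference dose)."""
    return ldd_ug_per_kg_day / reference_dose
```

With these assumed intake and weight values, the maximum SW Punjab concentration of 908 µgL−1 gives an HQ near the 15.11 reported above.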

Keywords: uranium, groundwater, radiological and chemical toxicity, Punjab, India

Procedia PDF Downloads 370
10330 Predictive Analytics in Oil and Gas Industry

Authors: Suchitra Chnadrashekhar

Abstract:

Information technology, earlier viewed as a support function within an organization, has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data, unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. IT has been a lever for the oil and gas industry to store, manage, and process data in the most efficient way possible, thus deriving economic value from day-to-day operations. Proper synchronization between operational data systems and information technology systems is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity, and security, and by increasing utilization. Predictive analytics goes beyond early warning by providing insight into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic, systems approach to asset optimization, making functional information available at all levels of the organization so the right decisions can be made. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of this sector. The paper is supported by real-time data and an evaluation of the data for a given oil production asset on an application tool, SAS. SAS was chosen for our analysis because it provides an analytics-based framework to improve uptime, performance, and availability of crucial assets while reducing unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, maintenance problems can be predicted before they happen and root causes determined in order to update processes for future prevention.
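The paper's analysis is done in SAS; purely as an illustrative sketch of the early-warning idea (not the paper's actual model), a rolling z-score over a sensor stream flags readings that deviate sharply from recent behavior. The window size and threshold below are assumed parameters:

```python
import numpy as np

def rolling_zscore_alerts(series, window=50, threshold=4.0):
    """Flag indices whose value deviates from the trailing window
    by more than `threshold` standard deviations."""
    alerts = []
    for i in range(window, len(series)):
        ref = series[i - window:i]          # trailing reference window
        mu, sd = ref.mean(), ref.std()
        if sd > 0 and abs(series[i] - mu) / sd > threshold:
            alerts.append(i)
    return alerts

# synthetic sensor trace with one injected fault
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, 200)
trace[150] += 10.0                          # simulated equipment anomaly
alerts = rolling_zscore_alerts(trace)
```

An early-warning system built this way only detects deviations; the root-cause analysis described above requires linking such alerts back to operational data.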

Keywords: hydrocarbon, information technology, SAS, predictive analytics

Procedia PDF Downloads 341
10329 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

In recent years, with the rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a particular challenge: numerous regulations require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model has been developed at Dun and Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards.
Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which provides an estimate of the WoE for each bin. This capability helps to build powerful scorecards for sparse cases that cannot be handled with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while remaining as transparent as traditional scorecards. It is therefore concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern over the difficulty of explaining the models for regulatory purposes.
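As background for the matching step described above, the conventional bin-level WoE that the Hybrid Model approximates can be sketched as follows (an illustrative example, not Dun and Bradstreet code; the bin counts are hypothetical):

```python
import numpy as np

def woe_per_bin(goods, bads):
    """Conventional Weight of Evidence per score bin:
    ln(share of goods in bin / share of bads in bin)."""
    goods = np.asarray(goods, dtype=float)
    bads = np.asarray(bads, dtype=float)
    return np.log((goods / goods.sum()) / (bads / bads.sum()))

# hypothetical two-bin portfolio: bin 1 is good-heavy, bin 2 bad-heavy
woe = woe_per_bin([80, 20], [20, 80])
```

With sparse bins (few goods or bads), these direct ratios become unstable, which is exactly where estimating the WoE from an ML score distribution, as the Hybrid Model does, helps.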

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 123
10328 Bracing Applications for Improving the Earthquake Performance of Reinforced Concrete Structures

Authors: Diyar Yousif Ali

Abstract:

Braced frames, alongside other structural systems such as shear walls or moment-resisting frames, have been a valuable and effective technique for strengthening structures against seismic loads. Under wind or seismic excitation, diagonal members act as truss web elements carrying tension or compression stresses. This study considers the effect of bracing diagonal configuration on the base shear and displacement of the building. Two models were created, and nonlinear pushover analysis was implemented. Results show that bracing members enhance the lateral load performance of the concentrically braced frame (CBF) considerably. The purpose of this article is to study the nonlinear response of reinforced concrete structures that contain hollow-pipe steel braces as the major structural elements against earthquake loads. A five-storey reinforced concrete structure was selected, and two different reinforced concrete frames were considered. The first system was an un-braced frame, while the second was a frame with diagonal bracing. Analytical models of the bare frame and braced frame were built in SAP2000. The performance of all structures was evaluated using nonlinear static analyses, from which base shear and displacements were compared. Results are plotted in diagrams and discussed extensively; the analyses showed that the braced frame was capable of carrying more lateral load, had higher stiffness, and had a lower roof displacement than the bare frame.

Keywords: reinforced concrete structures, pushover analysis, base shear, steel bracing

Procedia PDF Downloads 80
10327 Physical and Physiological Characteristics of Young Soccer Players in Republic of Macedonia

Authors: Sanja Manchevska, Vaska Antevska, Lidija Todorovska, Beti Dejanova, Sunchica Petrovska, Ivanka Karagjozova, Elizabeta Sivevska, Jasmina Pluncevic Gligoroska

Abstract:

Introduction: A number of positive effects on a player’s physical status, including body mass components, are attributed to the training process. As young soccer players grow up, qualitative and quantitative changes appear and contribute to better performance. Players' anthropometric and physiological characteristics are recognized as important determinants of performance. Material: A sample of 52 soccer players aged 9 to 14 years was divided into two groups by age. The younger group consisted of 25 boys under 11 years (mean age 10.2), and the second group consisted of 27 boys with a mean age of 12.64. Method: A set of basic anthropometric parameters was analyzed: height, weight, BMI (body mass index), and body mass components. Maximal oxygen uptake was tested using the Bruce treadmill protocol. Results: The group aged under 11 years showed the following anthropometric and physiological features: average height 143.39 cm, average weight 44.27 kg, BMI 18.77, erythrocytes (Er) 5.04, Hb 13.78 g/l, VO2 37.72 ml O2/kg. For participants aged 12 to 14 years, average height was 163.7 cm, weight 56.3 kg, BMI 19.6, VO2 39.52 ml/kg, Er 5.01, Hb 14.3 g/l. Conclusion: Physiological parameters (maximal oxygen uptake, erythrocytes, and Hb) were insignificantly higher in the older group than in the younger group. There were no statistically significant differences between the two groups in the analyzed anthropometric parameters except for the basic measurements (height and weight).

Keywords: body composition, young soccer players, BMI, physical status

Procedia PDF Downloads 391
10326 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations that enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular for developing data-driven insights that lead to better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data, and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data.
Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. The extracted knowledge and patterns are finally integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery, and greater economic return for future wells in unconventional oil reserves.
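As a minimal sketch of one of the listed steps, principal component analysis can be implemented via the SVD of the centred data matrix; the synthetic two-variable dataset below is only for illustration, not data from the study:

```python
import numpy as np

def pca(X, n_components=2):
    """Project centred data onto its leading principal directions."""
    Xc = X - X.mean(axis=0)                  # centre each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)          # variance ratio per component
    return Xc @ Vt[:n_components].T, explained[:n_components]

# two strongly correlated variables, e.g. a completion metric tracking
# a production metric, plus small measurement noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
X = np.column_stack([3.0 * t, t]) + rng.normal(0.0, 0.01, (100, 2))
scores, ratio = pca(X, n_components=1)
```

Because the two variables are almost collinear, a single component captures nearly all the variance, which is the sense in which PCA makes a large well database easier to explore and visualize.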

Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves

Procedia PDF Downloads 273
10325 Improving the Safety Performance of Workers by Assessing the Impact of Safety Culture on Workers’ Safety Behaviour in Nigeria Oil and Gas Industry: A Pilot Study in the Niger Delta Region

Authors: Efua Ehiaguina, Haruna Moda

Abstract:

Interest in the development of an appropriate safety culture in the oil and gas industry has taken centre stage among stakeholders. Human behaviour has been identified as a major contributor to occupational accidents, where abnormal activities associated with safety management are taken as normal behaviour. Poor safety culture is one of the major factors that influence employees' safety behaviour at work, which may consequently result in injuries and accidents; strengthening such a culture can improve workers' safety performance. The Nigerian oil and gas industry has contributed to the growth and development of the country in diverse ways. However, in terms of workers' safety and health, the industry is a dangerous place to work, as workers are often exposed to occupational safety and health hazards. To ascertain the impact of safety culture on employees and on health and safety compliance within the local industry, an online safety culture survey targeting frontline workers was administered, covering major subjects that include perception of management commitment and style of leadership; safety communication methods and their impact on employees' behaviour; and employee safety commitment and training needs. The preliminary results revealed that 54% of participants feel there is a lack of motivation from management to work safely. In addition, 55% of participants revealed that employers place more emphasis on work delivery than on employees' safety on the installation. The study outcome is expected to provide measures aimed at strengthening and sustaining safety culture in the Nigerian oil and gas industry.

Keywords: oil and gas safety, safety behaviour, safety culture, safety compliance

Procedia PDF Downloads 129
10324 The Feasibility of Anaerobic Digestion at 45°C

Authors: Nuruol S. Mohd, Safia Ahmed, Rumana Riffat, Baoqiang Li

Abstract:

Anaerobic digestion at mesophilic and thermophilic temperatures has been widely studied and evaluated by numerous researchers. Little extensive research has been conducted on anaerobic digestion in the intermediate zone of 45°C, mainly due to the notion that limited microbial activity occurs within this zone. The objectives of this research were to evaluate the performance of anaerobic digestion at 45°C and its capability of producing Class A biosolids, in comparison to mesophilic and thermophilic systems operated at 35°C and 55°C, respectively, and to investigate possible inhibition factors affecting performance at this temperature. The 45°C anaerobic digestion systems were not able to achieve methane yields and effluent quality comparable to the mesophilic system, even though they produced biogas with about 62-67% methane. The 45°C digesters suffered from high acetate accumulation, but sufficient buffering capacity was observed, as the pH, alkalinity, and volatile fatty acid (VFA)-to-alkalinity ratio were within recommended values. The acetate accumulation observed in the 45°C systems was presumably due to the high temperature, which contributed to a high hydrolysis rate. Consequently, a large amount of toxic salts was produced that combined with the substrate, making it not readily available for consumption by methanogens. Acetate accumulation, even though it contributed to a 52-71% reduction in the acetate degradation process, could not be considered completely inhibitory. Additionally, at 45°C no ammonia inhibition was observed, and the digesters achieved a volatile solids (VS) reduction of 47.94±4.17%. Pathogen counts were less than 1,000 MPN/g total solids, thus producing Class A biosolids.

Keywords: 45°C anaerobic digestion, acetate accumulation, class A biosolids, salt toxicity

Procedia PDF Downloads 299
10323 A Numerical Study on Semi-Active Control of a Bridge Deck under Seismic Excitation

Authors: A. Yanik, U. Aldemir

Abstract:

This study investigates the benefits of implementing semi-active devices relative to passive viscous damping in the context of seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time-history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is compared with passive linear viscous damping and with an optimal non-causal semi-active control strategy. The control strategy requires optimization; Euler-Lagrange equations are solved numerically during this procedure. The optimal closed-loop performance is evaluated for an idealized controllable dashpot. A simplified single-degree-of-freedom model of an isolated bridge is used as the numerical example. Two bridge cases are investigated: the bridge deck without the isolation bearing and the bridge deck with the isolation bearing. To compare the performance of the passive and semi-active control cases, frequency-dependent acceleration, velocity, and displacement response transmissibility ratios Ta(w), Tv(w), and Td(w) are defined. To fully investigate the behavior of the structure subjected to sinusoidal and pulse-type excitations, different damping levels are considered. Numerical results showed that, under external excitation, the bridge deck with semi-active control exhibited better structural performance than the passive case.
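The transmissibility ratios named above have closed forms for the idealized SDOF case; as a sketch (using the classical base-excitation displacement transmissibility, not the paper's computed frequency response functions), Td(w) in terms of the frequency ratio r = w/wn and damping ratio zeta is:

```python
import numpy as np

def displacement_transmissibility(w, wn, zeta):
    """Classical base-excitation displacement transmissibility Td(w)
    for a single-degree-of-freedom system with natural frequency wn."""
    r = np.asarray(w, dtype=float) / wn      # frequency ratio
    num = np.sqrt(1.0 + (2.0 * zeta * r)**2)
    den = np.sqrt((1.0 - r**2)**2 + (2.0 * zeta * r)**2)
    return num / den
```

At r = sqrt(2) the curve passes through 1 regardless of damping; above that crossover, which motivates isolation design, lower damping transmits less motion.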

Keywords: bridge structures, passive control, seismic, semi-active control, viscous damping

Procedia PDF Downloads 231
10322 The Nexus between Manpower Training and Corporate Compliance

Authors: Timothy Wale Olaosebikan

Abstract:

The most active resource in any organization is its manpower. Every other resource remains inactive unless there is competent manpower to handle it. Manpower training is needed to enhance productivity and the overall performance of organizations, in recognition of the important role manpower training plays in the attainment of organizational goals. Corporate compliance conjures visions of an incomprehensible matrix of laws and regulations that defy logic and control by even the most seasoned manpower training professionals. Similarly, corporate compliance can be viewed as one of the most significant problems faced in the manpower training process of any organization, and it therefore commands attention and comprehension. Consequently, this study investigated the nexus between manpower training and corporate compliance. Data were collected through a questionnaire, with a sample of 265 drawn by stratified random sampling, and analyzed using descriptive and inferential statistics. The findings show that about 75% of respondents agree that there is a strong relationship between manpower training and corporate compliance, which determines what the organization gains from any training process. The findings further show that most organizations do not fully comply with the rules guiding the manpower training process, making the process less effective for organizational performance, which may affect overall profitability. The study concludes that formulating and complying with adequate rules and guidelines for manpower training will produce effective results for both employees and the organization at large. The study recommends that leaders of organizations, industries, and institutions ensure total compliance with manpower training rules on the part of both employees and the organization.
Organizations and stakeholders should also ensure that strict policies on corporate compliance with manpower training form the heart of their cardinal mission.

Keywords: corporate compliance, manpower training, nexus, rules and guidelines

Procedia PDF Downloads 128
10321 Design and Implementation of Low-code Model-building Methods

Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu

Abstract:

This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. With an intuitive drag-and-drop interface for connecting components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model service API. This method not only lowers the technical threshold of AI development and improves development efficiency, but also enhances the flexibility of algorithm integration and simplifies model deployment. The core strengths of this method are its ease of use and efficiency. Users do not need a deep programming background and can complete the design and implementation of complex models with simple drag-and-drop operations. This feature greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the models. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model construction methods, this method makes more efficient use of computing resources and greatly shortens model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability and can adapt to the needs of different application scenarios.
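A minimal sketch of the drag-and-drop idea (hypothetical component names, not the system's actual API): each available component becomes an entry in a registry, and the visual graph the user wires up reduces to an ordered configuration that an engine executes:

```python
# registry maps component names to callables; the "drag-and-drop" graph
# reduces to an ordered list of configured steps (all names hypothetical)
REGISTRY = {
    "scale":  lambda data, factor=1.0: [x * factor for x in data],
    "offset": lambda data, amount=0.0: [x + amount for x in data],
    "mean":   lambda data: sum(data) / len(data),
}

def run_pipeline(config, data):
    """Execute components in the order the user wired them together."""
    for step in config:
        fn = REGISTRY[step["component"]]
        data = fn(data, **step.get("params", {}))
    return data

# a three-node graph: scale -> offset -> mean
config = [
    {"component": "scale", "params": {"factor": 2.0}},
    {"component": "offset", "params": {"amount": 1.0}},
    {"component": "mean"},
]
result = run_pipeline(config, [1, 2, 3])
```

Here the user composes a working pipeline purely through configuration, never writing the components themselves; a generated model service API would simply wrap `run_pipeline` behind an HTTP endpoint.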

Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment

Procedia PDF Downloads 8
10320 Improving Vocabulary and Listening Comprehension via Watching French Films without Subtitles: Positive Results

Authors: Yelena Mazour-Matusevich, Jean-Robert Ancheta

Abstract:

This study is based on more than fifteen years of experience teaching a foreign language, in my case French, to English-speaking students. It presents qualitative research on foreign language learners' reactions and their gains in vocabulary and listening comprehension through repeated viewing of foreign feature films with the original soundtrack but without English subtitles. The initial idea emerged upon the realization that the first challenge my students face in a francophone environment has been their lack of listening comprehension. Their inability to understand colloquial speech affects not only their academic performance but their psychological health as well. To remedy this problem, I designed and have applied for many years my own teaching method based on one particular French film, exceptionally well suited, for reasons described in detail in the paper, to intermediate-advanced foreign language learners. This project, conducted together with my undergraduate assistant and mentee J-R Ancheta, aims to show how paralinguistic features, such as characters' facial expressions, settings, music, historical background, and images provided before the actual viewing, offer crucial support and enhance students' listening comprehension. The study, based on student interviews, also offers specific pedagogical techniques, such as 'anticipatory' vocabulary lists and exercises, drills, quizzes, and composition topics, that have proven to boost students' performance. For this study, only the listening proficiency and vocabulary gains of the interviewed participants were assessed.

Keywords: comprehension, film, listening, subtitles, vocabulary

Procedia PDF Downloads 612
10319 Response of Local Cowpea to Intra Row Spacing and Weeding Regimes in Yobe State, Nigeria

Authors: A. G. Gashua, T. T. Bello, I. Alhassan, K. K. Gwiokura

Abstract:

Weeds are known to interfere seriously with crop growth, thereby affecting the productivity and quality of crops. Crops are also known to compete for natural growth resources if they are not adequately spaced, also affecting the performance of the growing crop. Farmers grow cowpea in mixtures with cereals and this is known to affect its yield. For this reason, a field experiment was conducted at Yobe State College of Agriculture Gujba, Damaturu station in the 2014 and 2015 rainy seasons to determine the appropriate intra row spacing and weeding regime for optimum growth and yield of cowpea (Vigna unguiculata L.) in pure stand in Sudan Savanna ecology. The treatments consist of three levels of spacing within rows (20 cm, 30 cm and 40 cm) and four weeding regimes (none, once at 3 weeks after sowing (WAS), twice at 3 and 6WAS, thrice at 3WAS, 6WAS and 9WAS); arranged in a Randomized Complete Block Design (RCBD) and replicated three times. The variety used was the local cowpea variety (white, early and spreading) commonly grown by farmers. The growth and yield data were collected and subjected to analysis of variance using SAS software, and the significant means were ranked by Students Newman Keul’s test (SNK). The findings of this study revealed better crop performance in 2015 than in 2014 despite poor soil condition. Intra row spacing significantly influenced vegetative growth especially the number of main branches, leaves and canopy spread at 6WAS and 9WAS with the highest values obtained at wider spacing (40 cm). The values obtained in 2015 doubled those obtained in 2014 in most cases. Spacing also significantly affected the number of pods in 2015, seed weight in both years and grain yield in 2014 with the highest values obtained when the crop was spaced at 30-40 cm. 
Similarly, weeding regime significantly influenced almost all the growth attributes of cowpea, with higher values obtained where cowpea was weeded three times at 3-week intervals, though statistically similar results were obtained even where it was weeded only twice. Weeding also affected the yield and all yield components in 2015, with the highest values obtained under more frequent weeding. Based on these findings, it is recommended that spreading cowpea varieties be grown at 40 cm (or wider) spacing within rows and be weeded twice at three-week intervals for better crop performance in related ecologies.
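The analysis-of-variance step behind results like these can be sketched for a Randomized Complete Block Design. The yield figures below are illustrative assumptions (3 blocks by 4 weeding regimes), not the trial's measurements; the sums-of-squares partition is the standard RCBD one.

```python
import numpy as np
from scipy import stats

# Hypothetical grain-yield data (kg/ha): rows = 3 blocks (replicates),
# columns = 4 weeding regimes. Values are illustrative only.
yields = np.array([
    [820.0,  990.0, 1120.0, 1150.0],
    [780.0, 1010.0, 1090.0, 1170.0],
    [805.0,  960.0, 1105.0, 1140.0],
])

b, t = yields.shape                      # blocks, treatments
grand = yields.mean()

# RCBD partition: SS_total = SS_blocks + SS_treatments + SS_error
ss_total = ((yields - grand) ** 2).sum()
ss_block = t * ((yields.mean(axis=1) - grand) ** 2).sum()
ss_treat = b * ((yields.mean(axis=0) - grand) ** 2).sum()
ss_error = ss_total - ss_block - ss_treat

df_treat = t - 1
df_error = (b - 1) * (t - 1)
f_treat = (ss_treat / df_treat) / (ss_error / df_error)
p_treat = stats.f.sf(f_treat, df_treat, df_error)   # upper-tail p-value

print(f"F({df_treat},{df_error}) = {f_treat:.2f}, p = {p_treat:.4f}")
```

A mean-separation test such as SNK would then be applied to the treatment means when the F-test is significant.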

Keywords: intra-row spacing, local cowpea, Nigeria, weeding

Procedia PDF Downloads 202
10318 Preparation and Characterization of Pectin Based Proton Exchange Membranes Derived by Solution Casting Method for Direct Methanol Fuel Cells

Authors: Mohanapriya Subramanian, V. Raj

Abstract:

Direct methanol fuel cells (DMFCs) are considered one of the most promising candidates for portable and stationary applications in view of their advantages, such as high energy density, easy handling and high efficiency, and because they operate on a liquid fuel that can be used without any fuel-processing units. The electrolyte membrane of a DMFC plays a key role as a proton conductor as well as a separator between the electrodes. With increasing concern over environmental protection, biopolymers have gained tremendous interest owing to their eco-friendly, biodegradable nature. Pectin is a natural anionic polysaccharide that plays an essential part in regulating the mechanical behavior of the plant cell wall, and it is extracted from the outer cells of most plants. The aim of this study is to develop and demonstrate pectin-based polymer composite membranes as methanol-impermeable polymer electrolyte membranes for DMFCs. Pectin-based nanocomposite membranes are prepared by the solution-casting technique, wherein pectin is blended with chitosan, followed by the addition of an optimal amount of sulphonic acid-modified titanium dioxide nanoparticles (S-TiO2). The nanocomposite membranes are characterized by Fourier-transform infrared spectroscopy, scanning electron microscopy, and energy-dispersive spectroscopy analyses. Proton conductivity and methanol permeability are determined in order to evaluate their suitability for DMFC application. Pectin-chitosan blends provide a flexible polymeric network that is appropriate for dispersing the rigid S-TiO2 nanoparticles. The resulting nanocomposite membranes possess adequate thermo-mechanical stability as well as high charge density per unit volume. A pectin-chitosan natural polymeric nanocomposite comprising an optimal amount of S-TiO2 exhibits good electrochemical selectivity and is therefore desirable for DMFC application.

Keywords: biopolymers, fuel cells, nanocomposite, methanol crossover

Procedia PDF Downloads 128
10317 Soliton Solutions in (3+1)-Dimensions

Authors: Magdy G. Asaad

Abstract:

Solitons are among the most useful solutions in science and technology owing to their applicability in physical settings including plasmas, energy transport along protein molecules, wave transport along polyacetylene molecules, ocean waves, optical communication systems, transmission of information through optical fibers, and Josephson junctions. In this talk, we apply the bilinear technique to generate a class of soliton solutions to the (3+1)-dimensional nonlinear soliton equation of Jimbo-Miwa type. Examples of the resulting soliton solutions are computed, and a few of them are plotted.
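As background for the bilinear technique mentioned above, the commonly cited form of the (3+1)-dimensional Jimbo-Miwa equation and its Hirota bilinear form can be sketched as follows. The one-soliton dispersion relation is derived here for a simple exponential ansatz and should be checked against the talk's own conventions:

```latex
% (3+1)-dimensional Jimbo-Miwa equation
u_{xxxy} + 3\,u_y u_{xx} + 3\,u_x u_{xy} + 2\,u_{yt} - 3\,u_{xz} = 0 .

% The substitution u = 2(\ln f)_x yields the Hirota bilinear form
\left( D_x^3 D_y + 2 D_y D_t - 3 D_x D_z \right) f \cdot f = 0 ,

% and a one-soliton solution follows from f = 1 + e^{\eta} with
% \eta = kx + ly + mz + \omega t, provided the dispersion relation holds:
k^3 l + 2 l \omega - 3 k m = 0
\quad\Longrightarrow\quad
\omega = \frac{3km - k^3 l}{2l}, \qquad l \neq 0 .
```

N-soliton and Pfaffian solutions are then built by extending f to sums over multiple exponentials.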

Keywords: Pfaffian solutions, N-soliton solutions, soliton equations, Jimbo-Miwa

Procedia PDF Downloads 438
10316 Effect of Temperature on the Properties of Cement Paste Modified with Nanoparticles

Authors: Karine Pimenta Teixeira, Jessica Flores, Isadora Perdigão Rocha, Leticia de Sá Carneiro, Mahsa Kamali, Ali Ghahremaninezhad

Abstract:

The advent of nanotechnology has enabled innovative solutions for improving the behavior of infrastructure materials. Nanomaterials have the potential to revolutionize the construction industry by improving the performance and durability of construction materials, as well as imparting new functionalities to them. Because the environmental temperature varies during mixing and curing of cementitious materials in practice, it is important to understand how curing temperature influences their behavior. High-temperature curing is also directly relevant in applications such as oil well cementing and the precast industry, which makes knowledge of the temperature response of nanoparticle-modified cementitious materials important for their nanoengineering. This presentation investigates the influence of temperature on the hydration, mechanical properties and durability of cementitious materials modified with TiO2 nanoparticles. It was found that elevated temperature accelerated early hydration. The cement pastes cured at high temperatures showed increased compressive strength at early ages, but their strength gain decreased at later ages. The electrical resistivity of the cement pastes cured at high temperatures decreased more noticeably at later ages compared to that of the room-temperature-cured cement paste. SEM examination indicated that the hydration product was more uniformly distributed in the microstructure of the cement paste cured at room temperature than in the pastes cured at high temperature.

Keywords: cement paste, nanoparticles, temperature, hydration

Procedia PDF Downloads 310
10315 Evaluation of Pile Performance in Different Layers of Soil

Authors: Orod Zarrin, Mohesn Ramezan Shirazi, Hassan Moniri

Abstract:

Pile foundations are used to support structures and buildings on soft soil. The most important dynamic load that can affect a pile structure is earthquake vibration. Observations of pile foundations during earthquake excitation indicate that piles are subject to damage that affects superstructure integrity and serviceability. During an earthquake, two types of stresses can damage the pile head: inertial load caused by the superstructure and deformation caused by the surrounding soil. Soil deformation and inertial load are associated with the acceleration developed in an earthquake. The acceleration amplitude at the ground surface depends on the magnitude of the earthquake, the soil properties and the distance from the seismic source. According to the investigation, damage concentrates at the interfaces between liquefiable and non-liquefiable layers and between soft and stiff layers. This damage crushes the pile head by increasing the inertial load applied by the superstructure. On the other hand, cracks in the piles due to the surrounding soil are directly related to the soil profile and range from small to large. The causes of large cracks include liquefaction, lateral spreading, and inertial load. In design, achieving an elastic response of piles in liquefiable soil is a persistent challenge when deflection at the pile top is allowed. Moreover, the absence of plastic hinges in piles must be ensured, because damage in the piles cannot be observed directly. In this study, the performance and behavior of pile foundations during liquefaction and lateral spreading are investigated. In addition, soil behavior in the liquefiable and non-liquefiable layers is examined with respect to different aspects of pile damage, such as its ranking, location and degree.

Keywords: pile, earthquake, liquefaction, non-liquefiable, damage

Procedia PDF Downloads 292
10314 Production of Pre-Reduction of Iron Ore Nuggets with Lesser Sulphur Intake by Devolatisation of Boiler Grade Coal

Authors: Chanchal Biswas, Anrin Bhattacharyya, Gopes Chandra Das, Mahua Ghosh Chaudhuri, Rajib Dey

Abstract:

Boiler coals with low fixed carbon and high ash content have always challenged metallurgists to develop a suitable method for their utilization. In the present study, an attempt is made to establish an energy-effective method for the reduction of iron ore fines in the form of nuggets using syngas. By devolatisation (expulsion of volatile matter by applying heat) of boiler coal, a gaseous product enriched with reducing agents such as CO, H2 and CH4, along with CO2, is generated. The iron ore nuggets are reduced by this syngas, so there is no direct contact between the nuggets and the coal ash, which helps minimize the sulphur intake of the reduced nuggets. A laboratory-scale devolatisation furnace with a reduction facility was designed and evaluated after in-depth studies and exhaustive experimentation, including thermo-gravimetric (TG-DTA) analysis to determine the volatile fraction present in boiler-grade coal, gas chromatography (GC) to determine the syngas composition at different temperatures, and furnace temperature-gradient measurements to minimize the furnace cost by using a single heating coil. The nuggets were reduced in the devolatisation furnace at three different temperatures and three different durations. The pre-reduced nuggets were subjected to weight-loss calculations to evaluate the extent of reduction. The phase and surface morphology of the pre-reduced samples were characterized using X-ray diffractometry (XRD), energy-dispersive X-ray spectrometry (EDX), scanning electron microscopy (SEM), a carbon-sulphur analyzer and chemical analysis. The degree of metallization of the reduced nuggets was 78.9% using boiler-grade coal. The pre-reduced nuggets, with their lower sulphur content, could be used in the blast furnace as raw material or coolant, which would reduce the coke rate of the furnace owing to their pre-reduced character. They can also be used as coolant in the Basic Oxygen Furnace (BOF).
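The degree of metallization quoted above is conventionally the metallic-iron share of the total iron in the reduced product. A minimal sketch follows; the Fe assay values in the example are hypothetical, chosen only so the ratio lands near the reported 78.9%:

```python
# Degree of metallization, as commonly defined in direct-reduced-iron
# practice: the metallic-iron fraction of the total iron, in percent.
def degree_of_metallization(fe_metallic_pct: float, fe_total_pct: float) -> float:
    """Return metallization (%) from chemical-analysis Fe contents."""
    if fe_total_pct <= 0 or fe_metallic_pct > fe_total_pct:
        raise ValueError("require 0 < Fe(metallic) <= Fe(total)")
    return 100.0 * fe_metallic_pct / fe_total_pct

# e.g. a hypothetical nugget assaying 71.0 % metallic Fe and 90.0 % total Fe
print(f"{degree_of_metallization(71.0, 90.0):.1f} %")
```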

Keywords: alternative ironmaking, coal gasification, extent of reduction, nugget making, syngas based DRI, solid state reduction

Procedia PDF Downloads 250
10313 Performance of the Aptima® HIV-1 Quant Dx Assay on the Panther System

Authors: Siobhan O’Shea, Sangeetha Vijaysri Nair, Hee Cheol Kim, Charles Thomas Nugent, Cheuk Yan William Tong, Sam Douthwaite, Andrew Worlock

Abstract:

The Aptima® HIV-1 Quant Dx Assay is a fully automated assay on the Panther system based on transcription-mediated amplification and real-time detection technologies. The assay is intended for monitoring HIV-1 viral load in plasma specimens and for the detection of HIV-1 in plasma and serum specimens. Nine hundred and seventy-nine specimens selected at random from routine testing at St Thomas' Hospital, London, were anonymised and used to compare the performance of the Aptima HIV-1 Quant Dx assay and the Roche COBAS® AmpliPrep/COBAS® TaqMan® HIV-1 Test, v2.0. Two hundred and thirty-four specimens gave quantitative HIV-1 viral load results in both assays. The quantitative results reported by the Aptima assay were comparable to those reported by the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 Test, v2.0, with a linear regression slope of 1.04 and an intercept of -0.097. The Aptima assay detected HIV-1 in more samples than the Roche assay. This was not due to a lack of specificity of the Aptima assay, which showed 99.83% specificity when testing plasma specimens from 600 HIV-1-negative individuals. To understand the reason for this higher detection rate, a side-by-side comparison of low-level panels made from the HIV-1 3rd International Standard (NIBSC 10/152) and clinical samples of various subtypes was performed in both assays. The Aptima assay was more sensitive than the Roche assay. Its good sensitivity, specificity and agreement with other commercial assays make the Aptima HIV-1 Quant Dx Assay appropriate for both viral load monitoring and the detection of HIV-1 infection.

Keywords: HIV viral load, Aptima, Roche, Panther system

Procedia PDF Downloads 359
10312 Performance Analysis of VoIP Coders for Different Modulations Under Pervasive Environment

Authors: Jasbinder Singh, Harjit Pal Singh, S. A. Khan

Abstract:

This paper compares speech signals encoded by different narrow-band and wide-band VoIP codecs under different modulation schemes. The simulation results indicate that the choice of codec affects speech quality, which is also influenced by the modulation scheme.

Keywords: VoIP, coders, modulations, BER, MOS

Procedia PDF Downloads 498
10311 Modelling Optimal Control of Diabetes in the Workplace

Authors: Eunice Christabel Chukwu

Abstract:

Introduction: Diabetes is a chronic medical condition characterized by high levels of glucose in the blood and urine; it is usually diagnosed by means of a glucose tolerance test (GTT). Left unmanaged, diabetes can lead to serious complications, so it is essential to manage the condition effectively, particularly in the workplace, where the impact on work productivity can be significant. This paper discusses the modelling of optimal control of diabetes in the workplace using a control-theory approach. Background: Diabetes mellitus is caused by too much glucose in the blood. Insulin, a hormone produced by the pancreas, controls the blood sugar level by regulating the production and storage of glucose. In diabetes, there may be a decrease in the body's ability to respond to insulin or a decrease in the insulin produced by the pancreas, leading to abnormalities in the metabolism of carbohydrates, proteins, and fats. In addition to the health implications, the condition can have a significant impact on work productivity, as employees with uncontrolled diabetes are at risk of absenteeism, reduced performance, and increased healthcare costs. While several interventions are available to manage diabetes, the most effective approach is to control blood glucose levels through a combination of lifestyle modifications and medication. Methodology: The control-theory approach involves modelling the dynamics of the system and designing a controller that regulates the system to achieve optimal performance. In the case of diabetes, the system dynamics can be described by a mathematical model of the relationship between insulin, glucose, and other variables. The controller can then be designed to keep glucose levels within a healthy range. Results: The modelling of optimal control of diabetes in the workplace using a control-theory approach has shown promising results.
The model has been able to predict the optimal dose of insulin required to maintain glucose levels within a healthy range, taking into account the individual's lifestyle, medication regimen, and other relevant factors. The approach has also been used to design interventions that can improve diabetes management in the workplace, such as regular glucose monitoring and education programmes. Conclusion: The modelling of optimal control of diabetes in the workplace using a control-theory approach has significant potential to improve diabetes management and work productivity. By using a mathematical model and a controller to regulate glucose levels, the approach can help individuals with diabetes achieve optimal health outcomes while minimizing the impact of the condition on their work performance. Further research is needed to validate the model and to develop interventions that can be implemented in the workplace.
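The kind of system dynamics described in the methodology can be illustrated with a classic linear GTT-type glucose-insulin model (Ackerman-style), where g and h are the deviations of blood glucose and insulin from their fasting levels. The rate constants and initial glucose deviation below are illustrative assumptions, not fitted patient parameters:

```python
import numpy as np

# Illustrative per-minute rate constants for the coupled linear system
#   dg/dt = -m1*g - m2*h   (glucose cleared by tissue uptake and insulin)
#   dh/dt = -m3*h + m4*g   (insulin cleared, and secreted in response to glucose)
m1, m2, m3, m4 = 0.03, 0.02, 0.025, 0.01

def simulate_gtt(g0=80.0, h0=0.0, dt=0.1, minutes=240):
    """Forward-Euler integration of the glucose deviation after a bolus."""
    steps = round(minutes / dt)
    g, h = g0, h0
    trace = [g]
    for _ in range(steps):
        dg = (-m1 * g - m2 * h) * dt
        dh = (-m3 * h + m4 * g) * dt
        g, h = g + dg, h + dh
        trace.append(g)
    return np.array(trace)

glucose = simulate_gtt()
print(f"initial deviation {glucose[0]:.0f} mg/dL -> after 4 h {glucose[-1]:.2f} mg/dL")
```

A controller would then add an insulin-dosing input to these equations and choose it to drive g back into a healthy band.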

Keywords: mathematical model, blood, insulin, pancreas, glucose

Procedia PDF Downloads 53
10310 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important area of biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, biometrics based on EEG signals also have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point to every other point within the same cluster and to all data points in the closest neighboring cluster is determined. Silhouettes thus measure both how well a data point fits the cluster it was assigned to and the separation between clusters, which renders them well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare performance across the EEG datasets. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. The results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); and (3) there was no significant difference in authentication performance among the feature sets (except PE).
Conclusion: The combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
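The clustering-plus-silhouette step can be sketched on synthetic stand-in data. The two-dimensional "entropy features" and cluster layout below are invented for illustration (the study's actual features are four entropy measures over 22 subjects); the silhouette formula is the standard s(i) = (b - a) / max(a, b):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-epoch EEG entropy features (e.g. sample
# entropy vs. spectral entropy) from two subjects; purely illustrative.
X = np.vstack([
    rng.normal([0.2, 0.3], 0.02, size=(20, 2)),
    rng.normal([0.8, 0.7], 0.02, size=(20, 2)),
])

def kmeans(X, k=2, iters=20):
    """Plain Lloyd's algorithm with farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):   # seed each new center at the farthest point
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):   # assign to nearest centroid, then recompute
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

def silhouette_score(X, labels):
    """Mean of s(i) = (b_i - a_i) / max(a_i, b_i) over all points."""
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False                      # exclude the point itself
        a = np.linalg.norm(X[same] - X[i], axis=1).mean()
        b = min(np.linalg.norm(X[labels == c] - X[i], axis=1).mean()
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

labels = kmeans(X)
score = silhouette_score(X, labels)
print(f"silhouette score = {score:.3f}")
```

A score near 1 indicates compact, well-separated clusters, i.e. features that discriminate well between subjects.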

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 269
10309 Management of Theatre with Social and Culture

Authors: Chitsuphang Ungsvanonda

Abstract:

The objective of this research is to study the government's theatre management system with regard to planning and operation, and to examine how that management responds to changes in its environment, in order to derive an appropriate model for developing a theatre management system, particularly for stage performances. The research is qualitative, based on interviews with 35 persons drawn from both purposively selected and unplanned groups.

Keywords: management, theatre, social, culture

Procedia PDF Downloads 455
10308 Energy Loss Reduction in Oil Refineries through Flare Gas Recovery Approaches

Authors: Majid Amidpour, Parisa Karimi, Marzieh Joda

Abstract:

In recent years, the release of burned undesirable by-products has become a challenging issue in the oil industry. Flaring, one of the main sources of air contamination, has detrimental and long-lasting effects on human health and is considered a substantial source of energy losses worldwide. This research studies the implications of two main flare gas recovery methods at three oil refineries in Iran, referred to as Case I, Case II and Case III in order of increasing production capacity. In the proposed methods, flare gases are converted into more valuable products before combustion in the flare networks. The first approach involves collecting, compressing and converting the flare gas to smokeless fuel that can be used in the fuel gas system of the refineries. The other scenario utilizes the flare gas as feed for the liquefied petroleum gas (LPG) production units already established in the refineries. The processes of both scenarios are simulated, and the capital investment is calculated for each. The cumulative profits of the scenarios are evaluated using the Net Present Value (NPV) method. Furthermore, a sensitivity analysis based on the total propane and butane mole fraction is carried out to make a rational comparison for the LPG production approach, and the results are illustrated for different mole fractions of propane and butane. As the mole fractions of propane and butane contained in LPG differ between summer and winter, the results for the LPG scenario are given for each season. The simulations show that the cumulative profit of the fuel gas production scenario and the LPG production rate increase with the capacity of the refineries. Moreover, the investment return time of the LPG production method first declines and then rises with increasing C3 and C4 content.
The minimum return time occurs at combined propane and butane mole fractions of 0.7, 0.6 and 0.7 in Cases I, II and III, respectively. Comparing the investment return time and the cumulative profit, fuel gas production is the superior scenario for all three case studies.
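The NPV and payback comparison can be sketched as below. All cash-flow figures and the 10% discount rate are hypothetical placeholders, not the refineries' actual economics; the point is the evaluation machinery:

```python
def npv(rate, cashflows):
    """NPV of a cash-flow series; cashflows[0] is the year-0 investment."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def payback_years(cashflows):
    """First year at which the cumulative (undiscounted) cash flow >= 0."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None   # never recovered within the horizon

# Hypothetical capex followed by eight years of annual profit (USD)
fuel_gas = [-4.0e6] + [1.5e6] * 8
lpg      = [-6.0e6] + [1.8e6] * 8

rate = 0.10
for name, cf in [("fuel gas", fuel_gas), ("LPG", lpg)]:
    print(f"{name}: NPV = {npv(rate, cf) / 1e6:.2f} MUSD, "
          f"payback = {payback_years(cf)} yr")
```

With these placeholder numbers the fuel gas scenario has both the higher NPV and the shorter payback, mirroring the ranking reported in the abstract.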

Keywords: flare gas reduction, liquefied petroleum gas, fuel gas, net present value method, sensitivity analysis

Procedia PDF Downloads 145
10307 A Holistic Approach for Technical Product Optimization

Authors: Harald Lang, Michael Bader, A. Buchroithner

Abstract:

Holistic methods covering the development process as a whole, e.g. systems engineering, have established themselves in product design. However, technical product optimization, i.e. improvements in efficiency and/or minimization of losses, is usually applied to single components of a system. Here, a holistic approach is defined on the basis of a hierarchical, systems-engineering point of view and is subsequently illustrated using the example of an electromechanical flywheel energy storage system for automotive applications.

Keywords: design, product development, product optimization, systems engineering

Procedia PDF Downloads 616
10306 Modeling of Sediment Yield and Streamflow of Watershed Basin in the Philippines Using the Soil and Water Assessment Tool Model for Watershed Sustainability

Authors: Warda L. Panondi, Norihiro Izumi

Abstract:

Sedimentation is a significant threat to the sustainability of reservoirs and their watersheds. In the Philippines, the Pulangi watershed has experienced high sediment loss, mainly due to land conversion and plantations, with critical erosion rates beyond the tolerable limit of 10 ton/ha/yr in all of its sub-basins. Given this situation, prediction of runoff volume and sediment yield is essential for realistically evaluating the country's soil conservation techniques. In this research, the Pulangi watershed was modeled using the Soil and Water Assessment Tool (SWAT) to predict the watershed basin's annual runoff and sediment yield. SWAT-CUP was utilized for calibration and validation of the model. The model was calibrated against monthly discharge data for 1990-1993 and validated for 1994-1997. The sediment yield was calibrated for 2014 and validated for 2015 because of the limited observed datasets. Uncertainty analysis and calculation of efficiency indices were accomplished through the SUFI-2 algorithm. According to the coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE) and PBIAS, the streamflow simulation performs well in both the calibration and validation periods, while the sediment yield shows satisfactory performance for both calibration and validation. This study was thereby able to identify the most critical sub-basins in severe need of soil conservation. Furthermore, it provides baseline information for the prevention of floods and landslides and serves as a useful reference for land-use policies and for watershed management and sustainability in the Pulangi watershed.
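The efficiency indices named above have compact definitions. A sketch follows with made-up monthly discharge values; the formulas are the standard NSE, KGE and PBIAS definitions, while the numbers are not the study's data:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency from correlation, variability and bias ratios."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()          # variability ratio
    beta = sim.mean() / obs.mean()         # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def pbias(obs, sim):
    """Percent bias; positive values indicate model underestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [12.0, 30.0, 55.0, 80.0, 60.0, 25.0]   # observed discharge, m3/s
sim = [10.0, 33.0, 50.0, 85.0, 58.0, 22.0]   # simulated discharge, m3/s
print(f"NSE={nse(obs, sim):.3f}  KGE={kge(obs, sim):.3f}  "
      f"PBIAS={pbias(obs, sim):.1f}%")
```

In SWAT calibration practice, higher NSE/KGE and PBIAS near zero indicate better model performance.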

Keywords: Pulangi watershed, sediment yield, streamflow, SWAT model

Procedia PDF Downloads 190
10305 Optimization of Smart Beta Allocation by Momentum Exposure

Authors: J. B. Frisch, D. Evandiloff, P. Martin, N. Ouizille, F. Pires

Abstract:

Smart Beta strategies aim to be a revolution in asset management relative to classical cap-weighted indices. These strategies allow better control of a portfolio's risk factors and an optimized asset allocation, by taking into account specific risks or by seeking to generate alpha by outperforming the indices (the 'Beta'). Among the many strategies in use, this paper focuses on four: the Minimum Variance Portfolio, the Equal Risk Contribution Portfolio, the Maximum Diversification Portfolio and the Equal-Weighted Portfolio. Their efficiency has been demonstrated under conditions such as momentum or particular market phenomena, suggesting a reconsideration of cap-weighting.
To further increase the return efficiency of these strategies, we propose to compare their strengths and weaknesses within time intervals corresponding to specific, identifiable market phases, in order to define adapted strategies for pre-specified situations.
Results are presented as performance curves of different combinations compared to a benchmark. If a combination outperforms the applicable benchmark under well-defined actual market conditions, it is preferred. It is shown that such investment 'rules', based on both historical data and the evolution of Smart Beta strategies, and implemented according to the available market data, provide optimal results with higher returns and lower risk.
Such combinations have not been fully exploited yet, which justifies the present approach aimed at identifying the relevant elements that characterize them.
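Two of the four weighting schemes have simple closed forms that can be sketched directly. The 3-asset covariance matrix below is an illustrative assumption; note that the unconstrained minimum-variance weights w ∝ Σ⁻¹·1 may go negative, so a long-only version would require a constrained solver:

```python
import numpy as np

# Illustrative annualized covariance matrix for three assets
cov = np.array([
    [0.040, 0.006, 0.004],
    [0.006, 0.090, 0.010],
    [0.004, 0.010, 0.160],
])

def equal_weight(n):
    """Equal-Weighted Portfolio: 1/n in every asset."""
    return np.full(n, 1.0 / n)

def min_variance(cov):
    """Closed-form fully-invested minimum-variance weights: w ∝ Σ⁻¹·1."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(cov))
    w = inv @ ones
    return w / w.sum()          # normalize so the weights sum to one

def port_vol(w, cov):
    """Portfolio volatility sqrt(wᵀ Σ w)."""
    return float(np.sqrt(w @ cov @ w))

w_ew, w_mv = equal_weight(3), min_variance(cov)
print(f"EW vol = {port_vol(w_ew, cov):.3%}, MV vol = {port_vol(w_mv, cov):.3%}")
```

By construction, the minimum-variance portfolio's volatility is never above the equal-weighted portfolio's, since the equal-weighted allocation is one feasible fully-invested portfolio.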

Keywords: smart beta, minimum variance portfolio, equal risk contribution portfolio, maximum diversification portfolio, equal weighted portfolio, combinations

Procedia PDF Downloads 328
10304 Fast Robust Switching Control Scheme for PWR-Type Nuclear Power Plants

Authors: Piyush V. Surjagade, Jiamei Deng, Paul Doney, S. R. Shimjith, A. John Arul

Abstract:

In sophisticated and complex systems such as nuclear power plants, maintaining stability in the presence of uncertainties and disturbances while obtaining a fast dynamic response is one of the most challenging problems. Thus, to ensure the satisfactory and safe operation of nuclear power plants, this work proposes a new fast, robust, optimal switching control strategy for pressurized water reactor-type nuclear power plants. The proposed strategy guarantees a substantial degree of robustness, a fast dynamic response over the entire operational envelope, and optimal performance during nominal operation of the plant. To achieve this, a bank of controllers is designed: a baseline proportional-integral-derivative (PID) controller, an optimal linear quadratic Gaussian (LQG) controller, and a robust adaptive L1 controller, each intended for a distinct task in a specific situation. At any instant, the most suitable controller from the bank is selected by a switching logic unit that monitors the health of the nuclear power plant and its transients. The proposed switching control strategy optimizes overall performance and increases operational safety and efficiency. Simulation studies performed under various uncertainties and disturbances demonstrate the applicability and effectiveness of the proposed strategy compared with some conventional control techniques.
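The switching-logic idea of picking one controller from the bank according to the monitored plant condition can be sketched as below. The monitored quantities, thresholds and the rule itself are invented for illustration; the paper's actual logic monitors plant health and transients with its own tuned criteria:

```python
# Toy switching-logic unit choosing from a bank of three controllers.
def select_controller(power_error: float, transient_detected: bool) -> str:
    """Return the name of the controller that should drive the plant."""
    if transient_detected:
        return "L1-adaptive"   # prioritize robustness during transients
    if abs(power_error) < 0.02:
        return "LQG"           # optimal performance near the nominal point
    return "PID"               # reliable baseline elsewhere

for err, transient in [(0.005, False), (0.10, False), (0.05, True)]:
    print(f"error={err:+.3f}, transient={transient} "
          f"-> {select_controller(err, transient)}")
```

In a real plant the switch would also need bumpless-transfer handling so the control signal stays continuous across controller changes.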

Keywords: switching control, robust control, optimal control, nuclear power control

Procedia PDF Downloads 115