Search results for: model order formulation
662 Food Security in India: A Case Study of Kandi Region of Punjab
Authors: Savita Ahlawat, Dhian Kaur
Abstract:
Banishing hunger from the face of the earth has been a frequently expressed goal in various international, national and regional level conferences since 1974. Providing food security has become an important issue across the world, particularly in developing countries. In a developing country like India, where the growth rate of the population is higher than that of food grain production, food security is a question of great concern. According to the International Food Policy Research Institute's Global Hunger Index, 2011, India ranks 67 of the 81 countries of the world with the worst food security status. After the Green Revolution, India became a food surplus country. Its production has increased from 74.23 million tonnes in 1966-67 to 257.44 million tonnes in 2011-12. But after achieving self-sufficiency in food during the last three decades, the country is now facing new challenges due to increasing population, climate change, and stagnation in farm productivity. Therefore, the main objective of the present paper is to examine the food security situation at the national level and further to explain the paradox of food insecurity in a food surplus state of India, i.e., Punjab, at the micro level. In order to achieve the said objectives, secondary data collected from the Ministry of Agriculture and the Agriculture Department of Punjab State were analyzed. The results of the study showed that despite having surplus food production, the country is still facing a food insecurity problem at the micro level. Within the Kandi belt of Punjab State, the area adjacent to the plains is food secure, while the area along the hills falls in the food insecure zone. The present paper is divided into the following three sections: (i) Introduction, (ii) Analysis of the food security situation at the national level as well as the micro level (Kandi belt of Punjab State), (iii) Concluding Observations.
Keywords: Availability, consumption, food security, poverty.
661 High School STEM Curriculum and Example of Laboratory Work That Shows How Microcomputers Can Help in Understanding of Physical Concepts
Authors: Jelena Slugan, Ivica Ružić
Abstract:
We are witnessing the rapid development of technologies that change the world around us. However, curricula and teaching processes are often slow to adapt to the change; it takes time, money and expertise to implement technology in the classroom. Therefore, the University of Split, Croatia, partnered with a local school, Marko Marulić High School, and created the project "Modern competence in modern high schools", as part of which five different curricula for STEM areas were developed. One of the curricula involves combining information technology with physics. The main idea was to teach students how to use different circuits and microcomputers to explore nature and physical phenomena. As a result, using electrical circuits, students are able to recreate in the classroom the phenomena that they observe every day in their environment. So far, high school students have had very little opportunity to perform experiments independently, and especially, those physics experiments did not involve ICT. Therefore, this project is of great importance, because the students will finally get a chance to develop themselves in accordance with modern technologies. This paper presents some new methods of teaching physics that will help students to develop experimental skills through the study of the deterministic nature of physical laws. Students will learn how to formulate hypotheses, model physical problems using electronic circuits and evaluate their results. While doing that, they will also acquire useful problem-solving skills.
Keywords: ICT in physics, curriculum, laboratory activities, STEM.
660 Effect of Modified Atmosphere Packaging and Storage Temperatures on Quality of Shelled Raw Walnuts
Authors: M. Javanmard
Abstract:
This study was aimed at analyzing the effects of modified atmosphere packaging (MAP) and storage conditions on the quality of packaged fresh walnut kernels. A central composite design was used for evaluating the effect of oxygen (0–10%), carbon dioxide (0–10%), and temperature (4–26 °C) on the qualitative characteristics of walnut kernels. Also, the response surface technique was used to find the optimal conditions for the interactive effects of the factors, as well as to estimate the best process conditions using the least amount of testing. The measured qualitative parameters were: peroxide index, color, weight loss, mould and yeast counts, and sensory evaluation. The results showed that the fitted model for peroxide index, color, weight loss, and sensory evaluation is significant (p < 0.001), such that an increase in temperature causes the peroxide value, color variation, and weight loss to increase and reduces the overall acceptability of the walnut kernels. An increase in oxygen percentage caused the color variation level and peroxide value to increase and resulted in lower overall acceptability of the walnuts. An increase in CO2 percentage caused the peroxide value to decrease, but did not significantly affect the other indices (p ≥ 0.05). Mould and yeast were not found in any samples. The optimal packaging conditions to achieve maximum quality of walnuts are: 1.46% oxygen, 10% carbon dioxide, and a temperature of 4 °C.
Keywords: Shelled walnut, MAP, quality, storage temperature.
659 A Rule-based Approach for Anomaly Detection in Subscriber Usage Pattern
Authors: Rupesh K. Gopal, Saroj K. Meher
Abstract:
In this report we present a rule-based approach to detect anomalous telephone calls. The method described here uses subscriber usage CDR (call detail record) data sampled over two observation periods: a study period and a test period. The study period contains call records of customers' non-anomalous behaviour. Customers are first grouped according to their similar usage behaviour (e.g., average number of local calls per week). For customers in each group, we develop a probabilistic model to describe their usage. Next, we use maximum likelihood estimation (MLE) to estimate the parameters of the calling behaviour. Then we determine thresholds by calculating the acceptable change within a group. MLE is used on the data in the test period to estimate the parameters of the calling behaviour. These parameters are compared against the thresholds. Any deviation beyond the threshold is used to raise an alarm. This method has the advantage of identifying local anomalies, as compared to techniques which identify global anomalies. The method is tested on 90 days of study data and 10 days of test data of telecom customers. For medium to large deviations in the data in the test window, the method is able to identify 90% of anomalous usage with less than a 1% false alarm rate.
Keywords: Subscription fraud, fraud detection, anomaly detection, maximum likelihood estimation, rule-based systems.
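A minimal sketch of the group-model-plus-threshold idea described in this abstract, not taken from the paper itself: daily call counts for one usage group are fitted with a simple Gaussian model, the acceptable change of the test-window mean is set to three standard errors (an illustrative rule), and a deviation beyond it raises an alarm. The window lengths, call rates, and seed are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Study period: 90 days of daily call counts for one usage group (non-anomalous).
study = rng.poisson(lam=12, size=90).astype(float)

# MLE of a Gaussian model for the group's calling behaviour.
mu_hat, sigma_hat = study.mean(), study.std(ddof=0)

# Test period: 10 days, with the last 3 days showing inflated usage.
test = np.concatenate([rng.poisson(12, 7), rng.poisson(40, 3)]).astype(float)

# Acceptable change of the test-window mean: 3 standard errors (illustrative choice).
threshold = 3.0 * sigma_hat / np.sqrt(len(test))

# Re-estimate the behaviour in the test window and compare against the threshold.
mu_test = test.mean()
if abs(mu_test - mu_hat) > threshold:
    print(f"ALARM: test mean {mu_test:.1f} deviates from study mean {mu_hat:.1f}")
else:
    print("Usage within acceptable range")
```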
658 A Temporal QoS Ontology for ERTMS/ETCS
Authors: Marc Sango, Olimpia Hoinaru, Christophe Gransart, Laurence Duchien
Abstract:
Ontologies offer a means for representing and sharing information in many domains, particularly in complex domains. For example, they can be used for representing and sharing the information of a System Requirement Specification (SRS) of complex systems, like the SRS of ERTMS/ETCS, which is written in natural language. Since this system is a real-time and critical system, generic ontologies, such as OWL and generic ERTMS ontologies, provide minimal support for modeling the temporal information omnipresent in these SRS documents. To support the modeling of temporal information, one of the challenges is to enable the representation of dynamic features evolving in time within a generic ontology with a minimal redesign of it. The separation of temporal information from other information can help to predict system runtime operation and to properly design and implement it. In addition, it is helpful to provide reasoning and querying techniques to reason over and query the temporal information represented in the ontology in order to detect potential temporal inconsistencies. To address this challenge, we propose a lightweight 3-layer temporal Quality of Service (QoS) ontology for representing, reasoning over and querying temporal and non-temporal information in a complex domain ontology. Representing QoS entities in separate layers can clarify the distinction between the non-QoS entities and the QoS entities in an ontology. The upper generic layer of the proposed ontology provides an intuitive knowledge of domain components, especially ERTMS/ETCS components. The separation of the intermediate QoS layer from the lower QoS layer allows us to focus on specific QoS characteristics, such as temporal or integrity characteristics. In this paper, we focus on temporal information that can be used to predict system runtime operation. To evaluate our approach, an example of the proposed domain ontology for the handover operation, as well as a reasoning rule over temporal relations in this domain-specific ontology, are presented.
Keywords: System Requirement Specification, ERTMS/ETCS, Temporal Ontologies, Domain Ontologies.
657 Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise
Authors: J. P. Dubois, Omar M. Abdul-Latif
Abstract:
Support Vector Machine (SVM) is a statistical learning tool built on the concept of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments, in the form of Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive color Gaussian noise (ACGN). The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to a conventional optimal model-based detector for binary signaling driven by binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially for low SNR (signal-to-noise ratio) ranges. For large SNR, the performance of the SVM was similar to that of the classical detectors. However, the convergence between SVM and maximum likelihood detection occurred at a higher SNR as the noise environment became more hostile.
Keywords: Colour noise, Doppler shift, innovation filter, least square-support vector machine, matched filter, Rayleigh fading, Wiener filter.
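To make the detection setup concrete, here is a toy sketch, not the least-square SVM formulation or the fading/colour-noise channel models used in the paper: BPSK symbols over a plain AWGN channel, a scikit-learn SVM classifier trained on noisy received samples, and its BER compared against a simple sign-threshold (matched-filter-like) detector. The SNR definition, RBF kernel, and sample sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def bpsk_awgn(n, snr_db):
    """Generate BPSK bits and the corresponding noisy received samples."""
    bits = rng.integers(0, 2, n)
    symbols = 2 * bits - 1                            # BPSK mapping: {0,1} -> {-1,+1}
    noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))    # illustrative SNR convention
    received = symbols + noise_std * rng.normal(size=n)
    return bits, received

snr_db = 2
train_bits, train_rx = bpsk_awgn(2000, snr_db)
test_bits, test_rx = bpsk_awgn(20000, snr_db)

# SVM detector trained on noisy received samples.
svm = SVC(kernel="rbf").fit(train_rx.reshape(-1, 1), train_bits)
ber_svm = np.mean(svm.predict(test_rx.reshape(-1, 1)) != test_bits)

# Classical threshold detector: decide bit 1 if the received sample is positive.
ber_threshold = np.mean((test_rx > 0).astype(int) != test_bits)

print(f"BER  SVM: {ber_svm:.4f}   threshold detector: {ber_threshold:.4f}")
```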
656 Students’ Level of Participation, Critical Thinking, Types of Action and Influencing Factors in Online Forum Environment
Authors: N. I. Bazid, I. N. Umar
Abstract:
Due to the advancement of Internet technology, online learning is widely used in higher education institutions. Online learning offers several means of communication, including online forums. Through an online forum, students and instructors are able to discuss and share their knowledge and expertise without having to attend the face-to-face, ordinary classroom session. The purposes of this study are to analyze the students’ levels of participation and critical thinking, their types of action, and the factors influencing their participation in an online forum. A total of 41 postgraduate students undertaking a course in educational technology at a public university in Malaysia were involved in this study. In this course, the students participated in a weekly online forum as part of the course requirements. Based on the log data file extracted from the online forum, the students’ types of actions (view, add, update, delete posts) and their levels of participation (passive, moderate or active) were identified. In addition, the messages posted in the forum were analyzed to gauge their level of critical thinking. Meanwhile, the factors that might influence their online forum participation were measured using a 24-item questionnaire. Based on the log data, a total of 105 posts were sent by the participants. In addition, the findings show that (i) the majority of the students are moderate participants, with an average of two to three posts per person, and (ii) viewing posts is the most frequent type of action (85.1%), followed by adding posts (9.7%). Furthermore, based on the posts they made, the most frequent type of critical thinking observed was justification (50 inputs, or 19.0%), followed by linking ideas and interpretation (47 inputs, or 18%), and novelty (38 inputs, or 14.4%). The findings indicate that an online forum allows for social interaction and can be used to measure the students’ critical thinking skills. In order to achieve this, monitoring students’ activities in the online forum is recommended.
Keywords: Critical thinking, learning management system, level of online participation, online forum.
655 A Cross-Cultural Approach for Communication with Biological and Non-Biological Intelligences
Authors: Thomas Schalow
Abstract:
This paper posits the need to take a cross-cultural approach to communication with non-human cultures and intelligences in order to meet the following three imminent contingencies: communicating with sentient biological intelligences, communicating with extraterrestrial intelligences, and communicating with artificial super-intelligences. The paper begins with a discussion of how intelligence emerges. It disputes some common assumptions we maintain about consciousness, intention, and language. The paper next explores cross-cultural communication among humans, including non-sapiens species. The next argument made is that we need to become much more serious about communicating with the non-human, intelligent life forms that already exist around us here on Earth. There is an urgent need to broaden our definition of communication and reach out to the other sentient life forms that inhabit our world. The paper next examines the science and philosophy behind CETI (communication with extraterrestrial intelligences) and how it has proven useful, even in the absence of contact with alien life. However, CETI’s assumptions and methodology need to be revised and based on the cross-cultural approach to communication proposed in this paper if we are truly serious about finding and communicating with life beyond Earth. The final theme explored in this paper is communication with non-biological super-intelligences using a cross-cultural communication approach. This will present a serious challenge for humanity, as we have never been truly compelled to converse with other species, and our failure to seriously consider such intercourse has left us largely unprepared to deal with communication in a future that will be mediated and controlled by computer algorithms. Fortunately, our experience dealing with other human cultures can provide us with a framework for this communication. The basic assumptions behind intercultural communication can be applied to the many types of communication envisioned in this paper if we are willing to recognize that we are in fact dealing with other cultures when we interact with other species, alien life, and artificial super-intelligence. The ideas considered in this paper will require a new mindset for humanity, but a new disposition will prepare us to face the challenges posed by a future dominated by artificial intelligence.
Keywords: Artificial intelligence, CETI, communication, culture, language.
654 Creating Shared Value: A Paradigm Shift from Corporate Social Responsibility to Creating Shared Value
Authors: Bolanle Deborah Motilewa, E.K. Rowland Worlu, Gbenga Mayowa Agboola, Marvellous Aghogho Chidinma Gberevbie
Abstract:
Businesses operating in the modern business world are faced with varying challenges, amongst which is the need to ensure that they are performing their societal function of being responsible in the society in which they operate. This responsibility to society is generally termed corporate social responsibility. For many years, the practice of corporate social responsibility (CSR) was solely philanthropic, where organizations gave ‘charity’ or ‘alms’ to society, without any link to the organization’s mission and objectives. However, there has been a shift in the application of CSR from an act of philanthropy to a strategy with a business model, engaged in by organizations to create a win-win situation of performing their societal obligation whilst simultaneously performing their economic obligation. In more recent times, the term has moved from CSR to creating shared value, which simply refers to corporate policies and practices that enhance the competitiveness of a business organization while simultaneously advancing social and economic conditions in the communities in which the company operates. Creating shared value has more recently found greater meaning in underdeveloped countries, which face deep societal challenges that businesses can solve whilst creating economic value. This study thus reviews the literature on CSR, conceptualizing the shift to creating shared value and finally viewing its potential significance in Africa’s development.
Keywords: Corporate social responsibility, shared value, Africapitalism.
653 A Novel VLSI Architecture for Image Compression Model Using Low Power Discrete Cosine Transform
Authors: Vijaya Prakash A. M., K. S. Gurumurthy
Abstract:
In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant reduction of image quality. This paper describes a low-complexity hardware architecture of the Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is the method used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional Discrete Cosine Transform blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is modeled using MATLAB code. The VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180 nm standard cells. The simulation is done using ModelSim. The simulation results from MATLAB and Verilog HDL are compared. Detailed analysis of power and area was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Experts Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).
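The row-transform / transposition / column-transform structure mentioned above can be illustrated with a short software model. This is a hedged NumPy/SciPy sketch of the separable 2D DCT and its inverse on an 8x8 block, not the paper's Verilog architecture; the sample block values are arbitrary.

```python
import numpy as np
from scipy.fft import dct, idct

# 8x8 block of "pixel" values (arbitrary example data).
block = np.arange(64, dtype=float).reshape(8, 8)

# 2D DCT built from two 1-D DCT passes with a transposition in between,
# mirroring the row-transform / transpose-memory / column-transform structure.
rows = dct(block, type=2, norm="ortho", axis=1)        # 1-D DCT on rows
coeffs = dct(rows.T, type=2, norm="ortho", axis=1).T   # transpose, 1-D DCT again

# Inverse path (IDCT) reconstructs the original block.
rows_back = idct(coeffs, type=2, norm="ortho", axis=1)
recon = idct(rows_back.T, type=2, norm="ortho", axis=1).T

print(np.allclose(recon, block))  # True: the block is recovered exactly
```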
652 Analysis of the Internal Mechanical Conditions in the Lower Limb Due to External Loads
Authors: Kent Salomonsson, Xuefang Zhao, Sara Kallin
Abstract:
Human soft tissue is loaded and deformed by any activity, an effect known as a stress-strain relationship, and is often described by a load and tissue elongation curve. Several advances have been made in the fields of biology and mechanics of soft human tissue. However, there is limited information available on in vivo tissue mechanical characteristics and behavior. Reliable mechanical properties of human soft tissue cannot be extrapolated from, e.g., animal testing. Thus, there is a need for non-invasive methods to analyze the mechanical characteristics of soft human tissue. In the present study, the internal mechanical conditions of the lower limb, which is subject to an external load, are studied by use of the finite element method. A detailed finite element model of the lower limb is made possible by use of MRI scans. Skin, fat, bones, fascia and muscles are represented separately, and their material properties are obtained from the literature. Previous studies have largely addressed macroscopic deformation features, e.g., indentation depth. However, the detail in which the internal anatomical features have been modeled does not reveal the critical internal strains that may induce hypoxia and/or eventual tissue damage. The results of the present study reveal that lumped material models, i.e., averaging of the material properties for the different constituents, do not capture regions of critical strains, in contrast to more detailed models.
Keywords: FEM, human soft tissue, indentation, properties.
651 Earth Station Neural Network Control Methodology and Simulation
Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah
Abstract:
Renewable energy resources are inexhaustible and clean compared with conventional resources. They are also used to supply regions with no grid, no telephone lines, and often difficult accessibility by common transport. Satellite earth stations located in remote areas are among the most important applications of renewable energy. Neural control is a branch of the general field of intelligent control, which is based on the concept of artificial intelligence. This paper presents the mathematical modeling of a satellite earth station power system, which is required for simulating the system. Aswan is selected as the site under consideration because it is a region rich in solar energy. The complete power system is simulated using MATLAB–SIMULINK. An artificial neural network (ANN) based model has been developed for the optimum operation of the earth station power system. The ANN is trained using back propagation with the Levenberg–Marquardt algorithm. The best validation performance is obtained for the minimum mean square error. The regression between the network output and the corresponding target is equal to 96%, which indicates high accuracy. The neural network controller architecture gives satisfactory results with a small number of neurons, and is hence better in terms of the memory and time required for NNC implementation. The results indicate that the proposed control unit using an ANN can be successfully used for controlling the satellite earth station power system.
Keywords: Satellite, neural network, MATLAB, power system.
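As a rough, hedged illustration of the training idea mentioned above, the sketch below fits a tiny one-hidden-layer network with a Levenberg–Marquardt least-squares solver (SciPy's "lm" method), standing in for the MATLAB toolbox used in the paper. The irradiance-to-power data, network size, and seed are invented for the example, and the printed correlation plays the role of the regression metric.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Synthetic stand-in data: solar irradiance (kW/m2) -> available PV power (kW).
x = rng.uniform(0.1, 1.0, 200)
y = 4.5 * x + 0.3 * np.sin(6 * x) + rng.normal(0, 0.05, 200)

n_hidden = 5

def unpack(p):
    w1 = p[:n_hidden]; b1 = p[n_hidden:2 * n_hidden]
    w2 = p[2 * n_hidden:3 * n_hidden]; b2 = p[-1]
    return w1, b1, w2, b2

def predict(p, x):
    w1, b1, w2, b2 = unpack(p)
    hidden = np.tanh(np.outer(x, w1) + b1)   # one hidden tanh layer
    return hidden @ w2 + b2

def residuals(p):
    return predict(p, x) - y

# Levenberg-Marquardt fit of the network weights (scipy's "lm" solver).
p0 = rng.normal(0, 0.5, 3 * n_hidden + 1)
sol = least_squares(residuals, p0, method="lm")

pred = predict(sol.x, x)
r = np.corrcoef(pred, y)[0, 1]
print(f"regression R between network output and target: {r:.3f}")
```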
650 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on the future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful to predict hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) will be presented, and results and performance metrics discussed.
Keywords: Time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning.
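A hedged sketch of the kind of tree-based quantile forecasting named above, using scikit-learn's gradient boosting in place of the Azure ML "Fast Forest Quantile" module described in the paper. The synthetic hourly features (hour, temperature, humidity) and the load-generating formula are assumptions made only to show the median/upper-quantile forecasting pattern.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# Synthetic hourly records: [hour of day, temperature, humidity] -> consumption (kW).
n = 2000
hour = rng.integers(0, 24, n)
temp = 15 + 10 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 2, n)
humidity = rng.uniform(30, 90, n)
load = 50 + 3 * np.abs(temp - 21) + 0.1 * humidity + rng.normal(0, 4, n)

X = np.column_stack([hour, temp, humidity])

# Boosted trees for the median and an upper quantile of the load distribution.
median_model = GradientBoostingRegressor(loss="quantile", alpha=0.5).fit(X, load)
upper_model = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X, load)

x_new = np.array([[18, 28.0, 60.0]])  # 6 pm, 28 degC, 60% humidity
print("median forecast:", median_model.predict(x_new)[0])
print("90th-percentile forecast:", upper_model.predict(x_new)[0])
```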
649 Jatropha curcas L. Oil Selectivity in Froth Flotation
Authors: André C. Silva, Izabela L. A. Moraes, Elenice M. S. Silva, Carlos M. Silva Filho
Abstract:
In Brazil, most soils are acidic and low in the essential nutrients required for the growth and development of plants, making fertilizers essential for agriculture. As the biggest producer of soy in the world and a major producer of coffee, sugar cane and citrus fruits, Brazil is a large consumer of phosphate. Brazilian phosphate ores are predominantly from igneous rocks showing a complex mineralogy, associated with carbonatites and oxides, typically of iron, silicon and barium. The adopted industrial concentration circuit for this type of ore is a mix between magnetic separation (both low and high field) to remove the magnetic fraction and a froth flotation circuit composed of a reverse flotation of apatite (barite flotation) followed by a direct flotation circuit (rougher, cleaner and scavenger circuit). Since the 1970s, fatty acids obtained from vegetable oils have been widely used as lower-cost collectors in apatite froth flotation. This is a very effective approach for the apatite family of minerals, given that this type of collector is both selective and efficient (high recovery). This paper presents Jatropha curcas L. oil (JCO) as a renewable and sustainable source of fatty acids with high selectivity in the froth flotation of apatite. JCO is considerably rich in fatty acids such as linoleic, oleic and palmitic acid. The experimental campaign involved 216 tests using a modified Hallimond tube and two different minerals (apatite and quartz). In order to be used as a collector, the oil was saponified. The results were compared with those of the synthetic collector Fotigam 5806, produced by Clariant, which is composed mainly of soy oil. JCO showed the highest selectivity for apatite flotation with cold saponification at pH 8 and a concentration of 2.5 mg/L. In this case, the mineral recovery was around 95%.
Keywords: Froth flotation, Jatropha curcas L., microflotation, selectivity.
648 Experimental Investigation of Heat Transfer on Vertical Two-Phased Closed Thermosyphon
Authors: M. Hadi Kusuma, Nandy Putra, Anhar Riza Antariksawan, Ficky Augusta Imawan
Abstract:
A heat pipe is considered for application as a passive system to remove the residual heat generated from a reactor core when an incident occurs, or from a spent fuel storage pool. The objectives are to characterize the heat transfer phenomena and the performance of the heat pipe, and to serve as a model for a large heat pipe to be applied as a passive cooling system for a nuclear spent fuel storage pool. In this experiment, a wickless heat pipe, or two-phase closed thermosyphon (TPCT), is used. The heat flux was varied from 611.24 W/m2 to 3291.29 W/m2, the filling ratio from 45% to 70%, and the initial pressure from -62 to -74 cmHg. Demineralized water is used as the working fluid in the TPCT. The results showed that increasing the heat load leads to increased evaporation of the working fluid. The optimum filling ratio was obtained at 60% of the TPCT evaporator volume, and the initial pressure variation gave different TPCT wall temperature characteristics. The TPCT showed the best performance with a 60% filling ratio and can be considered for application as a passive residual heat removal system or passive cooling system for a spent fuel storage pool.
Keywords: Two-phase closed thermosyphon, heat pipe, passive cooling, spent fuel storage pool.
647 A Modular On-line Profit Sharing Approach in Multiagent Domains
Authors: Pucheng Zhou, Bingrong Hong
Abstract:
How to coordinate the behaviors of agents through learning is a challenging problem within multi-agent domains. Because of its complexity, recent work has focused on how coordinated strategies can be learned. Here we are interested in using reinforcement learning techniques to learn the coordinated actions of a group of agents, without requiring explicit communication among them. However, traditional reinforcement learning methods are based on the assumption that the environment can be modeled as a Markov Decision Process, which usually cannot be satisfied when multiple agents coexist in the same environment. Moreover, to effectively coordinate each agent's behavior so as to achieve the goal, it is necessary to augment the state of each agent with information about the other existing agents. However, as the number of agents in a multiagent environment increases, the state space of each agent grows exponentially, which causes a combinatorial explosion problem. Profit sharing is one of the reinforcement learning methods that allow agents to learn effective behaviors from their experiences even within non-Markovian environments. In this paper, to remedy the drawback of the original profit sharing approach, which needs much memory to store each state-action pair during the learning process, we first address a kind of on-line rational profit sharing algorithm. Then, we integrate the advantages of a modular learning architecture with the on-line rational profit sharing algorithm, and propose a new modular reinforcement learning model. The effectiveness of the technique is demonstrated using the pursuit problem.
Keywords: Multi-agent learning, reinforcement learning, rational profit sharing, modular architecture.
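For readers unfamiliar with the basic credit-assignment rule behind profit sharing, here is a hedged single-agent toy sketch, not the modular on-line rational variant proposed in the paper: when an episode reaches a reward, the reward is distributed backwards over the visited state-action pairs with geometrically decreasing credit, and actions are then chosen greedily on the accumulated credit. The grid-walk task, decay factor, and exploration rates are illustrative assumptions.

```python
import random
from collections import defaultdict

# Accumulated credit for state-action pairs, updated only when a reward is obtained.
weights = defaultdict(float)

def profit_sharing_update(episode, reward, decay=0.3):
    """Distribute the episode reward backwards with geometrically decreasing credit.

    episode: list of (state, action) pairs ending at the rewarded step.
    decay:   credit ratio between consecutive steps (illustrative value).
    """
    credit = reward
    for state, action in reversed(episode):
        weights[(state, action)] += credit
        credit *= decay

def select_action(state, actions, epsilon):
    """Greedy selection on accumulated credit, with epsilon exploration."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: weights[(state, a)])

# Toy usage: an agent walks states 0..3 and is rewarded on reaching state 3.
actions = ["left", "right"]
for _ in range(200):
    state, episode = 0, []
    while state != 3 and len(episode) < 20:
        action = select_action(state, actions, epsilon=0.5)  # high exploration while learning
        episode.append((state, action))
        state = state + 1 if action == "right" else max(0, state - 1)
    if state == 3:
        profit_sharing_update(episode, reward=1.0)

print(select_action(0, actions, epsilon=0.0))  # should typically print "right"
```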
646 The Effect of Kaizen Implementation on Employees’ Affective Attitude in Textile Company in Ethiopia
Authors: Meseret Teshome
Abstract:
This study has the objective of assessing the effect of kaizen (5S, Muda elimination and Quality Control Circles (QCC)) on employees’ affective attitude (job satisfaction, commitment and job stress) in Kombolcha Textile Share Company. A conceptual model was developed to describe the relationship between Kaizen and Employees’ Affective Attitude (EAA) factors. The three factors of Employee Affective Attitude were measured using a questionnaire derived from other validated questionnaires. For the data collection of this study, questionnaires, unstructured interviews, written documents and direct observations were used. To analyze the data, SPSS and Microsoft Excel were used. In addition, the internal consistency of similar items in the questionnaire instrument was measured for their equivalence by using Cronbach’s alpha test. In this study, the effect of 5S, Muda elimination and QCC on job satisfaction, commitment and job stress in Kombolcha Textile Share Company is assessed, and factors that reduce employees’ job satisfaction with respect to kaizen implementation are identified. The total averages of means from the questionnaire are 3.1 for job satisfaction, 4.31 for job commitment and 4.2 for job stress. Results from the interviews and secondary data show that kaizen implementation has an effect on EAA. In general, based on the thesis results, it was concluded that kaizen (5S, Muda elimination and QCC) has a positive effect on improving EAA factors at KTSC. Finally, recommendations for improvement are given based on the results.
Keywords: Kaizen, job satisfaction, job commitment, job stress.
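Since the abstract relies on Cronbach's alpha to check the internal consistency of the questionnaire items, a small, hedged sketch of how that statistic is computed is given below; the respondent scores are invented Likert-type values, not data from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each questionnaire item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 6 respondents answering 4 Likert-type items on a 1-5 scale.
scores = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5],
          [3, 4, 3, 3]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```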
645 Thailand Throne Hall Architecture in the Grand Palace in the Early Days of Ratthanakosin Era
Authors: Somchai Seviset, Lin Jian Qun
Abstract:
The Amarindra-vinitchai-mahaisuraya Bhiman throne hall is one of the most significant throne halls in the Grand Palace in the Ratthanakosin city, situated in Bangkok, Thailand. It belongs to the first group of throne halls, built in order to serve as a place for meetings and for performing state affairs and royal duties, a role it has kept until the present time. The structure and pattern of the architectural design, including the decoration and interior design of the throne hall, clearly exhibit and convey the status of the king in the context of Thai society in the early period of the Ratthanakosin era. According to the tradition of ruling the kingdom in absolute monarchy, which had been in place since the Ayutthaya era (A.D. 1350-1767), the king was deemed a Deva Raja, the highest power and authority over the kingdom, and the greatest emperor of the universe (Chakkravatin). The architectural design adopted the concept of “Prasada”, or Viman, which served as the dwelling place of the gods and was presented in the form of “Thai traditional architecture”. The interior design of the throne hall was conceived as heaven and the centre of the universe, in line with the cosmological beliefs of ancient people described in the scripture Tribhumikatha (Tri Bhumi), written by Phra Maha Thamma Raja (Phraya Lithai) of the Sukhothai era (A.D. 1347-1368). According to this belief, the throne hall was designed to represent Mount Meru, the centre of the universe. At the top of Mount Meru is situated the Viman, the dwelling place of Indra, the king of the gods according to the idea of Deva Raja (the god-king avatar), who at the same time exists as the king of the universe.
Keywords: Amarindra-vinitchai-mahaisuraya Bhiman throne hall, throne hall architecture, grand palace, Thai traditional architecture, Ratthanakosin era.
644 Variations of Body Mass Index with Age in Masters Athletes (World Masters Games)
Authors: Walsh Joe, Climstein Mike, Heazlewood Ian Timothy, Burke Stephen, Kettunen Jyrki, Adams Kent, DeBeliso Mark
Abstract:
Whilst there is growing evidence that activity across the lifespan is beneficial for improved health, there are also many changes involved with the aging process and subsequently the potential for reduced indices of health. The nexus between health, physical activity and aging is complex and has raised much interest in recent times due to the realization that a multifaceted approach is necessary in order to counteract a growing obesity epidemic. By investigating age-based trends within a population adhering to competitive sport at older ages, further insight might be gleaned to assist in understanding one of the many factors influencing this relationship. BMI was derived using data gathered on a total of 6,071 masters athletes (51.9% male, 48.1% female) aged 25 to 91 years (mean = 51.5, s = ±9.7), competing at the Sydney World Masters Games (2009). Using linear and loess regression, it was demonstrated that the usual tendency for the prevalence of higher BMI to increase with age was reversed in the sample. This reversal was repeated for both the male-only and female-only subsets of the sample participants, indicating the possibility of an improved prevalence of BMI with increasing age for the sample as a whole and for these individual subgroups. This evidence of improved classification in one index of health (reduced BMI) for masters athletes (when compared to the general population) implies that there are either improved levels of this index of health with aging due to adherence to sport, or possibly that the reduced BMI is advantageous and contributes to this cohort adhering (or being attracted) to masters sport at older ages. Demonstration of this proportionately under-investigated World Masters Games population having an improved relationship between BMI and increasing age over the general population is of particular interest in the context of the measures being taken globally to curb an obesity epidemic.
Keywords: Aging, masters athlete, Quetelet Index, sport.
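The Quetelet Index referred to in the keywords is simply body mass divided by height squared. The hedged sketch below computes it for a handful of invented athlete records and fits a straight line of BMI against age, mirroring in miniature, and with made-up numbers, the linear-regression trend analysis described in the abstract.

```python
import numpy as np

def bmi(mass_kg, height_m):
    """Quetelet Index / Body Mass Index: mass (kg) divided by height squared (m^2)."""
    return mass_kg / height_m ** 2

# Illustrative (synthetic) athlete records: (age, mass kg, height m).
athletes = [(30, 78, 1.80), (45, 74, 1.78), (55, 72, 1.76), (68, 70, 1.75), (75, 69, 1.74)]
ages = np.array([a for a, _, _ in athletes], dtype=float)
bmis = np.array([bmi(m, h) for _, m, h in athletes])

# Linear regression of BMI on age; a negative slope means BMI decreases with age.
slope, intercept = np.polyfit(ages, bmis, deg=1)
print(f"BMI = {intercept:.1f} + {slope:.3f} * age")
```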
643 A Coupled Extended-Finite-Discrete Element Method: On the Different Contact Schemes between Continua and Discontinua
Authors: Shervin Khazaeli, Shahab Haj-zamani
Abstract:
Recently, advanced geotechnical engineering problems related to soil movement, particle loss, and the modeling of local failure (i.e., discontinua), as well as the modeling of in-contact structures (i.e., continua), have been of great interest among researchers. The aim of this research is to meet the requirements with respect to the modeling of the above-mentioned two different domains simultaneously. To this end, a coupled numerical method is introduced based on the Discrete Element Method (DEM) and the eXtended Finite Element Method (X-FEM). In the coupled procedure, DEM is employed to capture the interactions and relative movements of soil particles as discontinua, while X-FEM is utilized to model in-contact structures as continua, which may consist of different types of discontinuities. For verification purposes, the new coupled approach is utilized to examine benchmark problems including different contacts between/within continua and discontinua. Results are validated by comparison with those of existing analytical and numerical solutions. This study proves that the extended-finite-discrete element method can be used to robustly analyze not only contact problems, but also other types of discontinuities in continua, such as (i) crack formations and propagations, (ii) voids and bimaterial interfaces, and (iii) combinations of the previous cases. In essence, the proposed method can be used widely in advanced soil-structure interaction problems to investigate the micro and macro behaviour of the surrounding soil and the response of the embedded structure that contains discontinuities.
Keywords: Contact problems, discrete element method, extended-finite element method, soil-structure interaction.
642 The Potential Effect of Biochar Application on Microbial Activities and Availability of Mineral Nitrogen in Arable Soil Stressed by Drought
Authors: Helena Dvořáčková, Jakub Elbl, Irina Mikajlo, Antonín Kintl, Jaroslav Hynšt, Olga Urbánková, Jaroslav Záhora
Abstract:
The application of biochar to arable soils represents a new approach to restore soil health and quality. Many studies have reported a positive effect of biochar application on soil fertility and the development of the soil microbial community. Moreover, biochar may affect soil water retention, but this effect has not yet been sufficiently described. Therefore, this study deals with the influence of biochar application on: microbial activities in soil, the availability of mineral nitrogen in soil for microorganisms, mineral nitrogen retention, and plant production. To demonstrate the effect of biochar addition on the above parameters, a pot experiment was carried out. As a model crop, Lactuca sativa L. was used and cultivated from December 10th, 2014 until March 22nd, 2015 in a climate chamber, in thoroughly homogenized arable soil with and without the addition of biochar. Five experimental variants (V1–V5) with different irrigation regimes were prepared. Variants V1–V2 were fertilized with mineral nitrogen, V3–V4 with biochar, and V5 was a control. Significant differences were found only in plant production and mineral nitrogen retention. The highest content of mineral nitrogen in soil was detected in V1 and V2, about 250% in comparison with the other variants. A positive effect of biochar application on soil fertility and mineral nitrogen availability was not found. On the other hand, the plant production results indicate a possible positive effect of biochar application on soil water retention.
Keywords: Arable soil, biochar, drought, mineral nitrogen.
641 Geochemistry of Tektites from Maoming of Guangdong Province, China
Authors: Yung-Tan Lee, Ren-Yi Huang, Jyh-Yi Shih, Meng-Lung Lin, Yen-Tsui Hu, Hsiao-Ling Yu, Chih-Cheng Chen
Abstract:
We measured the major and trace element contents and Rb-Sr isotopic compositions of 12 tektites from the Maoming area, Guangdong Province (south China). All the samples studied are splash-form tektites which show pitted or grooved surfaces, with schlieren structures on some surfaces. The trace element ratios Ba/Rb (avg. 4.33), Th/Sm (avg. 2.31), Sm/Sc (avg. 0.44), Th/Sc (avg. 1.01), La/Sc (avg. 2.86), Th/U (avg. 7.47), Zr/Hf (avg. 46.01) and the rare earth element (REE) contents of the tektites of this study are similar to those of the average upper continental crust. From the chemical composition, it is suggested that the tektites in this study are derived from a similar parental terrestrial sedimentary deposit, which may be related to post-Archean upper crustal rocks. The tektites from the Maoming area have high positive εSr(0) values, ranging from 176.9 to 190.5, which indicate that the parental material for these tektites has Sr isotopic compositions similar to those of old terrestrial sedimentary rocks and was not dominantly derived from recent young sediments (such as soil or loess). The Sr isotopic data obtained by the present study support the conclusion proposed by Blum et al. (1992) [1] that the depositional age of the sedimentary target materials is close to 170 Ma (Jurassic). Mixing calculations based on the model proposed by Ho and Chen (1996) [2] for various amounts and combinations of target rocks indicate that the best fit for tektites from the Maoming area is a mixture of 40% shale, 30% greywacke, and 30% quartzite.
Keywords: Geochemistry, Guangdong Province, south China, tektites.
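The mixing calculation mentioned at the end of the abstract boils down to a fraction-weighted average of assumed end-member compositions. The hedged sketch below shows the arithmetic for a single element, with placeholder concentrations that are not taken from the study or from the cited mixing model.

```python
# Assumed end-member concentrations for one element (e.g. La, in ppm); the values
# are placeholders for illustration only, not data from the study.
end_members = {"shale": 38.0, "greywacke": 28.0, "quartzite": 15.0}
fractions = {"shale": 0.40, "greywacke": 0.30, "quartzite": 0.30}

# Simple linear mixing: the modelled tektite concentration is the
# fraction-weighted sum of the target-rock concentrations.
mixed = sum(fractions[rock] * end_members[rock] for rock in end_members)
print(f"modelled concentration: {mixed:.1f} ppm")
```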
640 Evaluation of Hancornia speciosa Gomes Lyophilization at Different Stages of Maturation
Authors: D. C. Soares, J. T. S. Santos, D. G. Costa, A. K. S. Abud, T. P. Nunes, A. V. D. Figueiredo, A. M. de Oliveira Junior
Abstract:
Mangabeira (Hancornia speciosa Gomes), a plant native to Brazil, is found growing spontaneously in various regions of the country. The high perishability of tropical fruits such as mangaba makes it necessary to use technologies that promote conservation, aiming to increase the shelf life of this fruit and add value. The objective of this study was to compare the lyophilization curves of mangabas of different sizes and maturation stages. The fruits were freeze-dried for a period of approximately 45 hours in a Liotop model L-108 lyophilizer. Fruits between 38 and 58 mm in diameter were considered large and those between 23 and 28 mm in diameter small, at two maturation states, intermediate and mature. The drying curves of large mangabas, in both maturation states, showed linear behavior throughout the process, while the drying kinetics curves of the small fruits, independent of maturation state, showed typical drying behavior, with all steps well defined. From these results it was noted that the lyophilization time was suitable for the small mangabas, which was not the case for the larger ones. This may indicate that the large mangabas require a longer time to freeze until reaching the equilibrium level, as happens with the small fruits, which reach constant moisture at the end of the process. For both types of fruit, water activity, acidity, protein, lipid, and vitamin C were analyzed before and after the process.
Keywords: Freeze dryer, mangaba, conservation, chemical characteristics.
639 The Use of Software and Internet Search Engines to Develop the Encoding and Decoding Skills of a Dyslexic Learner: A Case Study
Authors: Rabih Joseph Nabhan
Abstract:
This case study explores the impact of two major computer software programs, Learn to Speak English and Learn English Spelling and Pronunciation, and some Internet search engines such as Google, on mending the decoding and spelling deficiency of Simon X, a dyslexic student. The improvement in decoding and spelling may result in better reading comprehension and composition writing. Some computer programs and Internet materials can help regain the missing awareness and consequently restore his self-confidence and self-esteem. In addition, this study provides a systematic plan comprising a set of activities (four computer programs and Internet materials) which address the problem from the lowest to the highest levels of phonemic and phonological awareness. Four methods of data collection (accounts, observations, published tests, and interviews) create the triangulation needed to validly and reliably collect data before the plan, during the plan, and after the plan. The data collected are analyzed quantitatively and qualitatively; sometimes the analysis is either quantitative or qualitative, and at other times a combination of both. Tables and figures are utilized to provide a clear and uncomplicated illustration of some data. The improvement in the decoding, spelling, reading comprehension, and composition writing skills that occurred is demonstrated through authentic materials produced by the student under study. Such materials are a comparison between two sample passages written by the learner before and after the plan, a genuine computer chat conversation, and the scores of the academic year that followed the execution of the plan. Based on these results, the researcher recommends further studies on other Lebanese dyslexic learners using the computer to mend their language problems, in order to design and make a more reliable software program that can address this disability more efficiently and successfully.
Keywords: Analysis, awareness, dyslexic, software.
638 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique
Authors: C. Manjula, Lilly Florence
Abstract:
Software technology is developing rapidly, which leads to the growth of various industries. Nowadays, software-based applications have been adopted widely for business purposes. For any software industry, the development of reliable software is becoming a challenging task because a faulty software module may be harmful to the growth of the industry and business. Hence there is a need to develop techniques which can be used for the early prediction of software defects. Due to the complexities in manual prediction, automated software defect prediction techniques have been introduced. These techniques are based on learning patterns from previous software versions and finding the defects in the current version. These techniques have attracted researchers due to their significant impact on industrial growth by identifying the bugs in software. Based on this, several studies have been carried out, but achieving desirable defect prediction performance is still a challenging task. To address this issue, here we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) is presented in which an improved fitness function is used for better optimization of features in the data sets. Later, these features are processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which results from the proposed GA-DT based hybrid approach are compared with those from the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
Keywords: Decision tree, genetic algorithm, machine learning, software defect prediction.
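A hedged end-to-end sketch of the GA-plus-decision-tree pattern described above, using a plain fitness function (cross-validated accuracy) rather than the paper's improved one; the synthetic "software metrics" data set, population size, and mutation rate are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Stand-in defect data set: 20 software metrics, binary defective/clean label.
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a decision tree on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# Plain generational GA over binary feature-selection masks.
pop_size, n_gen, mut_rate = 20, 15, 0.05
population = rng.integers(0, 2, size=(pop_size, X.shape[1]))

for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-pop_size // 2:]]   # keep the best half
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])                       # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(child.shape) < mut_rate               # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print("selected features:", np.flatnonzero(best), " accuracy:", fitness(best))
```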
637 Producing Outdoor Design Conditions Based on the Dependency between Meteorological Elements: Copula Approach
Authors: Zhichao Jiao, Craig Farnham, Jihui Yuan, Kazuo Emura
Abstract:
It is common to use outdoor design weather data to select the air-conditioning capacity at the building design stage. The meteorological elements of outdoor design weather data are usually selected based on their exceedance frequency separately, while the dependency between the elements is not well considered. This means that the simultaneous occurrence probability of these elements is smaller than the original exceedance frequency, which may cause an overestimation when selecting the air-conditioning capacity. Therefore, the copula approach, which can capture the dependency between multivariate data, was used to model the joint distributions of the meteorological elements, such as air temperature and global solar radiation. We suggest a method for selecting more credible outdoor design conditions based on the specific simultaneous occurrence probability of these two elements. The hourly weather data at 12 noon from 2001 to 2010 in Tokyo, Japan are used to analyze the dependency structure and joint distribution; the Gaussian copula represents the dependence of the data best. By calculating the air temperature and global solar radiation at a specific simultaneous occurrence probability and at the common exceedance level, the results show that both the air temperature and global solar radiation based on the simultaneous occurrence probability are lower than those based on the conventional method at the same probability.
Keywords: Copula approach, Design weather database, energy conservation, HVAC.
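To make the Gaussian-copula idea concrete, here is a hedged SciPy sketch, not the paper's analysis: synthetic noon temperature and solar-radiation data stand in for the Tokyo records, normal distributions are assumed for both marginals, and the simultaneous exceedance probability of a pair of thresholds is compared with the value obtained by (incorrectly) assuming independence. All numbers are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic noon observations: air temperature (degC) and global solar radiation (W/m2),
# generated with positive dependence, standing in for the measured data set.
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=3650)
temp = 28 + 4 * z[:, 0]
rad = 700 + 150 * z[:, 1]

# Normal marginal fits and the Gaussian-copula correlation of the transformed data.
t_params, r_params = stats.norm.fit(temp), stats.norm.fit(rad)
u = stats.norm.ppf(stats.norm.cdf(temp, *t_params))
v = stats.norm.ppf(stats.norm.cdf(rad, *r_params))
rho = np.corrcoef(u, v)[0, 1]

# Simultaneous exceedance probability P(T > t0 and R > r0) under the copula,
# versus the product of the separate exceedance probabilities.
t0, r0 = 34.0, 950.0
a = stats.norm.ppf(stats.norm.cdf(t0, *t_params))
b = stats.norm.ppf(stats.norm.cdf(r0, *r_params))
mvn = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
joint_exceed = 1 - stats.norm.cdf(a) - stats.norm.cdf(b) + mvn.cdf([a, b])
independent = (1 - stats.norm.cdf(a)) * (1 - stats.norm.cdf(b))
print(f"joint exceedance: {joint_exceed:.4f}  independence assumption: {independent:.4f}")
```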
636 A Study on Fuzzy Adaptive Control of Enteral Feeding Pump
Authors: Seungwoo Kim, Hyojune Chae, Yongrae Jung, Jongwook Kim
Abstract:
Recent medical studies have investigated the importance of enteral feeding and the use of feeding pumps for recovering patients unable to feed themselves or gain nourishment and nutrients by natural means. Most enteral feeding systems use a peristaltic tube pump. A peristaltic pump is a form of positive displacement pump in which a flexible tube is progressively squeezed externally to allow the resulting enclosed pillow of fluid to progress along it. The squeezing of the tube requires a precise and robust controller of the geared motor to overcome the parametric uncertainty of the pumping system, which arises due to a wide variation of friction and slip between the tube and roller. Therefore, this paper proposes a fuzzy adaptive controller for the robust control of the peristaltic tube pump. This new adaptive controller uses a fuzzy multi-layered architecture which has several independent fuzzy controllers in parallel, each with a different robust stability area. Out of these independent fuzzy controllers, the most suitable one is selected by a system identifier which observes variations in the controlled system parameters. This paper proposes a design procedure which can be carried out mathematically and systematically from the model of a controlled system. Finally, the good control performance of the developed feeding pump, i.e., accurate dose rate and robust system stability, is confirmed through experimental and clinical testing.
Keywords: Enteral Feeding Pump, Peristaltic Tube Pump, Fuzzy Adaptive Control, Fuzzy Multi-layered Controller, Look-up Table.
635 Quantifying the Second-Level Digital Divide on Sub-National Level
Authors: Vladimir Korovkin, Albert Park, Evgeny Kaganer
Abstract:
The digital divide, the gap in access to the world of digital technologies and the socio-economic opportunities that they create, is an important phenomenon of the 21st century. This gap may exist between countries, regions within a country, or socio-demographic groups, creating classes of “digital haves and have-nots”. While the 1st-level divide (the difference in opportunities to access digital networks) has been demonstrated to diminish with time, the issues of the 2nd-level divide (the difference in skills and usage of digital systems) and the 3rd-level divide (the difference in effects obtained from digital technology) may grow. The paper offers a systematic review of the literature on the measurement of the digital divide, noting a certain conceptual stagnation due to the lack of effective instruments that would capture the complex nature of the phenomenon. As a result, many important concepts do not receive the empirical exploration they deserve. As a solution, the paper suggests a composite Digital Life Index that studies digital supply and demand separately across seven independent dimensions, providing for 14 subindices. The Index is based on Internet-borne data, a distinction from traditional research approaches that rely on official statistics or surveys. The application of the model to the study of the digital divide between Russian regions and between cities in China has brought promising results. The paper advances the existing methodological literature on the 2nd-level digital divide and can also inform practical decision-making regarding the strategies of national and regional digital development.
Keywords: Digital transformation, second-level digital divide, composite index, digital policy.
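A hedged sketch of how a composite index of this kind is typically assembled, not the Digital Life Index methodology itself: illustrative sub-index scores are min-max normalised per dimension and then averaged with equal weights. The dimension names, region names, and numbers are invented.

```python
import numpy as np

# Illustrative sub-index scores (0-100) for three regions across assumed dimensions;
# names and numbers are placeholders, not values from the study.
dimensions = ["infrastructure", "e-government", "e-commerce", "digital media"]
regions = {
    "Region A": [82, 64, 71, 69],
    "Region B": [55, 48, 60, 52],
    "Region C": [91, 77, 83, 80],
}

def composite_index(scores):
    """Min-max normalise each dimension across regions, then average equally."""
    data = np.array(list(scores.values()), dtype=float)
    lo, hi = data.min(axis=0), data.max(axis=0)
    normalised = (data - lo) / (hi - lo)
    return dict(zip(scores, normalised.mean(axis=1)))

for region, value in composite_index(regions).items():
    print(f"{region}: {value:.2f}")
```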
634 Improving Flash Flood Forecasting with a Bayesian Probabilistic Approach: A Case Study on the Posina Basin in Italy
Authors: Zviad Ghadua, Biswa Bhattacharya
Abstract:
The Flash Flood Guidance (FFG) provides the rainfall amount of a given duration necessary to cause flooding. The approach is based on the development of rainfall-runoff curves, which help us to find the rainfall amount that would cause flooding. An alternative approach, mostly experimented with in Italian Alpine catchments, is based on determining threshold discharges from past events and on finding whether or not an oncoming flood has a magnitude greater than some critical discharge thresholds found beforehand. Both approaches suffer from large uncertainties in forecasting flash floods as, due to the simplistic approach followed, the same rainfall amount may or may not cause flooding. This uncertainty leads to the question of whether a probabilistic model is preferable to a deterministic one in forecasting flash floods. We propose the use of a Bayesian probabilistic approach in flash flood forecasting. A prior probability of flooding is derived based on historical data. Additional information, such as the antecedent moisture condition (AMC) and the rainfall amount over any rainfall threshold, is used in computing the likelihood of observing these conditions given that a flash flood has occurred. Finally, the posterior probability of flooding is computed using the prior probability and the likelihood. The variation of the computed posterior probability with rainfall amount and AMC demonstrates the suitability of the approach for decision making in an uncertain environment. The methodology has been applied to the Posina basin in Italy. From the promising results obtained, we can conclude that the Bayesian approach to flash flood forecasting provides more realistic forecasts than the FFG.
Keywords: Flash flood, Bayesian, flash flood guidance, FFG, forecasting, Posina.
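The prior-likelihood-posterior update described in the abstract reduces to a direct application of Bayes' rule. The hedged sketch below works through it once with invented probabilities (the prior, and the likelihoods of observing a wet AMC together with above-threshold rainfall), which are not values from the Posina study.

```python
# Prior probability of flooding from historical records (illustrative value).
p_flood = 0.05

# Likelihoods of observing "wet antecedent moisture condition AND rainfall above
# threshold" given flood / no flood, e.g. estimated from past events (assumed values).
p_obs_given_flood = 0.80
p_obs_given_no_flood = 0.15

# Bayes' rule: posterior probability of flooding given the observed conditions.
evidence = p_obs_given_flood * p_flood + p_obs_given_no_flood * (1 - p_flood)
posterior = p_obs_given_flood * p_flood / evidence
print(f"P(flood | wet AMC, rainfall > threshold) = {posterior:.2f}")
```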
633 The Role of Motivations for Eco-driving and Social Norms on Behavioural Intentions Regarding Speed Limits and Time Headway
Authors: M. Cristea, F. Paran, P. Delhomme
Abstract:
Eco-driving allows the driver to optimize his/her behaviour in order to achieve several types of benefits: reducing pollution emissions, increasing road safety, and saving fuel. One of the main rules for adopting eco-driving is to anticipate traffic events by avoiding strong acceleration or braking and maintaining a steady speed when possible. Therefore, drivers have to comply with speed limits and time headway. The present study explored the role of three types of motivation and of social norms in predicting French drivers’ intentions to comply with speed limits and time headway as eco-driving practices, as well as examining the variations according to gender and age. A total of 1,234 drivers aged between 18 and 75 years filled in a questionnaire which was presented as part of an online survey aiming to better understand drivers’ road habits. It included items assessing: a) behavioural intentions to comply with speed limits and time headway according to three types of motivation: reducing pollution emissions, increasing road safety, and fuel saving; b) subjective and descriptive social norms regarding the intention to comply with speed limits and time headway; and c) sociodemographic variables. Drivers expressed their intention to frequently comply with speed limits and time headway in the following 6 months; however, they showed more intention to comply with speed limits than with time headway, regardless of the type of motivation. The subjective injunctive norms were significantly more important than the descriptive norms in predicting drivers’ intentions to comply with speed limits and time headway. In addition, the most frequently reported type of motivation for complying with speed limits and time headway was increasing road safety, followed by fuel saving and reducing pollution emissions, hence underlining a low motivation to practice eco-driving. Practical implications of the results are discussed.
Keywords: Eco-driving, social norms, speed limits, time headway.