Search results for: Multiple time delays
615 Optimal Selling Prices for Small Sized Poultry Farmers
Authors: Hidefumi Kawakatsu, Dong Li, Kosuke Kato
Abstract:
In Japan, meat-type chickens are mainly classified into three categories: (1) Broilers, (2) Branded chickens, and (3) Jidori (free-range local traditional pedigree chickens). The Jidori chickens are certified by the Japanese Ministry of Agriculture, whilst, for the Branded chickens, there is no regulation with respect to their breed (genotype) or rearing methods. It is, therefore, relatively easier for poultry farmers to introduce Branded chickens than Jidori chickens. The Branded chickens are normally fed a low-calorie diet with ingredients such as herbs, which lengthens their breeding period (compared with that of the Broilers) and increases their market value. In the field of inventory management, fast-growing animals such as broilers are categorised as ameliorating items. To the best of our knowledge, no previous studies have explicitly considered smaller sized poultry farmers with limited breeding areas. This study develops an inventory model for a small sized poultry farmer that produces both the Broilers (Product 1) and the Branded chickens (Product 2) with different amelioration rates. The poultry farmer’s total profit per unit of time is formulated as a function of selling prices by using a price-dependent demand function. The existence of a unique optimal selling price for each product, which maximises the total profit, is established. It has also been confirmed through numerical examples that, when the breeding area is fixed, the total profit could increase if the poultry farmer reduced the product quantity of Product 1 to introduce Product 2.
Keywords: Amelioration, deterioration, small sized poultry farmers, optimal price.
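To make the pricing formulation concrete, the following minimal sketch finds the selling price that maximises profit per unit of time for a single product under an assumed linear price-dependent demand D(p) = a - b*p and a constant unit cost. The demand form and all numbers are illustrative assumptions, not the authors' model.

    import numpy as np

    # Illustrative assumptions (not the authors' data): linear demand D(p) = a - b*p.
    a, b = 500.0, 2.0          # hypothetical demand parameters
    unit_cost = 80.0           # hypothetical cost per bird

    prices = np.linspace(unit_cost, a / b, 1000)   # feasible price range
    demand = a - b * prices                        # units sold per unit of time
    profit = (prices - unit_cost) * demand         # profit per unit of time

    p_opt = prices[np.argmax(profit)]
    print(f"optimal price ~ {p_opt:.2f}, maximum profit ~ {profit.max():.2f}")
    # For this concave profit function the optimum is unique: p* = (a/b + unit_cost) / 2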
614 Bone Mineral Density and Trabecular Bone Score in Ukrainian Men with Obesity
Authors: Vladyslav Povoroznyuk, Anna Musiienko, Nataliia Dzerovych, Roksolana Povoroznyuk
Abstract:
Osteoporosis and obesity are widespread diseases in people over 50 years of age and are associated with changes in bone structure and body composition. Higher body mass index (BMI) values are associated with greater bone mineral density (BMD). However, trabecular bone score (TBS) indirectly explores bone quality, independently of BMD. The aim of our study was to evaluate the relationship between the BMD and TBS parameters in Ukrainian men suffering from obesity. We examined 396 men aged 40-89 years. Depending on their BMI, all the subjects were divided into two groups: Group I – patients with obesity whose BMI was ≥ 30 kg/m2 (n=129) and Group II – patients without obesity and BMI of < 30 kg/m2 (n=267). The BMD of the total body, lumbar spine L1-L4, femoral neck and forearm was measured by DXA (Prodigy, GEHC Lunar, Madison, WI, USA). The TBS of L1-L4 was assessed by means of TBS iNsight® software installed on the DXA machine (product of Med-Imaps, Pessac, France). In general, obese men had a significantly higher BMD of lumbar spine L1-L4, femoral neck, total body and ultradistal forearm (p < 0.001) in comparison with men without obesity. The TBS of L1-L4 was significantly lower in obese men compared to non-obese ones (p < 0.001). BMD of lumbar spine L1-L4, femoral neck and total body differed significantly in men aged 40-49, 50-59, 60-69, and 80-89 years (p < 0.05). At the same time, in men aged 70-79 years, BMD of lumbar spine L1-L4 (p=0.46), femoral neck (p=0.18), total body (p=0.21), ultra-distal forearm (p=0.13), and TBS (p=0.07) did not differ significantly. A significant positive correlation between fat mass and BMD at different sites was observed. However, the correlation between fat mass and TBS of L1-L4 was also significant, though negative.
Keywords: Bone mineral density, trabecular bone score, obesity, men.
613 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools
Abstract:
Internet use, intelligent communication tools, and social media have all become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases crimes committed in the digital environment. Therefore, digital forensics, dealing with various crimes committed in the digital environment, has become an important research topic. It is in the research scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, DVDs, etc., and to report whether it contains any crime-related elements. There are many software and hardware tools developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data in the digital evidence that match specified criteria and presenting them to the investigator (e.g. text files, files starting with the letter A, etc.). Then, digital forensics experts carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner. In addition, because the outcome depends on the examiner's experience, relevant data may be overlooked and the overall result may differ between cases. In this study, a hash-based matching and digital evidence evaluation method is proposed, which aims to automatically classify evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human errors.
Keywords: Block matching, digital evidence, hash list.
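As a rough illustration of the block-hashing idea described above (not the authors' implementation), the sketch below splits an evidence image file into fixed-size blocks, hashes each block with SHA-256, and flags blocks whose hashes appear in a known hash list. The block size and file paths are hypothetical.

    import hashlib

    BLOCK_SIZE = 4096  # hypothetical block size in bytes

    def block_hashes(path, block_size=BLOCK_SIZE):
        """Yield (offset, SHA-256 hex digest) for each fixed-size block of a file."""
        with open(path, "rb") as f:
            offset = 0
            while True:
                block = f.read(block_size)
                if not block:
                    break
                yield offset, hashlib.sha256(block).hexdigest()
                offset += len(block)

    # Hash list of blocks known to be crime related (illustrative, e.g. loaded from a database).
    known_hashes = set()

    def matching_blocks(evidence_path):
        """Return offsets of evidence blocks whose hashes occur in the known hash list."""
        return [off for off, h in block_hashes(evidence_path) if h in known_hashes]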
612 Comparison of Microwave-Assisted and Conventional Leaching for Extraction of Copper from Chalcopyrite Concentrate
Authors: Ayfer Kilicarslan, Kubra Onol, Sercan Basit, Muhlis Nezihi Saridede
Abstract:
Chalcopyrite (CuFeS2) is the most common primary mineral used for the commercial production of copper. The low dissolution efficiency of chalcopyrite in sulfate media has prevented an efficient industrial leaching of this mineral in sulfate media. Ferric ions, bacteria, oxygen and other oxidants have been used as oxidizing agents in the leaching of chalcopyrite in sulfate and chloride media under atmospheric or pressure leaching conditions. Two leaching methods were studied to evaluate chalcopyrite (CuFeS2) dissolution in acid media. First, the conventional oxidative acid leaching method was carried out using sulfuric acid (H2SO4) and potassium dichromate (K2Cr2O7) as oxidant at atmospheric pressure. Second, microwave-assisted acid leaching was performed using the microwave accelerated reaction system (MARS) for the same reaction media. Parameters affecting the copper extraction, such as leaching time, leaching temperature, concentration of H2SO4 and concentration of K2Cr2O7, were investigated. The results of the conventional acid leaching experiments were compared to those of the microwave leaching method. It was found that the copper extraction obtained under high temperature and high concentrations of oxidant with microwave leaching is higher than that obtained conventionally. A copper extraction of 81% was obtained by the conventional oxidative acid leaching method in 180 min, with a concentration of 0.3 mol/L K2Cr2O7 in 0.5 M H2SO4 at 50 ºC, while 93.5% copper extraction was obtained in 60 min with the microwave leaching method under the same conditions.
Keywords: Extraction, copper, microwave-assisted leaching, chalcopyrite, potassium dichromate.
611 Non-Burn Treatment of Health Care Risk Waste
Authors: Jefrey Pilusa, Tumisang Seodigeng
Abstract:
This research discusses a South African case study on the potential of utilizing refuse-derived fuel (RDF) obtained from non-burn treatment of health care risk waste (HCRW) as a feedstock for green energy production. This specific waste stream can be destroyed via non-burn treatment technology involving high-speed mechanical shredding followed by steam or chemical injection to disinfect the final product. The RDF obtained from this process is characterised by low moisture, low ash, and high calorific value, which means it can potentially be used as a high-value solid fuel. Although the raw feed of this RDF is classified as hazardous, the final RDF has been reported to be non-infectious and can be blended with other combustible wastes such as rubber and plastic for waste-to-energy applications. This study evaluated non-burn treatment technology as a possible solution for on-site destruction of HCRW in South African private and public health care centres. Waste generation quantities were estimated based on the number of registered patient beds and theoretical bed occupancy. A time and motion study was conducted to evaluate the logistical viability of on-site treatment. Non-burn treatment technology for HCRW is a promising option for South Africa, and successful implementation of this method depends upon the initial capital investment, operational cost and environmental permitting of such technology; other influencing factors include the size of the waste stream, the product off-take price and product demand.
Keywords: Autoclave, disposal, fuel, incineration, medical waste.
610 Modal Analysis of Machine Tool Column Using Finite Element Method
Authors: Migbar Assefa
Abstract:
The performance of a machine tool is ultimately assessed by its ability to produce a component of the required geometry in minimum time and at low operating cost. It is customary to base the structural design of any machine tool primarily upon the requirements of static rigidity and minimum natural frequency of vibration. The operating properties of machines, such as cutting speed, feed and depth of cut, as well as the size of the work piece, also have to be kept in mind by a machine tool structural designer. This paper presents a novel approach to the design of a machine tool column for static and dynamic rigidity requirements. Model evaluation is done effectively through the use of the general finite element analysis software ANSYS. Studies on machine tool columns are used to illustrate the finite element based concept evaluation technique. This paper also presents results obtained from the computations of thin-walled box-type columns that are subjected to torsional and bending loads in the case of static analysis, as well as results from modal analysis. The columns analyzed are square- and rectangle-based tapered open columns, columns with cover plates, and columns with horizontal partitions and apertures. For the analysis purpose, a total of 70 columns were analyzed for bending, torsional and modal analysis. In this study it is observed that the orientation and aspect ratio of apertures have no significant effect on the static and dynamic rigidity of the machine tool structure.
Keywords: Finite Element Modeling, Modal Analysis, Machine tool structure, Static Analysis.
609 Thermal Method for Testing Small Chemisorbents Samples on the Base of Potassium Superoxide
Authors: Pavel V. Balabanov, Daria A. Liubimova, Aleksandr P. Savenkov
Abstract:
The increase of technogenic and natural accidents accompanied by air pollution, for example, by combustion products, leads to the necessity of respiratory protection. This work is devoted to the development of a calorimetric method and a device which allow quick investigation of the kinetics of carbon dioxide sorption by chemisorbents based on potassium superoxide in order to assess the protective properties of closed-circuit respiratory protective apparatus. The features of the traditional approach for determining the sorption properties in a thin layer of chemisorbent are described, as well as methods and devices which can be used for studying the sorption kinetics. The authors developed an approach (as opposed to the traditional approach) based on measuring the power of internal heat sources in the chemisorbent layer. The heat sources arise as a result of the exothermic reaction of carbon dioxide sorption. This approach eliminates the need for chemical analysis of samples and can significantly reduce the time and material expenses of chemisorbent testing. The error of determining the volume fraction of adsorbed carbon dioxide by the developed method does not exceed 12%. Taking into account the efficiency of the method, we consider it a good alternative to traditional methods of chemical analysis for assessing the quality of protective sorbents.
Keywords: Carbon dioxide chemisorption, exothermic reaction, internal heat sources, respiratory protective apparatus.
608 Identifying the Barriers behind the Lack of Six Sigma Use in Libyan Manufacturing Companies
Authors: Osama Elgadi, Martin Birkett, Wai Ming Cheung
Abstract:
This paper investigates the barriers behind the underutilisation of six sigma in Libyan manufacturing companies (LMCs). A mixed-method methodology is proposed, starting by conducting interviews to collect qualitative data, followed by the development of a questionnaire to obtain quantitative data. The focus of this paper is on discussing the findings of the interview stage and how these can be used to further develop the questionnaire stage. The interview results showed that only four key barriers were highlighted as being encountered by LMCs. These factors, which differ in significance, were identified and placed in descending order of importance, namely: “Lack of top management commitment”, “Lack of training”, “Lack of knowledge about six sigma”, and “Culture effect”. The findings also showed that some barriers which were found in previous studies of six sigma implementation were not considered as barriers by LMCs but can, in fact, be considered as success factors or enablers for six sigma adoption. These factors were identified as: “sufficiency of time and financial resources”; “customers unsatisfied”; “good communication between all departments in the company”; “we are certain about its results and benefits to our company and unhappy with the current quality system”. These results suggest that LMCs face fewer barriers to adopting six sigma than many well-established global companies operating in other countries and could take advantage of these success factors by developing and implementing a six sigma framework to improve their product quality and competitiveness.
Keywords: Six sigma, barriers, Libyan manufacturing companies, interview.
607 Thin Bed Reservoir Delineation Using Spectral Decomposition and Instantaneous Seismic Attributes, Pohokura Field, Taranaki Basin, New Zealand
Authors: P. Sophon, M. Kruachanta, S. Chaisri, G. Leaungvongpaisan, P. Wongpornchai
Abstract:
Thick-bed hydrocarbon reservoirs are of primary interest because of their more prolific production. When the amount of petroleum in the thick beds starts decreasing, thin-bed reservoirs become the alternative targets to maintain the reserves. The conventional interpretation of seismic data cannot delineate a thin bed whose thickness is less than the vertical seismic resolution. Therefore, spectral decomposition and instantaneous seismic attributes were used to delineate the thin bed in this study. Short Window Discrete Fourier Transform (SWDFT) spectral decomposition and instantaneous frequency attributes were used to reveal the thin bed reservoir, while Continuous Wavelet Transform (CWT) spectral decomposition and envelope (instantaneous amplitude) attributes were used to indicate the hydrocarbon bearing zone. The study area is located in the Pohokura Field, Taranaki Basin, New Zealand. The thin bed target is the uppermost part of the Mangahewa Formation, the most productive unit in the gas-condensate production of the Pohokura Field. According to the time-frequency analysis, SWDFT spectral decomposition can reveal the thin bed using a 72 Hz SWDFT isofrequency section and map, and this is confirmed by the instantaneous frequency attribute. The envelope attribute showing a high anomaly indicates the hydrocarbon accumulation area at the thin bed target. Moreover, the CWT spectral decomposition shows a low-frequency shadow zone, and the abnormal seismic attenuation at higher isofrequencies below the thin bed confirms that the thin bed can be a prospective hydrocarbon zone.
Keywords: Hydrocarbon indication, instantaneous seismic attribute, spectral decomposition, thin bed delineation.
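The short-window spectral decomposition workflow described above can be sketched with a short-time Fourier transform of a single trace and extraction of the amplitude at a chosen isofrequency (72 Hz here, following the abstract). The synthetic trace, sampling rate, and window length below are illustrative assumptions, not the survey parameters.

    import numpy as np
    from scipy.signal import stft

    fs = 500.0                                            # assumed sampling rate, Hz
    t = np.arange(0, 2.0, 1.0 / fs)
    trace = np.sin(2 * np.pi * 72 * t) * np.exp(-3 * t)  # synthetic trace with 72 Hz energy

    # Short windows localise spectral amplitude in time (the essence of SWDFT).
    f, tau, Z = stft(trace, fs=fs, nperseg=64, noverlap=48)

    iso = np.argmin(np.abs(f - 72.0))     # index of the isofrequency closest to 72 Hz
    iso_amplitude = np.abs(Z[iso, :])     # 72 Hz amplitude versus time, one value per window
    print(f"isofrequency {f[iso]:.1f} Hz, peak amplitude {iso_amplitude.max():.3f}")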
606 Hydraulic Conductivity Prediction of Cement Stabilized Pavement Base Incorporating Recycled Plastics and Recycled Aggregates
Authors: Md. Shams Razi Shopnil, Tanvir Imtiaz, Sabrina Mahjabin, Md. Sahadat Hossain
Abstract:
Saturated hydraulic conductivity is one of the most significant attributes of a pavement base course. Determination of hydraulic conductivity is a routine procedure for regular aggregate base courses. However, in many cases, a cement-stabilized base course is used with compromised drainage ability. The traditional hydraulic conductivity testing procedure is a readily available option, but it has two consequential drawbacks, i.e., the time required for the specimen to become saturated and the need to extrude the sample after completion of the laboratory test. To overcome these complications, this study aims at formulating an empirical approach to predicting hydraulic conductivity based on Unconfined Compressive Strength test results. To do so, this study comprises two separate experiments (Constant Head Permeability test and Unconfined Compressive Strength test) conducted concurrently on specimens having the same physical properties. Data obtained from the two experiments were then used to devise a correlation between hydraulic conductivity and unconfined compressive strength. This correlation, in the form of a polynomial equation, helps to predict the hydraulic conductivity of a cement-treated pavement base course, bypassing the cumbersome process of traditional permeability tests and the less commonly used horizontal permeability tests. The correlation was further corroborated by a different set of data, and it has been found that the derived polynomial equation is a viable tool to predict hydraulic conductivity.
Keywords: Hydraulic conductivity, unconfined compressive strength, recycled plastics, recycled concrete aggregates.
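A hedged sketch of the kind of empirical correlation described above: a polynomial is fitted between unconfined compressive strength (UCS) and saturated hydraulic conductivity and then used for prediction. The data points and the polynomial order are placeholders, not the study's measurements.

    import numpy as np

    # Hypothetical paired measurements: UCS in MPa, hydraulic conductivity k in cm/s.
    ucs = np.array([0.8, 1.5, 2.3, 3.1, 4.0, 5.2])
    k = np.array([3.2e-4, 1.8e-4, 9.5e-5, 5.1e-5, 2.7e-5, 1.2e-5])

    # Fit a second-order polynomial to log10(k) versus UCS (order chosen for illustration).
    coeffs = np.polyfit(ucs, np.log10(k), deg=2)

    def predict_k(u):
        """Predicted hydraulic conductivity (cm/s) at a given UCS (MPa)."""
        return 10 ** np.polyval(coeffs, u)

    print(f"predicted k at UCS = 2.0 MPa: {predict_k(2.0):.2e} cm/s")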
605 Optimization by Means of Genetic Algorithm of the Equivalent Electrical Circuit Model of Different Order for Li-ion Battery Pack
Authors: V. Pizarro-Carmona, S. Castano-Solis, M. Cortés-Carmona, J. Fraile-Ardanuy, D. Jimenez-Bermejo
Abstract:
The purpose of this article is to optimize the Equivalent Electrical Circuit Model (EECM) of different orders to obtain greater precision in the modeling of Li-ion battery packs. The optimization includes considering circuits based on 1RC, 2RC and 3RC networks, with a dependent voltage source and a series resistor. The parameters are obtained experimentally using tests in the time domain and in the frequency domain. Due to the highly non-linear behavior of the battery pack, a Genetic Algorithm (GA) was used to solve and optimize the parameters of each EECM considered (1RC, 2RC and 3RC). The objective of the estimation is to minimize the mean square error between the impedance measured in the real battery pack and that generated by the simulation of the different proposed circuit models. The results have been verified by comparing the Nyquist plots of the estimated complex impedance of the pack. As a result of the optimization, the 2RC and 3RC circuit alternatives are considered viable to represent the battery behavior. These battery pack models are experimentally validated using a hardware-in-the-loop (HIL) simulation platform that reproduces the well-known New York City Cycle (NYCC) and Federal Test Procedure (FTP) driving cycles for electric vehicles. The results show that using GA optimization allows obtaining EECMs with 2RC or 3RC networks with high precision to represent the dynamic behavior of a battery pack in vehicular applications.
Keywords: Li-ion battery pack modeling, optimization, EECM, GA, electric vehicle applications.
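A minimal sketch of the parameter-fitting idea described above, assuming a 2RC network with a series resistor: the complex impedance is simulated over frequency and a bare-bones evolutionary loop minimises the mean square error against measured impedance. The "measured" data, parameter bounds, and the simplified GA are illustrative, not the authors' setup.

    import numpy as np

    f = np.logspace(-2, 3, 60)            # frequency points, Hz
    w = 2 * np.pi * f

    def z_2rc(p, w):
        """Impedance of R0 in series with two parallel RC branches."""
        r0, r1, c1, r2, c2 = p
        return r0 + r1 / (1 + 1j * w * r1 * c1) + r2 / (1 + 1j * w * r2 * c2)

    # Illustrative "measured" impedance generated from known parameters plus noise.
    true_p = np.array([0.01, 0.02, 50.0, 0.03, 500.0])
    rng = np.random.default_rng(0)
    z_meas = z_2rc(true_p, w) + 1e-4 * (rng.standard_normal(w.size) + 1j * rng.standard_normal(w.size))

    def mse(p):
        return np.mean(np.abs(z_2rc(p, w) - z_meas) ** 2)

    # Bare-bones evolutionary loop: keep the fittest individuals and mutate them.
    lo = np.array([1e-3, 1e-3, 1.0, 1e-3, 10.0])
    hi = np.array([0.1, 0.1, 1e3, 0.1, 1e4])
    pop = rng.uniform(lo, hi, size=(60, 5))
    for _ in range(200):
        fitness = np.array([mse(p) for p in pop])
        parents = pop[np.argsort(fitness)[:10]]                                     # selection
        children = parents[rng.integers(0, 10, 50)] * rng.normal(1, 0.05, (50, 5))  # mutation
        pop = np.vstack([parents, np.clip(children, lo, hi)])

    best = pop[np.argmin([mse(p) for p in pop])]
    print("estimated [R0, R1, C1, R2, C2]:", best)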
604 Studying the Value-Added Chain for the Fish Distribution Process at Quang Binh Fishing Port in Vietnam
Authors: Van Chung Nguyen
Abstract:
The purpose of this research is to study the current status of the value chain for fish distribution at Quang Binh Fishing Port with 360 research samples, in which the research subjects are fishermen, traders, retailers, and businesses. The research applies the value chain theoretical framework of Kaplinsky and Morris to quantify and describe the market channels and the actors participating in the value chain, and to analyze the value-added process of these actors along the market channels. The analysis results show that fishermen catch fish directly with high economic efficiency, but processing enterprises and, especially, retailers are the agents that obtain the higher added value. Processing enterprises play a role that is not really clear due to outdated processing technology; in contrast, retailers gain the highest added value. This shows that the added value of the fish supply chain at Quang Binh fishing port is still limited, leading to low output quality. Therefore, the selling price of fish to the market is still high compared to the abundant fish resources, leading to low consumption and limited exports due to the quality of the processing enterprises. This reduces demand and fishing capacity, and productivity is lower than its potential. To improve the fish value chain at fishing ports, it is necessary to focus on improving product quality, strengthening linkages between actors, building brands and product consumption markets, and improving the capacity of export processing enterprises.
Keywords: Quang Binh fishing port, value chain, fish market, distributions channel.
603 A Preliminary Literature Review of Digital Transformation Case Studies
Authors: Vesna Bosilj Vukšić, Lucija Ivančić, Dalia Suša Vugec
Abstract:
While struggling to succeed in today’s complex market environment and to provide better customer experience and services, enterprises embrace digital transformation as a means of reaching competitiveness and fostering value creation. A digital transformation process consists of information technology implementation projects, as well as organizational factors such as top management support, digital transformation strategy, and organizational changes. However, to the best of our knowledge, there is little evidence about digital transformation endeavors in organizations and how they perceive them – is it only about the adoption of digital technologies, or is a true organizational shift needed? In order to address this issue, and as the first step in our research project, a literature review was conducted. The analysis included case study papers from the Scopus and Web of Science databases. The following attributes are considered for the classification and analysis of papers: time component; country of case origin; case industry; and digital transformation concept comprehension, i.e. focus. The research showed that organizations – public as well as private ones – are aware of the necessity of change and employ digital transformation projects. Also, the changes concerning digital transformation affect both manufacturing and service-based industries. Furthermore, we discovered that organizations understand that, besides the implementation of technologies, organizational changes must also be adopted. However, with only 29 relevant papers identified, the research positioned digital transformation as an unexplored and emerging phenomenon in information systems research. The scarcity of evidence-based papers calls for further examination of this topic on cases from practice.
Keywords: Digital strategy, digital technologies, digital transformation, literature review.
602 Multiparametric Optimization of Water Treatment Process for Thermal Power Plants
Authors: B. Mukanova, N. Glazyrina, S. Glazyrin
Abstract:
The problem of optimization of the technological process of water treatment for thermal power plants is considered in this article. The problem is of a multiparametric nature. To optimize the process, namely, to reduce the amount of waste water, a new technology was developed to reuse such water. A mathematical model of the wastewater reuse technology was developed. The optimization parameters were determined. The model consists of a material balance equation, an equation describing the kinetics of ion exchange for the non-equilibrium case, and an equation for the ion exchange isotherm. The material balance equation includes a nonlinear term that depends on the kinetics of ion exchange. The direct problem of calculating the impurity concentration at the outlet of the water treatment plant was solved numerically. The direct problem was approximated by an implicit point-to-point difference scheme. The inverse problem was formulated as the determination of the parameters of the mathematical model of the water treatment plant operating under non-equilibrium conditions, and it was solved. Based on the calculation results, the start time of the filter regeneration process was determined, as well as the duration of the regeneration process and the amount of regeneration and wash water. Multiparametric optimization of the water treatment process for thermal power plants allowed the amount of wastewater to be decreased by 15%.
Keywords: Direct problem, multiparametric optimization, optimization parameters, water treatment.
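The abstract does not give the model equations in full, so the sketch below uses a generic illustrative stand-in: a 1D fixed-bed material balance coupled to linear-driving-force ion exchange kinetics with a linear isotherm, advanced in time with an implicit point-to-point sweep along the bed. All equations, parameter values, and units are assumptions made for illustration only.

    import numpy as np

    # Illustrative model (not the authors' equations):
    #   eps*dC/dt + u*dC/dx = -rho*dq/dt,   dq/dt = k*(K*C - q)   with linear isotherm q* = K*C
    nx, L = 100, 1.0
    dx, dt = L / nx, 1.0
    eps, u, rho = 0.4, 0.01, 1500.0     # porosity, superficial velocity (m/s), bed density (kg/m3)
    k, K = 1e-3, 2e-4                   # kinetic constant (1/s), isotherm slope (m3/kg)
    c_in = 1.0                          # inlet impurity concentration (normalised)

    C = np.zeros(nx)
    q = np.zeros(nx)
    for step in range(200):             # march in time
        c_left = c_in                   # inlet boundary value
        for i in range(nx):             # implicit point-to-point sweep along the bed
            a = eps / dt + u / dx + rho * k * K / (1 + dt * k)
            b = eps / dt * C[i] + u / dx * c_left + rho * k * q[i] / (1 + dt * k)
            C[i] = b / a                # solves the linearised balance at node i
            q[i] = (q[i] + dt * k * K * C[i]) / (1 + dt * k)
            c_left = C[i]

    print("outlet impurity concentration:", C[-1])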
601 Developing Electronic Medical Record System to Enhance the Satisfaction of Patients and Service Providers
Authors: Siham Jemal Kedir
Abstract:
Information communication technology is dramatically transforming the health sector, especially in developing countries with few resources and burgeoning access to an internet connection. As a result, processes such as record keeping, administration, and human resources have been vastly simplified, allowing hospitals to focus on delivering urgent medical care. This paper explores the impact of IT through a study of the electronic medical record system in the Mekelle City Health Center in Tigray Region, Ethiopia. This paper has four specific objectives: 1. developing artifacts in the Electronic Medical Record system, 2. preparing a diagram for step-by-step development of Electronic Medical Records, 3. creating a draft website with the proposed Electronic Medical Record system, and 4. testing and evaluating the performance and user acceptance of the system. The research was done in a qualitative manner employing interviews and in-person observation. The research found the following major results: firstly, the medical record system has been difficult to implement. Secondly, the Mekelle Health Center is using a manual recording system which is time-consuming and inefficient. The old recording system in the Center leads to the dissatisfaction of patients as well as of the service provider staff. As a result, to transform the manual recording system into a digital system, an electronic medical recording system has been developed. The developed system has been tested for implementation and has been successful. Consequently, the administrator of the health center is ready to implement and use the developed software to introduce a medical recording system in Mekelle Health Center.
Keywords: Electronic Health Record Implementation, EMR System Development, Medical Record.
600 Exploring Students’ Self-Evaluation on Their Learning Outcomes through an Integrated Cumulative Grade Point Average Reporting Mechanism
Authors: Suriyani Ariffin, Nor Aziah Alias, Khairil Iskandar Othman, Haslinda Yusoff
Abstract:
An Integrated Cumulative Grade Point Average (iCGPA) is a mechanism and strategy to ensure that the curriculum of an academic programme is constructively aligned to the expected learning outcomes and that student performance, based on the attainment of those learning outcomes, is reported objectively in a spider web. Much effort and time have been spent to develop a viable mechanism and to train academics to utilize the platform for reporting. The question is: How well do learners conceive the idea of their achievement via iCGPA, and have quality learner attributes been nurtured through the iCGPA mechanism? This paper presents the architecture of an integrated CGPA mechanism purported to address a holistic evaluation, from the evaluation of course learning outcomes to aligned programme learning outcomes attainment. The paper then discusses the students’ understanding of the mechanism and their evaluation of their achievement from the generated spider web. A set of questionnaires was distributed to a group of students with iCGPA reporting, and frequency analysis was used to compare the perspectives of students on their performance. In addition, the questionnaire also explored how they conceive the idea of an integrated, holistic reporting and how it generates their motivation to improve. The iCGPA group was found to be receptive to what they had achieved throughout their study period. They agreed that the achievement level generated from their spider web allows them to develop interventions and enhance the programme learning outcomes before they graduate.
Keywords: Learning outcomes attainment, iCGPA, programme learning outcomes, spider web, iCGPA reporting skills.
599 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition
Authors: Hazem M. El-Bakry
Abstract:
Neural processors have shown good results for detecting a certain character in a given input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide and conquer strategy is applied through image decomposition. Each image is divided into small sub-images and then each one is tested separately by using a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of faster neural networks. In contrast to using only faster neural processors, the speed-up ratio increases with the size of the input image when using faster neural processors and image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved. The effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain. The overall speed-up ratio of the detection process is increased as the normalization of weights is done offline.
Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.
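The frequency-domain cross correlation at the core of this approach can be sketched as follows: an input matrix is correlated with a template (standing in for the neural weights) via 2D FFTs instead of a sliding-window computation. The array sizes and the random template are illustrative; this is not the authors' network.

    import numpy as np

    def cross_correlate_fft(image, template):
        """2D (circular) cross-correlation computed in the frequency domain."""
        H = np.fft.fft2(image)
        W = np.fft.fft2(template, s=image.shape)   # zero-pad the template to image size
        return np.real(np.fft.ifft2(H * np.conj(W)))

    rng = np.random.default_rng(1)
    image = rng.random((128, 128))
    template = rng.random((16, 16))
    image[40:56, 70:86] += template                # embed the pattern to be detected

    corr = cross_correlate_fft(image, template)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    print("strongest match near row/col:", peak)   # expected near (40, 70)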
598 Convective Hot Air Drying of Different Varieties of Blanched Sweet Potato Slices
Authors: M. O. Oke, T. S. Workneh
Abstract:
The drying behavior of blanched sweet potato in a cabinet dryer was investigated using five different air temperatures (40-80°C) and ten sweet potato varieties sliced to 5 mm thickness. Drying time decreased considerably with increasing hot air temperature. The drying data were fitted to eight models. The Modified Henderson and Pabis model gave the best fit to the experimental moisture ratio data obtained during the drying of all the varieties, while the Newton (Lewis) and Wang and Singh models gave the poorest fit. The values of Deff obtained for the Bophelo variety (1.27 × 10⁻⁹ to 1.77 × 10⁻⁹ m²/s) were the lowest, while those of S191 (1.93 × 10⁻⁹ to 2.47 × 10⁻⁹ m²/s) were the highest, which indicates that moisture diffusivity in sweet potato is affected by the genetic factor. Activation energy values ranged from 0.27 to 6.54 kJ/mol. The low activation energy indicates that drying of sweet potato slices requires less energy and is hence a cost- and energy-saving method.
Keywords: Sweet Potato Slice, Drying Models, Moisture Ratio, Moisture Diffusivity, Activation Energy.
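To illustrate how an effective diffusivity and activation energy of the kind reported above are typically extracted, the sketch below fits an Arrhenius relation Deff = D0·exp(-Ea/(R·T)) to diffusivity values at several drying temperatures. The numbers are placeholders chosen to fall in the reported range, not the study's data.

    import numpy as np

    R = 8.314  # universal gas constant, J/(mol K)

    # Hypothetical effective diffusivities (m^2/s) at the drying temperatures (deg C).
    T_celsius = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
    Deff = np.array([1.30e-9, 1.38e-9, 1.46e-9, 1.54e-9, 1.62e-9])

    # Arrhenius: ln(Deff) = ln(D0) - Ea/(R*T), i.e. linear in 1/T.
    inv_T = 1.0 / (T_celsius + 273.15)
    slope, intercept = np.polyfit(inv_T, np.log(Deff), 1)
    Ea = -slope * R            # activation energy, J/mol
    D0 = np.exp(intercept)

    print(f"Ea ~ {Ea / 1000:.2f} kJ/mol, D0 ~ {D0:.2e} m^2/s")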
597 Feasibility Study for a Castor Oil Extraction Plant in South Africa
Authors: Mohamed Belaid, Edison Muzenda, Getrude Mitilene, Mansoor Mollagee
Abstract:
A feasibility study for the design and construction of a pilot plant for the extraction of castor oil in South Africa was conducted. The study emphasized the four critical aspects of project feasibility analysis, namely the technical, financial, market and managerial aspects. The technical aspect involved research on existing oil extraction technologies, namely mechanical pressing and solvent extraction, as well as assessment of the proposed production site for both the short- and long-term viability of the project. The site is on the outskirts of Nkomazi village in the Mpumalanga province, where connections for water and electricity are currently underway, and the potential raw material supply proves to be reliable since the province is known for its commercial farming. The managerial aspect was evaluated based on the fact that the current producer of castor oil will be fully involved in the project while receiving training and technical assistance from Sasol Technology, the TSC and SEDA. The market and financial aspects were evaluated, and the project was considered financially viable with a Net Present Value (NPV) of R2 731 687 and an Internal Rate of Return (IRR) of 18% at an annual interest rate of 10.5%. The payback time is 6 years for an analysis over the first 10 years, with a net income of R1 971 000 in the first year. The project was thus found to be feasible with a high chance of success while contributing to socio-economic development. It was recommended that laboratory tests be conducted to establish the process kinetics to be used in the initial design of the plant.
Keywords: Mechanical pressing, Net Present Value, Oil extraction, Project feasibility, Solvent extraction.
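The financial viability figures quoted above come from standard discounted cash flow measures. The sketch below shows how NPV, IRR, and the payback period are computed for a hypothetical cash flow series discounted at 10.5%; the investment and cash flow values are illustrative placeholders, not the project's actual figures.

    import numpy as np

    rate = 0.105                                   # annual discount rate (10.5%)
    # Hypothetical cash flows (R): initial investment followed by 10 years of net income.
    cash_flows = np.array([-11_000_000] + [1_971_000] * 10, dtype=float)
    years = np.arange(cash_flows.size)

    npv = np.sum(cash_flows / (1 + rate) ** years)

    def irr(cf, lo=0.0, hi=1.0, tol=1e-7):
        """Internal rate of return found by bisection on the NPV(r) = 0 condition."""
        f = lambda r: np.sum(cf / (1 + r) ** np.arange(cf.size))
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        return 0.5 * (lo + hi)

    payback_year = np.argmax(np.cumsum(cash_flows) >= 0)   # first year the outlay is recovered

    print(f"NPV = R{npv:,.0f}, IRR = {irr(cash_flows):.1%}, payback ~ year {payback_year}")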
596 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography
Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi
Abstract:
Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue by using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes the light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so the accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue, such as the absorption coefficient, scattering coefficient, and optical flux, are processed by the standard regularization technique called Levenberg-Marquardt regularization. The reconstruction algorithms, namely the Split Bregman and the Gradient Projection for Sparse Reconstruction (GPSR) methods, are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing the images are analyzed to assess performance.
Keywords: Diffuse optical tomography, ill-posedness, Levenberg-Marquardt method, Split Bregman, gradient projection for sparse reconstruction.
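The image-quality metrics used in the comparison above can be computed as in the following sketch, which evaluates SNR, CNR, and relative error between a reference image and a reconstruction using commonly adopted definitions; the synthetic arrays stand in for reconstructed DOT images and are not the study's data.

    import numpy as np

    def snr_db(recon, noise_region):
        """Signal-to-noise ratio in dB: mean signal over the std of a background region."""
        return 20 * np.log10(np.mean(recon) / np.std(noise_region))

    def cnr(roi, background):
        """Contrast-to-noise ratio between a region of interest and the background."""
        return abs(np.mean(roi) - np.mean(background)) / np.std(background)

    def relative_error(recon, reference):
        return np.linalg.norm(recon - reference) / np.linalg.norm(reference)

    # Synthetic example: a reference image with one inclusion and a noisy reconstruction.
    rng = np.random.default_rng(0)
    reference = np.ones((64, 64))
    reference[20:30, 20:30] = 2.0                   # simulated tumour-like inclusion
    recon = reference + 0.05 * rng.standard_normal(reference.shape)

    print("SNR (dB):", snr_db(recon, recon[45:60, 45:60]))
    print("CNR     :", cnr(recon[20:30, 20:30], recon[45:60, 45:60]))
    print("RE      :", relative_error(recon, reference))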
595 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique
Authors: Mohammad A. Khasawneh
Abstract:
Asphalt concrete pavements gradually lose their skid resistance, causing safety problems especially under wet conditions and at high driving speeds. In order to replicate the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices were developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need to come up with another method that can assist in investigating the bituminous pavement surface characteristics in a practical and time-efficient test procedure.
The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need to use conventional friction and texture measuring devices in an attempt to shorten and simplify the polishing procedure in the lab.
Promising findings showed the possibility of using image analysis in lieu of the labor-sensitive-variable-in-nature friction and texture measurements. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates produced solid evidence of the validity of this method in describing asphalt pavement surfaces. Image analysis results correlated well with the British Pendulum Numbers (BPN), Polish Values (PV) and Mean Texture Depth (MTD) values.
Keywords: Friction, Image Analysis, Polishing, Statistical Analysis, Texture.
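A rough sketch of the kind of image analysis implied above: the exposed aggregate surface area fraction of a specimen photograph is estimated by grayscale thresholding and then regressed against friction measurements such as the British Pendulum Number. The synthetic image, threshold, and regression data are illustrative assumptions, not the study's procedure or measurements.

    import numpy as np

    def exposed_aggregate_fraction(gray_image, threshold=0.55):
        """Fraction of pixels brighter than a threshold, taken as exposed aggregate area."""
        return float(np.mean(gray_image > threshold))

    # Synthetic grayscale image standing in for a specimen photograph (values in [0, 1]).
    rng = np.random.default_rng(3)
    specimen = rng.random((200, 200))
    area_fraction = exposed_aggregate_fraction(specimen)

    # Hypothetical paired observations: exposed area fraction vs. British Pendulum Number.
    fractions = np.array([0.30, 0.38, 0.45, 0.52, 0.60])
    bpn = np.array([48.0, 52.0, 57.0, 61.0, 66.0])
    slope, intercept = np.polyfit(fractions, bpn, 1)

    print(f"area fraction = {area_fraction:.2f}, "
          f"predicted BPN = {slope * area_fraction + intercept:.1f}")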
594 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark
Authors: B. Elshafei, X. Mao
Abstract:
The demand for renewable energy is increasing significantly, and major investments are being directed to the wind power generation industry as a leading source of clean energy. The wind energy sector is entirely dependent on and driven by the prediction of wind speed, which by the nature of wind is highly stochastic and random. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds for medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area for the period between December 2015 and March 2016. The study aims to investigate the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of engaging the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE), and the results demonstrated a significant increase in the accuracy of predictions, showing that using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, and reflecting the importance of including all sub-data and of pre-processing the signals in wind speed forecasting models.
Keywords: Data fusion, Gaussian process regression, signal denoise, temporal extrapolation.
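As a simplified, single-fidelity stand-in for the regression step described above (not the deep multi-fidelity model used in the study), the sketch below fits a Gaussian process to a synthetic wind speed series and reports the RMSE of its extrapolation; the data, kernel choice, and train/test split are assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(4)
    t = np.linspace(0, 48, 200)[:, None]          # time in hours
    wind = 8 + 2 * np.sin(2 * np.pi * t.ravel() / 24) + 0.5 * rng.standard_normal(t.size)

    train, test = slice(0, 150), slice(150, 200)  # extrapolate over the last quarter
    kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.25)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t[train], wind[train])

    pred, std = gpr.predict(t[test], return_std=True)
    rmse = np.sqrt(np.mean((pred - wind[test]) ** 2))
    print(f"extrapolation RMSE: {rmse:.2f} m/s")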
593 Conceptualizing Thoughtful Intelligence for Sustainable Decision Making
Authors: Musarrat Jabeen
Abstract:
Thoughtful intelligence offers a sustainable position from which to enhance the influence of decision-makers. Thoughtful intelligence implies the understanding required to realize the impact of one’s thoughts, words and actions on the survival, dignity and development of individuals, groups and nations. Thoughtful intelligence has received minimal consideration in the area of Decision Support Systems, where the end goal has been to evaluate the quantity of knowledge and its viability. This pattern has degraded the inherent contribution of thoughtful intelligence required for sustainable decision making. Given this concern, this paper concentrates on the question: How can a model of a Thoughtful Decision Support System (TDSS) be presented? The aim of this paper is to appreciate the concepts of thoughtful intelligence and to propose a Decision Support System based on thoughtful intelligence. Thoughtful intelligence includes three dynamic competencies: i) realization of the long-term impacts of decisions that are made in a specific time and space, ii) a great sense of taking action, iii) intense interconnectivity with people and nature; and seven associated competencies: Righteousness, Purposefulness, Understanding, Contemplation, Sincerity, Mindfulness, and Nurturing. The study utilizes two methods. Focus group discussions were used to count the prevailing Decision Support Systems; 70% of the focus group discussion results identified six decision support systems and the absence of thoughtful intelligence among decision support systems regarding sustainable decision making. A Delphi study focused on defining thoughtful intelligence in order to model the TDSS; 65% of the results helped to conceptualize (define and describe) thoughtful intelligence. The TDSS is offered here as an addition to the decision making literature. The intended clients are top leaders.
Keywords: Thoughtful intelligence, Sustainable decision making, Thoughtful decision support system.
592 Detecting Cavitation in a Vertical Sea Water Centrifugal Lift Pump Related to Iran Oil Industry Cooling Water Circulation System
Authors: Omid A. Zargar
Abstract:
Cavitation is one of the most well-known process faults that may occur in different industrial equipment, especially centrifugal pumps. Cavitation may also happen in water pumps and turbines. Sometimes cavitation has been severe enough to wear holes in the impeller and damage the vanes to such a degree that the impeller becomes very ineffective. More commonly, the pump efficiency will decrease significantly during cavitation and continue to decrease as damage to the impeller increases. Typically, when cavitation occurs, an audible sound similar to ‘marbles’ or ‘crackling’ is reported to be emitted from the pump. In this paper, the most effective monitoring items and techniques for detecting cavitation are discussed in detail. In addition, some successful solutions for solving this problem in a vertical sea water centrifugal lift pump are discussed through a case history related to the Iranian oil industry. Furthermore, balance line modification, strainer choking and random resonance in sea water pumps are discussed. In addition, a new method for diagnosing the mechanical condition of vertical sea water centrifugal lift pumps is introduced. This method involves disaggregating the bus current by device into disaggregated currents having correspondences with operating currents in response to the measured bus current. Moreover, some new patents and innovations in mechanical sea water pumping and cooling systems are discussed in this paper.
Keywords: Cavitation, Vibration Analysis, Centrifugal Pump, Vertical Pump, Sea Water Pump, Balance Line, Strainer, Time Wave Form (TWF), Fast Fourier Transform (FFT)
591 An Analysis of Collapse Mechanism of Thin-Walled Circular Tubes Subjected to Bending
Authors: Somya Poonaya, Chawalit Thinvongpituk, Umphisak Teeboonma
Abstract:
Circular tubes have been widely used as structural members in engineering applications. Therefore, their collapse behavior has been studied for many decades, focusing on their energy absorption characteristics. In order to predict the collapse behavior of members, one could rely on the use of finite element codes or experiments. These tools are helpful and highly accurate, but they are costly and require extensive running time. Therefore, an approximate model of the tube collapse mechanism is an alternative for the early design stage. This paper aims to develop a closed-form solution for a thin-walled circular tube subjected to bending. It extends the model of Elchalakani et al. (Int. J. Mech. Sci. 2002; 44:1117-1143) to include the rate of energy dissipation of the rolling hinge in the circumferential direction. The 3-D geometrical collapse mechanism was analyzed by adding oblique hinge lines along the longitudinal tube within the length of the plastically deforming zone. The model is based on the principle of energy rate conservation. Therefore, the rates of internal energy dissipation were calculated for each hinge line, which is defined in terms of a velocity field. Inextensional deformation and perfectly plastic material behavior were assumed in the derivation of the deformation energy rate. The analytical result was compared with the experimental result. The experiment was conducted with a number of tubes having various D/t ratios. Good agreement between the analytical and experimental results was achieved.
Keywords: Bending, Circular tube, Energy, Mechanism.
590 Modified Energy and Link Failure Recovery Routing Algorithm for Wireless Sensor Network
Authors: M. Jayekumar, V. Nagarajan
Abstract:
Wireless sensor networks find roles in environmental monitoring, industrial applications, surveillance applications, health monitoring and other supervisory applications. Sensing devices form the basic operational unit of the network; they are battery powered with a limited lifetime. A sensor node spends its limited energy on the transmission, reception, routing and sensing of information. Frequent energy utilization for the above-mentioned processes leads to network lifetime degradation. To enhance energy efficiency and network lifetime, we propose a modified energy optimization and post-failure node recovery method, the Energy-Link Failure Recovery Routing (E-LFRR) algorithm. In the E-LFRR algorithm, two phases, namely the Monitored Transmission phase and the Replaced Transmission phase, are devised to combat worst-case link failure conditions. In the Monitored Transmission phase, the actuator node monitors and identifies suitable nodes for shortest path transmission. The Replaced Transmission phase dispatches the energy-draining node from the active link at an early stage and replaces it with a new node that has sufficient energy. Simulation results illustrate that this combined methodology reduces overhead, energy consumption and delay, and maintains a considerable number of alive nodes, thereby enhancing the network performance.
Keywords: Actuator node, energy efficient routing, energy hole, link failure recovery, link utilization, wireless sensor network.
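The Replaced Transmission idea described above can be illustrated with a small sketch that walks an active route, flags any node whose residual energy falls below a threshold, and swaps in the most energetic neighbouring candidate. The data structures, threshold, and selection rule are illustrative assumptions rather than the authors' exact algorithm.

    # Residual energy (J) of nodes and candidate replacements per node (all values hypothetical).
    energy = {"A": 0.90, "B": 0.12, "C": 0.75, "D": 0.08, "E": 0.95, "F": 0.80}
    neighbours = {"B": ["E", "F"], "D": ["F"]}   # nodes able to stand in for a failing node

    ENERGY_THRESHOLD = 0.20                      # below this, a node is considered draining

    def replace_draining_nodes(route):
        """Return a new route in which draining nodes are swapped for their best neighbour."""
        new_route = []
        for node in route:
            if energy[node] >= ENERGY_THRESHOLD:
                new_route.append(node)
                continue
            candidates = [n for n in neighbours.get(node, []) if energy[n] >= ENERGY_THRESHOLD]
            new_route.append(max(candidates, key=energy.get) if candidates else node)
        return new_route

    print(replace_draining_nodes(["A", "B", "C", "D"]))   # expected ['A', 'E', 'C', 'F']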
589 Coupling Heat and Mass Transfer for Hydrogen-Assisted Self-Ignition Behaviors of Propane-Air Mixtures in Catalytic Micro-Channels
Authors: Junjie Chen, Deguang Xu
Abstract:
Transient simulations of the hydrogen-assisted self-ignition of propane-air mixtures were carried out in platinum-coated micro-channels from ambient cold-start conditions, using a two-dimensional model with reduced-order reaction schemes, heat conduction in the solid walls, convection and surface radiation heat transfer. The self-ignition behavior of the hydrogen-propane mixed fuel is analyzed and compared with the heated-feed case. Simulations indicate that hydrogen can successfully cause self-ignition of propane-air mixtures in catalytic micro-channels with a 0.2 mm gap size, eliminating the need for startup devices. The minimum hydrogen composition for propane self-ignition is found to be in the range of 0.8-2.8% (on a molar basis), and it increases with increasing wall thermal conductivity and with decreasing inlet velocity or propane composition. A higher propane-air ratio results in earlier ignition. The ignition characteristics of hydrogen-assisted propane qualitatively resemble the selective inlet feed preheating mode. The transient response of the mixed hydrogen-propane fuel reveals sequential ignition of propane followed by hydrogen. Front-end propane ignition is observed in all cases. Low wall thermal conductivities cause earlier ignition of the mixed hydrogen-propane fuel, subsequently resulting in low exit temperatures. The transient-state behavior of this micro-scale system is described, and the startup time and the minimization of hydrogen usage are discussed.
Keywords: Micro-combustion, Self-ignition, Hydrogen addition, Heat transfer, Catalytic combustion, Transient simulation.
588 Evaluating Emission Reduction Due to a Proposed Light Rail Service: A Micro-Level Analysis
Authors: Saeid Eshghi, Neeraj Saxena, Abdulmajeed Alsultan
Abstract:
Carbon dioxide (CO2), alongside other gas emissions in the atmosphere, causes a greenhouse effect, resulting in an increase of the average temperature of the planet. Transportation vehicles are among the main contributors to CO2 emissions. Stationary vehicles with running engines produce more emissions than moving ones. Intersections with traffic lights that force vehicles to remain stationary for a period of time produce more CO2 pollution than other parts of the road. This paper focuses on analyzing the CO2 produced by the traffic flow at the Anzac Parade Road - Barker Street intersection in Sydney, Australia, before and after the implementation of light rail transport (LRT). The data were gathered during the construction phase of the LRT by collecting the number of vehicles on each path of the intersection for 15 minutes during the evening rush hour of 1 week (6-7 pm, July 04-31, 2018) and then multiplying by 4 to calculate the flow of vehicles in 1 hour. For analyzing the data, the microscopic simulation software “VISSIM” was used. Through the analysis, the traffic flow was processed for three stages: before and after the implementation of the light rail, and during the construction phase. Finally, the traffic results were input into another software package called “EnViVer” to calculate the amount of CO2 produced during 1 hour. The results showed that, after the implementation of the light rail, CO2 will drop by a minimum of 13%. This finding provides evidence that light rail is a sustainable mode of transport.
Keywords: Carbon dioxide, emission modeling, light rail, microscopic model, traffic flow.
587 Authentication Protocol for Wireless Sensor Networks
Authors: Sunil Gupta, Harsh Kumar Verma, AL Sangal
Abstract:
Wireless sensor networks can be used to measure and monitor many challenging problems and typically involve monitoring, tracking and controlling in areas such as battlefield monitoring, object tracking, habitat monitoring and home sentry systems. However, wireless sensor networks pose unique security challenges, including forgery of sensor data, eavesdropping, denial of service attacks, and the physical compromise of sensor nodes. A node in a sensor network may vanish due to power exhaustion or malicious attacks. To extend the life span of the sensor network, new node deployment is needed. In military scenarios, an intruder may directly deploy malicious nodes or manipulate existing nodes to set up malicious new nodes through many kinds of attacks. To prevent malicious nodes from joining the sensor network, security is required in the design of sensor network protocols. In this paper, we propose a security framework to provide a complete security solution against the known attacks in wireless sensor networks. Our framework accomplishes node authentication for new nodes with recognition of malicious nodes. When deployed as a framework, a high degree of security is reachable compared with conventional sensor network security solutions. The proposed framework can protect against most of the notorious attacks in sensor networks and attain better computation and communication performance. This is different from conventional authentication methods based on the node identity alone: it includes the identity of nodes and a node security timestamp in the authentication procedure. Hence, the security protocol not only checks the identity of each node but also distinguishes between new nodes and old nodes.
Keywords: Authentication, Key management, Wireless sensor network, Elliptic curve cryptography (ECC).
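To illustrate the idea of binding a node's identity and a security timestamp into the authentication procedure (using a symmetric MAC for brevity rather than the ECC construction of the paper), the sketch below issues and verifies a token over the pair (identity, timestamp). The key handling, freshness window, and field layout are illustrative assumptions.

    import hmac, hashlib, time

    NETWORK_KEY = b"pre-shared network key (placeholder)"   # assumed shared secret
    FRESHNESS_WINDOW = 30        # seconds a token stays valid (illustrative)

    def issue_token(node_id, key=NETWORK_KEY):
        """Return (node_id, timestamp, tag) binding identity and time into the token."""
        ts = int(time.time())
        tag = hmac.new(key, f"{node_id}|{ts}".encode(), hashlib.sha256).hexdigest()
        return node_id, ts, tag

    def verify_token(node_id, ts, tag, key=NETWORK_KEY):
        """Accept only a fresh timestamp and a matching tag for this identity."""
        if abs(time.time() - ts) > FRESHNESS_WINDOW:
            return False         # stale token: likely a replay or an outdated node
        expected = hmac.new(key, f"{node_id}|{ts}".encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    node_id, ts, tag = issue_token("node-17")
    print("accepted:", verify_token(node_id, ts, tag))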
586 Identifying the Traditional Color Scheme in Decorative Patterns Used by the Bahnar Ethnic Group in the Central Highlands of Vietnam
Authors: Nguyen Viet Tan
Abstract:
The Bahnar is one of 11 indigenous groups living in the Central Highlands of Vietnam. It is one of the four most populous groups in this area, together with the Mnong, who speak a language of the same Mon-Khmer family, while the Jrai and the Rhade belong to the Malayo-Polynesian language family. These groups once occupied the fertile plateaus and left a cultural and artistic heritage which affected the remaining small groups. Despite the difference in ethnic origins, these groups seem to share similar beliefs, customs and related folk arts after a very long time living beside each other. However, through an in-depth study, this paper points out that the decorative patterns used by the Bahnar are different from those of the other ethnic groups, especially in color. Based on historical materials from the local museums and on some studies from the 1980s, when all of the ethnic groups in this area still lived in self-sufficient conditions, this paper characterizes the traditional color scheme used by the Bahnar and identifies the difference in the decorative motifs of this group compared to the others by pointing out that they do not use green in their usual decorative patterns. Moreover, combined with some recent field surveys and through comparative analysis, it also discovers stylistic variations of these patterns arising in the process of cultural exchange with the other ethnic groups, both in and out of the region, under modern living conditions. This study helps to preserve and promote the traditional values and cultural identity of the Bahnar people in the Central Highlands of Vietnam, avoiding the fusion of styles among groups during cultural exchange.
Keywords: Bahnar ethnic group, decorative patterns, the Central Highlands of Vietnam, traditional color scheme.