Search results for: cost efficiency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4249

139 Sliding Mode Power System Stabilizer for Synchronous Generator Stability Improvement

Authors: J. Ritonja, R. Brezovnik, M. Petrun, B. Polajžer

Abstract:

Many modern synchronous generators in power systems are extremely weakly damped. The reasons are cost optimization in machine construction and the introduction of additional control equipment into power systems. Oscillations of synchronous generators and the related stability problems of power systems are harmful and can lead to operational failures and to damage. The only practical way to increase damping of these unwanted oscillations is the implementation of power system stabilizers. A power system stabilizer generates an additional control signal which changes the synchronous generator field excitation voltage. Modern power system stabilizers are integrated into the static excitation systems of synchronous generators. Available commercial power system stabilizers are based on linear control theory. Due to the nonlinear dynamics of the synchronous generator, current stabilizers do not assure optimal damping of the synchronous generator's oscillations over the entire operating range. For that reason, the use of robust power system stabilizers that are suitable for the entire operating range is reasonable. Numerous robust techniques are applicable to power system stabilizers. In this paper, the use of sliding mode control for synchronous generator stability improvement is studied. On the basis of sliding mode theory, a robust power system stabilizer was developed. The main advantages of the sliding mode controller are simple realization of the control algorithm, robustness to parameter variations and rejection of disturbances. The advantage of the proposed sliding mode controller over a conventional linear controller was tested for damping of the synchronous generator oscillations over the entire operating range. The obtained results show improved damping over the entire operating range of the synchronous generator and an increase in power system stability. The proposed study contributes to progress in the development of advanced stabilizers, which will replace conventional linear stabilizers and improve the damping of synchronous generators.
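
The abstract describes a sliding mode stabilizing signal added to the excitation control. Below is a minimal sketch of that idea, assuming a simplified single-machine swing model; the parameters M, D, K_s, the surface slope c and the switching gain K_smc are illustrative assumptions, not the authors' design.

```python
import numpy as np

# Simplified single-machine swing model (illustrative parameters, not the authors' data):
#   d(d_delta)/dt = d_omega
#   M * d(d_omega)/dt = -D * d_omega - K_s * d_delta + u
M, D, K_s = 10.0, 0.8, 1.2      # inertia, damping, synchronizing coefficient (assumed)
c, K_smc = 2.0, 1.5             # sliding-surface slope and switching gain (assumed)

def smc_stabilizer(d_delta, d_omega):
    """Sliding mode stabilizing signal added to the excitation reference."""
    s = c * d_delta + d_omega           # sliding surface
    return -K_smc * np.tanh(s / 0.05)   # smoothed sign() to limit chattering

def simulate(use_smc=True, t_end=10.0, dt=1e-3):
    d_delta, d_omega, hist = 0.2, 0.0, []   # initial rotor-angle deviation
    for _ in range(int(t_end / dt)):
        u = smc_stabilizer(d_delta, d_omega) if use_smc else 0.0
        dd_omega = (-D * d_omega - K_s * d_delta + u) / M
        d_delta += d_omega * dt
        d_omega += dd_omega * dt
        hist.append(d_delta)
    return np.array(hist)

print("max |d_delta| after 5 s, no PSS :", np.abs(simulate(False)[5000:]).max())
print("max |d_delta| after 5 s, SMC PSS:", np.abs(simulate(True)[5000:]).max())
```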

Keywords: Control theory, power system stabilizer, robust control, sliding mode control, stability, synchronous generator.

138 Assessment of Path Loss Prediction Models for Wireless Propagation Channels at L-Band Frequency over Different Micro-Cellular Environments of Ekiti State, Southwestern Nigeria

Authors: C. I. Abiodun, S. O. Azi, J. S. Ojo, P. Akinyemi

Abstract:

The design of accurate and reliable mobile communication systems depends largely on the suitability of path loss prediction methods and the adaptability of these methods to the various environments of interest. In this research, the results of the adaptability of radio channel behavior are presented based on practical measurements carried out in the 1800 MHz frequency band. The measurements were carried out in typical urban, suburban and rural environments in Ekiti State, in the southwestern part of Nigeria. A total of seven base stations of the MTN GSM service located in the studied environments were monitored. Path loss and break-point distances were deduced from the measured received signal strength (RSS), and a practical path loss model is proposed based on the deduced break-point distances. The proposed two-slope model, a regression line and four existing path loss models were compared with the measured path loss values. The standard deviation of each model with respect to the measured path loss was estimated for each base station. The proposed model and the regression line exhibited the lowest standard deviations, followed by the Cost231-Hata model, when compared with the Erceg, Ericsson and SUI models. Generally, the proposed two-slope model shows the closest agreement with the measured values, with mean errors of 2 to 6 dB. These results show that either the proposed two-slope model or the Cost 231-Hata model may be used to predict path loss values for mobile micro-cell coverage in the considered environments. Information from this work will be useful for the link design of microwave-band wireless access systems in the region.
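
The two-slope (break-point) model and the error statistics mentioned above can be sketched as follows; the break-point distance, exponents and "measured" values are placeholders, not the paper's fitted results.

```python
import numpy as np

def two_slope_path_loss(d, d_bp, pl_bp, n1, n2):
    """Two-slope model: exponent n1 before the break-point distance d_bp, n2 after.
    d and d_bp in metres; pl_bp is the path loss at the break point in dB (assumed values)."""
    d = np.asarray(d, dtype=float)
    before = pl_bp + 10 * n1 * np.log10(d / d_bp)
    after = pl_bp + 10 * n2 * np.log10(d / d_bp)
    return np.where(d <= d_bp, before, after)

# Hypothetical measured data (distance in m, path loss in dB) -- placeholders only
d_meas = np.array([50, 100, 200, 400, 800, 1600])
pl_meas = np.array([92.0, 101.5, 110.0, 121.0, 133.5, 147.0])

pl_pred = two_slope_path_loss(d_meas, d_bp=300.0, pl_bp=116.0, n1=2.8, n2=3.8)
error = pl_meas - pl_pred
print("mean error (dB):", error.mean().round(2))
print("std. deviation (dB):", error.std(ddof=1).round(2))
```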

Keywords: Break-point distances, path loss models, path loss exponent, received signal strength.

137 Multidimensional Performance Tracking

Authors: C. Ardil

Abstract:

In this study, a model, together with a software tool that implements it, has been developed to determine the performance ratings of employees in an organization operating in the information technology sector, using indicators obtained from the employees' online study data. The Weighted Sum (WS) method and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), both based on a multidimensional decision making approach, were used in the study. WS and TOPSIS are multidimensional decision making (MDDM) methods that allow all dimensions to be evaluated together with specific weights, so that the problem of online performance tracking of employees can be evaluated objectively. The application of the WS and TOPSIS mathematical methods, which can combine alternatives with a large number of dimensions and reach a simultaneous solution, has been implemented through online performance tracking software. In applying the WS and TOPSIS methods, objective dimension weights were calculated using the entropy information (EI) and standard deviation (SD) methods from the data obtained by the employees' online performance tracking, a decision matrix was formed using the performance scores for each employee, and a single performance score was calculated for each employee. Based on the calculated performance score, a performance evaluation decision was made for each employee. The results of the Pareto set evidence and a comparative mathematical analysis validate that the employees' performance preference rankings obtained with the WS and TOPSIS methods are closely related. This suggests the compatibility, applicability, and validity of the proposed method for MDDM problems in which a large number of alternative and dimension types are taken into account. With this study, an objective, realistic, feasible and understandable mathematical method, together with a software tool that implements it, has been demonstrated. This is considered preferable because of the subjectivity, limitations and high cost of the methods traditionally used for measurement and performance appraisal in the information technology sector.
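
A compact sketch of the entropy-weight, TOPSIS and weighted-sum steps described above, applied to a synthetic decision matrix; the employee scores and dimensions are invented for illustration only.

```python
import numpy as np

# Synthetic decision matrix: rows = employees, columns = performance dimensions (all benefit-type)
X = np.array([[70, 4.2, 15],
              [85, 3.9, 20],
              [60, 4.8, 12],
              [90, 4.1, 18]], dtype=float)

# 1) Entropy-information weights
P = X / X.sum(axis=0)                               # column-wise proportions
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # entropy of each dimension
w = (1 - E) / (1 - E).sum()                         # higher dispersion -> higher weight

# 2) TOPSIS ranking
R = X / np.sqrt((X ** 2).sum(axis=0))               # vector normalisation
V = R * w                                           # weighted normalised matrix
ideal, anti = V.max(axis=0), V.min(axis=0)          # ideal / anti-ideal solutions
d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)                 # single performance score per employee

# 3) Weighted-sum score on min-max normalised data, for comparison with TOPSIS
N = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
ws_score = (N * w).sum(axis=1)

print("entropy weights:", w.round(3))
print("TOPSIS ranking (best first):", np.argsort(-closeness) + 1)
print("WS ranking (best first)    :", np.argsort(-ws_score) + 1)
```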

Keywords: Weighted sum, entropy information, standard deviation, online performance tracking, performance evaluation, performance management, multidimensional decision making.

136 Optimizing Organizational Performance: The Critical Role of Headcount Budgeting in Strategic Alignment and Financial Stability

Authors: Shobhit Mittal

Abstract:

Headcount budgeting stands as a pivotal element in organizational financial management, extending beyond traditional budgeting to encompass strategic resource allocation for workforce-related expenses. This process is integral to maintaining financial stability and fostering a productive workforce, requiring a comprehensive analysis of factors such as market trends, business growth projections, and evolving workforce skill requirements. It demands a collaborative approach, primarily involving Human Resources (HR) and finance departments, to align workforce planning with an organization's financial capabilities and strategic objectives. The dynamic nature of headcount budgeting necessitates continuous monitoring and adjustment in response to economic fluctuations, business strategy shifts, technological advancements, and market dynamics. Its significance in talent management is also highlighted, aligning financial planning with talent acquisition and retention strategies to ensure a competitive edge in the market. The consequences of incorrect headcount budgeting are explored, showing how it can lead to financial strain, operational inefficiencies, and hindered strategic objectives. Examining case studies like IBM's strategic workforce rebalancing and Microsoft's shift for long-term success, the importance of aligning headcount budgeting with organizational goals is underscored. These examples illustrate that effective headcount budgeting transcends its role as a financial tool, emerging as a strategic element crucial for an organization's success. This necessitates continuous refinement and adaptation to align with evolving business goals and market conditions, highlighting its role as a key driver in organizational success and sustainability.

Keywords: Strategic planning, fiscal budget, headcount planning, resource allocation, financial management, decision-making, operational efficiency, risk management, headcount budget.

135 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm

Authors: A. El Harraj, N. Raissouni

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection and tracking is background subtraction. Many background subtraction approaches have been suggested, but they are sensitive to illumination changes, and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and focus mainly on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing invariance to illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. It thus mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K = 5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation to remove possible noise. For the experimental tests, we used a standard dataset to evaluate the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
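
A hedged sketch of the described pipeline using OpenCV: CLAHE on the luminance channel, a per-pixel Gaussian mixture background model, and morphological clean-up. The video file name and all parameter values are assumptions, and OpenCV's MOG2 subtractor stands in for the paper's own GMM implementation.

```python
import cv2

# Illustrative pipeline (parameter values are assumptions, not the paper's tuning)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
mog = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16, detectShadows=False)
mog.setNMixtures(5)                                   # K = 5 Gaussians per pixel, as in the paper
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

cap = cv2.VideoCapture("surveillance.avi")            # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 1) Illumination-invariance step: CLAHE applied to the luminance channel
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    frame_eq = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    # 2) Background/foreground modelling with a per-pixel Gaussian mixture
    mask = mog.apply(frame_eq)
    # 3) Morphological erosion + dilation to remove isolated noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.dilate(mask, kernel, iterations=1)
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:                          # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```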

Keywords: Video surveillance, background subtraction, Contrast Limited Adaptive Histogram Equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes.

134 GridNtru: High Performance PKCS

Authors: Narasimham Challa, Jayaram Pradhan

Abstract:

Cryptographic algorithms play a crucial role in the information society by providing protection from unauthorized access to sensitive data. It is clear that information technology will become increasingly pervasive; hence we can expect the emergence of ubiquitous or pervasive computing and ambient intelligence. These new environments and applications will present new security challenges, and there is no doubt that cryptographic algorithms and protocols will form part of the solution. The efficiency of a public key cryptosystem is mainly measured in computational overhead, key size and bandwidth. In particular, the RSA algorithm is used in many applications for providing security. Although the security of RSA is beyond doubt, the evolution in computing power has caused a growth in the necessary key length. The fact that most chips on smart cards cannot process keys exceeding 1024 bits shows that there is a need for an alternative. NTRU is such an alternative: it is a collection of mathematical algorithms based on manipulating lists of very small integers and polynomials. This allows NTRU to achieve high speeds with minimal computing power. NTRU (Nth degree Truncated Polynomial Ring Unit) is the first secure public key cryptosystem not based on the factorization or discrete logarithm problem. This means that, with any realistic amount of computational resources and time, an adversary should not be able to break the key. Multi-party communication and the requirement of optimal resource utilization create a present-day demand for applications that need security enforcement techniques and can be enhanced with high-end computing. This has prompted us to develop high-performance NTRU schemes using approaches such as high-end computing hardware. Peer-to-peer (P2P) or enterprise grids are proven approaches for developing high-end computing systems, and by utilizing them one can improve the performance of NTRU through parallel execution. In this paper we propose and develop an application for NTRU using the enterprise grid middleware Alchemi. An analysis and comparison of its performance for various text files is presented.
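
NTRU's core operation is convolution multiplication in the truncated polynomial ring Z_q[x]/(x^N - 1). The toy sketch below shows only that ring arithmetic with parameters far too small to be secure; it is not the GridNtru/Alchemi implementation. Independent encryptions of different data blocks are the kind of work a grid can execute in parallel.

```python
import numpy as np

def ring_multiply(a, b, N, q):
    """Convolution product of two polynomials in Z_q[x]/(x^N - 1),
    the arithmetic NTRU is built on (toy parameters, not a secure key set)."""
    c = np.zeros(N, dtype=np.int64)
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] += a[i] * b[j]   # exponents wrap around modulo N
    return c % q

# Tiny illustrative parameters -- real NTRU uses much larger N (e.g. several hundred)
N, q = 11, 32
rng = np.random.default_rng(0)
f = rng.integers(-1, 2, N)                  # small ternary polynomial (private-key style)
m = rng.integers(-1, 2, N)                  # small "message" polynomial
print("f * m mod (x^N - 1, q):", ring_multiply(f, m, N, q))
```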

Keywords: Alchemi, GridNtru, Ntru, PKCS.

133 Study on Planning of Smart GRID using Landscape Ecology

Authors: Sunglim Lee, Susumu Fujii, Koji Okamura

Abstract:

The smart grid is a new approach to the electric power grid that uses information and communications technology to control it. A smart grid provides real-time control of the electric power grid, controlling the direction of power flow or the timing of the flow. Control devices are installed on the power lines of the electric power grid to implement the smart grid. The number of control devices should be determined in relation to the area one control device covers and the cost associated with the control devices. One approach to determining the number of control devices is to use data on the surplus power generated by home solar generators. In current implementations, the surplus power is sent all the way to the power plant, which may cause power loss. To reduce this loss, the surplus power may instead be sent to a control device and forwarded from the control device to where the power is needed. Under the assumption that the control devices are installed on a lattice of equal-size squares, our goal is to determine the optimal spacing between the control devices, where the power-sharing area (the area covered by one control device) is kept small to avoid power loss, while at the same time being big enough that no surplus power is wasted. To achieve this goal, a simulation using a landscape ecology method is conducted on a sample area. First, an aerial photograph of the land of interest is turned into a mosaic map where each area is colored according to the ratio of the amount of power production to the amount of power consumption in that area. The amount of power consumption is estimated according to the characteristics of the buildings in the area. The power production is calculated from the total roof area visible in the aerial photograph, assuming that solar panels are installed on all roofs. The mosaic map is colored in three colors, representing producer, consumer, and neither. We started with a mosaic map with a 100 m grid size, and the grid size is grown until no red grid cell remains. One control device is installed on each grid cell, so that the cell is the area which that control device covers. As the result of this simulation, we obtained 350 m as the optimal spacing between control devices that makes effective use of the surplus power for the sample area.
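
The grid-growing step can be sketched as follows: aggregate the 100 m cells of a production-minus-consumption map into ever larger blocks until no block has a net deficit. The map here is synthetic and the kW values are assumptions, not the sample area's data.

```python
import numpy as np

def smallest_balanced_grid(net_power, base_cell_m=100, max_factor=10):
    """Grow the aggregation block (in multiples of the base cell) until every block
    has non-negative net power, i.e. no deficit block remains. net_power is a 2-D map
    of production minus consumption per 100 m cell (synthetic values here)."""
    rows, cols = net_power.shape
    for k in range(1, max_factor + 1):
        balanced = True
        for r in range(0, rows, k):
            for c in range(0, cols, k):
                if net_power[r:r + k, c:c + k].sum() < 0:
                    balanced = False
                    break
            if not balanced:
                break
        if balanced:
            return k * base_cell_m
    return None

rng = np.random.default_rng(1)
net = rng.normal(loc=1.0, scale=3.0, size=(20, 20))   # kW surplus/deficit per cell (assumed)
print("smallest balanced control-device spacing (m):", smallest_balanced_grid(net))
```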

Keywords: Landscape ecology, IT, smart grid, aerial photograph, simulation.

132 Clinical Comparative Study Comparing Efficacy of Intrathecal Fentanyl and Magnesium as an Adjuvant to Hyperbaric Bupivacaine in Mild Pre-Eclamptic Patients Undergoing Caesarean Section

Authors: Sanchita B. Sarma, M. P. Nath

Abstract:

Adequate analgesia following caesarean section decreases morbidity, hastens ambulation, improves patient outcome and facilitates care of the newborn. Intrathecal magnesium, an NMDA antagonist, has been shown to prolong analgesia without significant side effects in healthy parturients. The aim of this study was to evaluate the onset and duration of sensory and motor block, hemodynamic effect, postoperative analgesia, and adverse effects of magnesium or fentanyl given intrathecally with hyperbaric 0.5% bupivacaine in patients with mild preeclampsia undergoing caesarean section. Sixty women with mild preeclampsia undergoing elective caesarean section were included in a prospective, double blind, controlled trial. Patients were randomly assigned to receive spinal anesthesia with 2 mL 0.5% hyperbaric bupivacaine with 12.5 μg fentanyl (group F) or 0.1 ml of 50% magnesium sulphate (50 mg) (group M) with 0.15ml preservative free distilled water. Onset, duration and recovery of sensory and motor block, time to maximum sensory block, duration of spinal anaesthesia and postoperative analgesic requirements were studied. Statistical comparison was carried out using the Chi-square or Fisher’s exact tests and Independent Student’s t-test where appropriate. The onset of both sensory and motor block was slower in the magnesium group. The duration of spinal anaesthesia (246 vs. 284) and motor block (186.3 vs. 210) were significantly longer in the magnesium group. Total analgesic top up requirement was less in group M. Hemodynamic parameters were similar in both the groups. Intrathecal magnesium caused minimal side effects. Since Fentanyl and other opioid congeners are not available throughout the country easily, magnesium with its easy availability and less side effect profile can be a cost effective alternative to fentanyl in managing pregnancy induced hypertension (PIH) patients given along with Bupivacaine intrathecally in caesarean section.

Keywords: Analgesia, magnesium, preeclampsia, spinal anaesthesia.

131 The Onset of Ironing during Casing Expansion

Authors: W. Assaad, D. Wilmink, H. R. Pasaribu, H. J. M. Geijselaers

Abstract:

Shell has developed a mono-diameter well concept for oil and gas wells, as opposed to the traditional telescopic well design. A mono-diameter well design allows a well to have a single inner diameter from the surface all the way down to the reservoir, which increases production capacity, reduces material cost and reduces the environmental footprint. This is achieved by expanding the liners (casing strings) concerned using an expansion tool (e.g. a cone). Since the well is drilled in stages and liners are inserted to support the borehole, overlap sections between consecutive liners exist which must also be expanded. At an overlap, the previously inserted casing, which can be expanded or unexpanded, is called the host casing, and the newly inserted casing is called the expandable casing. When the cone enters the overlap section, the expandable casing is expanded against the host casing, a cured cement layer and the formation. In overlap expansion, ironing, or lengthening, may appear instead of shortening in the expandable casing when the pressure exerted by the host casing, cured cement layer and formation exceeds a certain limit. This pressure is related to the cement strength, the thickness of the cement layer, the mechanical properties of the host casing material, the host casing thickness, the formation type and the formation strength. Ironing can cause complications that hinder the deployment of the technology; therefore, understanding ironing becomes essential. A physical model has been built in-house to calculate expansion forces, stresses, strains and post-expansion casing dimensions under different conditions. In this study, only free casing expansion and the overlap expansion of two casings are addressed, while the cement and formation will be incorporated in a future study. Since the axial strain can be predicted by the physical model, the onset of ironing can be confirmed. In addition, this model helps in understanding ironing and the parameters influencing it. Finally, the physical model is validated with Finite Element (FE) simulations and small-scale experiments. The results of the study confirm that high pressure leads to ironing when the casing is expanded in tension mode.

Keywords: Casing expansion, cement, formation, metal forming, plasticity, well design.

130 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim. N. Mohammed, Mohammed. Y. Esmail

Abstract:

In recent decades, medical imaging has been dominated by the use of costly film media for the review and archiving of medical investigations. However, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has emerged. Web technologies have been used successfully in telemedicine applications, and the combination of web technologies with DICOM was used to design a web-based, open-source DICOM viewer. The web server allows the query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic web page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, it is platform independent, it allows images to be displayed and manipulated efficiently, and it is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is used, in which a 2-D discrete wavelet transform decomposes the image and the wavelet coefficients are transmitted with entropy encoding after thresholding, to decrease transmission time, storage cost and capacity. The performance of the compression was estimated using image quality metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
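
A sketch of the wavelet compression and quality metrics described, using the PyWavelets package with the 'coif3' filter; the thresholding policy and the random stand-in image are assumptions, not the paper's data or code.

```python
import numpy as np
import pywt

def compress(img, wavelet="coif3", level=3, keep=0.05):
    """Sketch of threshold-based wavelet compression: decompose with a 2-D DWT,
    zero the smallest coefficients, reconstruct, and report MSE, PSNR and the
    fraction of coefficients discarded (a simple compression-ratio proxy)."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)            # keep the largest 5 % of coefficients
    arr_t = pywt.threshold(arr, thresh, mode="hard")
    rec = pywt.waverec2(pywt.array_to_coeffs(arr_t, slices, output_format="wavedec2"), wavelet)
    rec = rec[:img.shape[0], :img.shape[1]]
    mse = np.mean((img.astype(float) - rec) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse)
    cr = 100 * (1 - np.count_nonzero(arr_t) / arr_t.size)
    return mse, psnr, cr

img = np.random.default_rng(0).random((256, 256)) * 255    # stand-in for a DICOM slice
print("MSE %.2f  PSNR %.1f dB  coefficients discarded %.1f%%" % compress(img))
```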

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN.

129 Hydrogen and Diesel Combustion on a Single Cylinder Four Stroke Diesel Engine in Dual Fuel mode with Varying Injection Strategies

Authors: Probir Kumar Bose, Rahul Banerjee, Madhujit Deb

Abstract:

The present energy situation and concerns about global warming have stimulated active research interest in non-petroleum, carbon-free compounds and non-polluting fuels, particularly for the transportation, power generation, and agricultural sectors. Environmental concerns and the limited amount of petroleum fuels have generated interest in the development of alternative fuels for internal combustion (IC) engines. Petroleum crude reserves, however, are declining, and the consumption of transport fuels, particularly in developing countries, is increasing at a high rate. A severe shortage of liquid fuels derived from petroleum may be faced in the second half of this century. Increasingly stringent environmental regulations recently enacted in the USA and Europe have led to research and development activities on clean alternative fuels. Among the gaseous fuels, hydrogen is considered one of the cleanest alternatives and is an interesting candidate for future internal combustion engine based power trains. In this experimental investigation, performance and combustion analyses were carried out on a direct injection (DI) diesel engine using hydrogen with diesel, following the TMI (Time Manifold Injection) technique at injection timings of 10°, 45° and 80° ATDC, with the injection durations controlled by an electronic control unit (ECU). The tests were carried out at a constant speed of 1500 rpm under different load conditions, and it was observed that brake thermal efficiency increases with increasing load, with a maximum gain of 15% at full load for all hydrogen injection strategies. It was also observed that with an increase in the hydrogen energy share, the brake-specific energy consumption (BSEC) started to reduce, by a maximum of 9% compared to baseline diesel at 10° ATDC injection during maximum injection, demonstrating the exceptional combustion properties of hydrogen.
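
The reported quantities follow standard dual-fuel arithmetic: brake thermal efficiency is brake power divided by the total fuel energy rate, and BSEC is the fuel energy supplied per unit of brake work. A sketch with placeholder operating numbers, not the paper's measurements:

```python
# Typical dual-fuel performance arithmetic (illustrative numbers, not the paper's data)
LHV_DIESEL = 42.5e6   # J/kg, lower heating value of diesel (approximate)
LHV_H2     = 120.0e6  # J/kg, lower heating value of hydrogen (approximate)

def dual_fuel_metrics(brake_power_kw, m_diesel_kg_h, m_h2_kg_h):
    """Return brake thermal efficiency (%), BSEC (MJ/kWh) and hydrogen energy share (%)."""
    fuel_power_w = (m_diesel_kg_h * LHV_DIESEL + m_h2_kg_h * LHV_H2) / 3600.0
    bte = 100.0 * brake_power_kw * 1e3 / fuel_power_w
    bsec = (fuel_power_w * 3600.0 / 1e6) / brake_power_kw   # MJ of fuel energy per kWh of brake work
    h2_share = 100.0 * m_h2_kg_h * LHV_H2 / (m_diesel_kg_h * LHV_DIESEL + m_h2_kg_h * LHV_H2)
    return bte, bsec, h2_share

bte, bsec, share = dual_fuel_metrics(brake_power_kw=3.5, m_diesel_kg_h=0.9, m_h2_kg_h=0.05)
print(f"H2 energy share {share:.1f}%  BTE {bte:.1f}%  BSEC {bsec:.2f} MJ/kWh")
```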

Keywords: Hydrogen, performance, combustion, alternative fuels.

128 Comparison of Detached Eddy Simulations with Turbulence Modeling

Authors: Muhammad Amjad Sohail, Prof. Yan Chao, Mukkarum Husain

Abstract:

The flow field around hypersonic vehicles is very complex and difficult to simulate. The boundary layers are squeezed between the shock layer and the body surface. Resolving the boundary layer, the shock wave and the turbulent regions where the flow field has steep gradients is difficult. Detached eddy simulation (DES) is a modification of a RANS model in which the model switches to a subgrid-scale formulation in regions fine enough for LES calculations. Regions near solid body boundaries, where the turbulent length scale is less than the maximum grid dimension, are assigned the RANS mode of solution; as the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. Therefore, the grid resolution is not as demanding as for pure LES, thereby considerably cutting down the cost of the computation. In this research study, hypersonic flow is simulated at Mach 8 and different angles of attack to resolve the boundary layers and discontinuities properly. The flow is also simulated in the long wake regions. The mesh is somewhat different from that of RANS simulations: it is made dense near the boundary layers and in the wake regions to resolve them properly. Hypersonic blunt cone-cylinder bodies with frustum angles of 5° and 10° are simulated, and an aerodynamic study is performed to calculate the aerodynamic characteristics of the different geometries. The results are then compared with experimental data as well as with a turbulence model (the SA model). The results achieved with the DES simulation have very good resolution as well as excellent agreement with the experimental and available data. Unsteady DES calculations are performed using the dual time stepping method, i.e. implicit time stepping. The simulations are performed at Mach number 8 and angles of attack from 0° to 10° for all cases. The results and resolution obtained with the DES model were found to be much better than with the SA turbulence model.
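
The RANS/LES switch described above can be illustrated with the classic DES97 length-scale limiter, d_tilde = min(d, C_DES·Δ), where d is the wall distance (the RANS length scale of the SA model) and Δ the largest local cell dimension. The cell sizes below are invented for illustration.

```python
import numpy as np

C_DES = 0.65   # standard DES constant for the Spalart-Allmaras based DES97 formulation

def des_length_scale(wall_distance, dx, dy, dz):
    """DES97 switch: the RANS length scale (wall distance for the SA model) is limited
    by C_DES times the largest local cell dimension. Where the grid is fine enough,
    the limiter is active and the model behaves as an LES subgrid-scale model."""
    delta = np.maximum.reduce([dx, dy, dz])            # largest grid spacing in each cell
    d_tilde = np.minimum(wall_distance, C_DES * delta)
    mode = np.where(wall_distance <= C_DES * delta, "RANS", "LES")
    return d_tilde, mode

# Illustrative cells: a near-wall cell vs. a coarse wake cell (spacings in metres, assumed)
d, mode = des_length_scale(wall_distance=np.array([1e-4, 0.05]),
                           dx=np.array([2e-3, 2e-3]),
                           dy=np.array([1e-5, 5e-3]),
                           dz=np.array([2e-3, 2e-3]))
print(list(zip(d.round(6), mode)))
```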

Keywords: Detached eddy simulation, dual time stepping, hypersonic flow, turbulence modeling

127 Application of Recycled Tungsten Carbide Powder for Fabrication of Iron Based Powder Metallurgy Alloy

Authors: Yukinori Taniguchi, Kazuyoshi Kurita, Kohei Mizuta, Keigo Nishitani, Ryuichi Fukuda

Abstract:

Tungsten carbide is widely used as a tool material in metal manufacturing processes. Since tungsten is a typical rare metal, establishing a recycling process for tungsten carbide tools and restoring them into cemented carbide material would have a great impact on the metal manufacturing industry. Recently, recycling processes for tungsten carbide have been developed and established gradually. However, the quality demands on cemented carbide tools are quite severe, because hardness, toughness, anti-wear ability, heat resistance, fatigue strength and so on must be guaranteed for precision machining and tool life. Currently, it is difficult to restore recycled tungsten carbide powder entirely as a raw material for newly processed cemented carbide tools. In this study, to suggest a positive use of recycled tungsten carbide powder, we have tried to fabricate a carbon-based sintered steel whose mechanical properties are reinforced with recycled tungsten carbide powder. We prepared a set of newly designed sintered steels and conducted compression tests on sintered specimens with a density ratio of 0.85 (which means 15% porosity inside). As a result, at least 1.7 times higher nominal strength was obtained at a recycled WC powder content of 7.0 wt.%. The strength reached over 600 MPa for the Fe-WC-Co-Cu sintered alloy. Wear tests were conducted with a ball-on-disk type friction tester using a 5 mm diameter ball at a normal force of 2 N under dry conditions. The wear after a 1,000 m running distance indicates about 1.5 times longer life for the designed sintered alloy. Since the results of the tensile tests showed the same tendency as the previous tests, it is concluded that the designed sintered alloy can be used for several mechanical parts requiring special strength and anti-wear ability at relatively low cost, owing to the recycled tungsten carbide powder.

Keywords: Tungsten carbide, recycle process, compression test, powder metallurgy, anti-wear ability.

126 Two-Level Identification of HVAC Consumers for Demand Response Potential Estimation Based on Setpoint Change

Authors: M. Naserian, M. Jooshaki, M. Fotuhi-Firuzabad, M. Hossein Mohammadi Sanjani, A. Oraee

Abstract:

In recent years, the development of communication infrastructure and smart meters have facilitated the utilization of demand-side resources which can enhance stability and economic efficiency of power systems. Direct load control programs can play an important role in the utilization of demand-side resources in the residential sector. However, investments required for installing control equipment can be a limiting factor in the development of such demand response programs. Thus, selection of consumers with higher potentials is crucial to the success of a direct load control program. Heating, ventilation, and air conditioning (HVAC) systems, which due to the heat capacity of buildings feature relatively high flexibility, make up a major part of household consumption. Considering that the consumption of HVAC systems depends highly on the ambient temperature and bearing in mind the high investments required for control systems enabling direct load control demand response programs, in this paper, a solution is presented to uncover consumers with high air conditioner demand among a large number of consumers and to measure the demand response potential of such consumers. This can pave the way for estimating the investments needed for the implementation of direct load control programs for residential HVAC systems and for estimating the demand response potentials in a distribution system. In doing so, we first cluster consumers into several groups based on the correlation coefficients between hourly consumption data and hourly temperature data using K-means algorithm. Then, by applying a recent algorithm to the hourly consumption and temperature data, consumers with high air conditioner consumption are identified. Finally, demand response potential of such consumers is estimated based on the equivalent desired temperature setpoint changes.
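
A sketch of the first identification stage on synthetic data: each consumer is reduced to the correlation between its hourly load and the hourly temperature, and K-means groups the consumers; the cluster with the highest mean correlation is flagged as high air-conditioner demand. The load model and all parameters are assumptions, not the paper's data or algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
hours = 24 * 30
temperature = 25 + 8 * np.sin(np.linspace(0, 12 * np.pi, hours)) + rng.normal(0, 1, hours)

# Synthetic consumers: some with temperature-driven (HVAC-like) load, some without
n_consumers = 200
hvac_like = rng.random(n_consumers) < 0.4
load = rng.normal(1.0, 0.2, (n_consumers, hours))
load[hvac_like] += 0.08 * np.maximum(temperature - 24, 0)   # cooling load above ~24 °C

# Stage 1: one feature per consumer -- correlation of hourly load with hourly temperature
corr = np.array([np.corrcoef(load[i], temperature)[0, 1] for i in range(n_consumers)])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(corr.reshape(-1, 1))

# The cluster with the highest mean correlation is flagged as high-AC-demand consumers
ac_cluster = np.argmax([corr[labels == k].mean() for k in range(3)])
print("consumers flagged as high AC demand:", int((labels == ac_cluster).sum()),
      "| truly HVAC-like consumers:", int(hvac_like.sum()))
```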

Keywords: Data-driven analysis, demand response, direct load control, HVAC system.

125 Urban Corridor Management Strategy Based on Intelligent Transportation System

Authors: Sourabh Jain, Sukhvir Singh Jain, Gaurav V. Jain

Abstract:

Intelligent Transportation System (ITS) is the application of technology for developing a user-friendly transportation system for urban areas in developing countries. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities, through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. This paper presents past studies of the several ITS deployments that have been successful in urban corridors in India and abroad, in order to understand the current scenario and the methodology considered for the planning, design, and operation of traffic management systems. This paper also presents the effort made to interpret and evaluate the performance of a 27.4 km long study corridor having eight intersections and four flyovers. The corridor consists of a divided road network with six- and eight-lane sections. Two categories of data were collected in February 2016: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, a radar gun, mobile GPS and a stopwatch. From the analysis, the performance interpretations carried out were the identification of peak and off-peak hours, congestion and level of service (LOS) at mid-blocks, and delay, followed by plotting speed contours and recommending urban corridor management strategies. From the analysis, it is found that ITS-based urban corridor management strategies will be useful to reduce congestion, fuel consumption and pollution, so as to provide comfort and efficiency to users. The paper presents urban corridor management strategies based on sensors incorporated in both the vehicles and the roads.

Keywords: Congestion, ITS Strategies, Mobility, Safety.

124 Milling Simulations with a 3-DOF Flexible Planar Robot

Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden

Abstract:

Manufacturing technologies are becoming continuously more diversified over the years. The increasing use of robots for various applications such as assembling, painting, welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine-tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure ensures them a great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints restricting their use to applications involving low cutting forces especially finishing operations. Vibratory instabilities may also happen while machining and deteriorate the precision leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator in order to find optimized cutting parameters in terms of depth of cut or feed per tooth for example. The simulation environment combines an in-house milling routine (DyStaMill) achieving the computation of cutting forces and material removal with an in-house multibody library (EasyDyn) which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector submitted to milling forces is controlled through an inverse kinematics scheme while controlling the position of its joints separately. Each joint is actuated through a servomotor for which the transfer function has been computed in order to tune the corresponding controller. The output results feature the evolution of the cutting forces when the robot structure is deformable or not and the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented. The consideration of the links flexibility has highlighted an increase of the cutting forces magnitude. This proof of concept will aim to enrich the database of results in robotic machining for potential improvements in production.
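
For a planar 3-DOF arm of the kind modelled here, the inverse kinematics used to command the joints for a desired tool position and orientation can be written analytically. The sketch below assumes rigid links with placeholder lengths; it is not the authors' EasyDyn/DyStaMill code and ignores the link flexibility studied in the paper.

```python
import numpy as np

L1, L2, L3 = 0.5, 0.4, 0.1              # link lengths in metres (assumed values)

def planar_3dof_ik(x, y, phi, elbow=+1):
    """Analytic inverse kinematics of a planar RRR arm: reach tool point (x, y)
    with tool orientation phi. Returns joint angles (q1, q2, q3) in radians."""
    # Wrist centre: step back from the tool point along the tool direction
    xw, yw = x - L3 * np.cos(phi), y - L3 * np.sin(phi)
    c2 = (xw**2 + yw**2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target outside the workspace")
    q2 = elbow * np.arccos(c2)
    q1 = np.arctan2(yw, xw) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    q3 = phi - q1 - q2
    return q1, q2, q3

def forward(q1, q2, q3):
    x = L1*np.cos(q1) + L2*np.cos(q1+q2) + L3*np.cos(q1+q2+q3)
    y = L1*np.sin(q1) + L2*np.sin(q1+q2) + L3*np.sin(q1+q2+q3)
    return x, y, q1 + q2 + q3

q = planar_3dof_ik(0.6, 0.3, np.deg2rad(30))
print("joint angles (deg):", np.rad2deg(q).round(2))
print("check forward kinematics:", np.round(forward(*q), 4))
```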

Keywords: Control, machining, multibody, robotic, simulation.

123 Vocational Skills, Recognition of Prior Learning and Technology: The Future of Higher Education

Authors: Shankar Subramanian Iyer

Abstract:

Vocational education, enhanced by technology and Recognition of Prior Learning (RPL), is going to be the main ingredient of the future of education. This follows from various issues with the current educational system, such as cost, time, type of course, type of curriculum, and unemployment, to name the major ones. Most millennials prefer to perform and learn rather than learn how to perform. This is the essence of vocational education, in any field from cooking, painting and plumbing to modern technologies using computers. Even a more theoretical course like entrepreneurship can be taught by being an entrepreneur and learning its nuances. The best way to learn accountancy is actually keeping accounts for a small business or grocer and learning the ropes of accountancy and finance. The purpose of this study is to investigate the relationship of vocational skills, RPL and new technologies with future employability. The study implies that individuals' knowledge and skills are essential aspects to be emphasized in future education, and that credit should be given for prior experience for future employability. Virtual reality can be used to simulate workplace situations for vocational learning in fields such as hospitality, medical emergencies, healthcare, draughtsmanship, building inspection, quantity surveying and estimation, to name a few. All disruptions in future education, especially vocational education, are going to be technology driven with the advent of AI, ML, IoT, VR, VI, etc. Vocational education not only helps institutes cut costs drastically, but allows all students to have hands-on experience rather than being observers. The proposed contribution of this paper is the earlier experiential learning theory and the recent theory of knowledge- and skills-based learning, modified and applied to vocational education and the development of skills. Apart from a secondary research study of major scholarly articles and books, primary research using interviews and questionnaire surveys has been used to validate and test the reliability of the suggested model using the Partial Least Squares-Structural Equation Modeling (PLS-SEM) method, the factors being assimilated from an existing literature review. The major finding is that there is a strong relationship between vocational skills, RPL and new technologies and future employability, mediated by future employability skills.

Keywords: Vocational education, vocational skills, competencies, modern technologies, Recognition of Prior Learning, RPL.

122 Computer Modeling and Plant-Wide Dynamic Simulation for Industrial Flare Minimization

Authors: Sujing Wang, Song Wang, Jian Zhang, Qiang Xu

Abstract:

Flaring emissions during abnormal operating conditions such as plant start-ups, shut-downs, and upsets in chemical process industries (CPI) are usually significant. Flare minimization can help to save raw material and energy for CPI plants, and to improve local environmental sustainability. In this paper, a systematic methodology based on plant-wide dynamic simulation is presented for CPI plant flare minimizations under abnormal operating conditions. Since off-specification emission sources are inevitable during abnormal operating conditions, to significantly reduce flaring emission in a CPI plant, they must be either recycled to the upstream process for online reuse, or stored somewhere temporarily for future reprocessing, when the CPI plant manufacturing returns to stable operation. Thus, the off-spec products could be reused instead of being flared. This can be achieved through the identification of viable design and operational strategies during normal and abnormal operations through plant-wide dynamic scheduling, simulation, and optimization. The proposed study includes three stages of simulation works: (i) developing and validating a steady-state model of a CPI plant; (ii) transiting the obtained steady-state plant model to the dynamic modeling environment; and refining and validating the plant dynamic model; and (iii) developing flare minimization strategies for abnormal operating conditions of a CPI plant via a validated plant-wide dynamic model. This cost-effective methodology has two main merits: (i) employing large-scale dynamic modeling and simulations for industrial flare minimization, which involves various unit models for modeling hundreds of CPI plant facilities; (ii) dealing with critical abnormal operating conditions of CPI plants such as plant start-up and shut-down. Two virtual case studies on flare minimizations for start-up operation (over 50% of emission savings) and shut-down operation (over 70% of emission savings) of an ethylene plant have been employed to demonstrate the efficacy of the proposed study.

Keywords: Flare minimization, large-scale modeling and simulation, plant shut-down, plant start-up.

121 Multistage Condition Monitoring System of Aircraft Gas Turbine Engine

Authors: A. M. Pashayev, D. D. Askerov, C. Ardil, R. A. Sadiqov, P. S. Abdullayev

Abstract:

Research shows that the application of probability-statistical methods, especially at the early stages of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), when the flight information is fuzzy, limited and uncertain, is unfounded. Hence, the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To build a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Research into the changes of the skewness and kurtosis coefficient values shows that the distributions of the GTE operating parameters have a fuzzy character; hence, consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of the GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for the preliminary identification of the engines' technical condition. Research into the changes of the correlation coefficient values also shows their fuzzy character; therefore, the results of a Fuzzy Correlation Analysis are offered for model choice. When the information is sufficient, a recurrent algorithm for identifying the aviation GTE technical condition (using Hard Computing technology) from measurements of the input and output parameters of the multiple linear and non-linear generalised models, in the presence of measurement noise, is offered (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine's technical condition. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.
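
The recursive least squares identification mentioned for the multiple linear models can be sketched as follows; the regression problem is synthetic and the forgetting factor is an assumption, not the paper's algorithm or data.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.98):
    """One step of recursive least squares with forgetting factor lam:
    theta -- parameter estimate, P -- covariance, x -- regressor, y -- measured output."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.T @ P @ x)                 # gain vector
    theta = theta + (k * (y - x.T @ theta)).ravel() # correct estimate with prediction error
    P = (P - k @ x.T @ P) / lam                     # covariance update
    return theta, P

# Synthetic GTE-style regression: output = linear combination of measured inputs + noise
rng = np.random.default_rng(0)
true_theta = np.array([0.8, -0.3, 1.5])
theta, P = np.zeros(3), np.eye(3) * 1e3
for _ in range(500):
    x = rng.normal(size=3)
    y = x @ true_theta + rng.normal(scale=0.05)
    theta, P = rls_update(theta, P, x, y)
print("identified parameters:", theta.round(3), "  true:", true_theta)
```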

Keywords: aviation gas turbine engine, technical condition, fuzzy logic, neural networks, fuzzy statistics

120 Mapping the Core Processes and Identifying Actors along with Their Roles, Functions and Linkages in Trout Value Chain in Kashmir, India

Authors: Stanzin Gawa, Nalini Ranjan Kumar, Gohar Bilal Wani, Vinay Maruti Hatte, A. Vinay

Abstract:

Rainbow trout (Oncorhynchus mykiss) and brown trout (Salmo trutta fario), the two species of trout that were introduced by the British into the waters of Kashmir, have adapted well to the favorable climatic conditions. Cold-water fisheries are one of the emerging sectors in the Kashmir valley, and trout holds an important place in Jammu and Kashmir fisheries. Realizing the immense potential of trout culture in the Kashmir region, the state fisheries department started privatizing trout culture under the centrally funded RKVY scheme, under which, since 2009-10, it has provided an 80 percent subsidy for raceway construction and the supply of feed and seed for the first year; at present there are 362 private trout farms. To cater to the growing demand for trout in the valley, it is important to understand the bottlenecks faced in the propagation of trout culture. Value chain analysis provides a generic framework for understanding the various activities and processes; mapping and studying linkages is the first step that needs to be done in any value chain analysis. In Kashmir, it is found that trout hatcheries play a crucial role in ensuring the continuous supply of trout seed in the valley. Feed is the most limiting factor in trout culture, and the farmer has to incur high costs in paying for feed and transporting it from the feed mill to the farm. The lack of an aqua clinic in the Kashmir valley needs to be addressed. Brood stock maintenance, breeding and seed production, technical assistance to private farmers and extension services have to be strengthened, and there is a need to develop a healthier environment for new entrepreneurs. It was found that trout farmers do not avail themselves of credit facilities, as there is no well-defined credit scheme for fisheries in the state. The study showed weak institutional linkages. Research and development should focus more on applied science rather than basic science.

Keywords: Trout, Kashmir, value chain, linkages, culture.

119 Verifying the Supremacy of Volume Modulated Arc Therapy Over Intensity Modulated Radiation Therapy: Pelvis Malignancies’ Perspective

Authors: M. Umar Farooq, T. Ahmad Afridi, M. Zia-Ul-Islam Arsalan, U. Hussain Haider, S. Ullah

Abstract:

Cancer, a leading fatal disease worldwide, can be treated with various techniques, including radiation therapy, which involves the use of ionizing radiation to target cancer cells. On the basis of source placement, radiation therapy is of two types, i.e., brachytherapy and external beam radiotherapy (EBRT). EBRT has evolved from 2-D conventional therapy to 3-D conformal radiotherapy (3D-CRT) and then to intensity-modulated radiotherapy (IMRT). IMRT improves dose conformity and the sparing of organs at risk. Volumetric modulated arc therapy (VMAT) is a modern technique that delivers treatment in arcs with rotation of the gantry. In this report, a dosimetric comparison was performed between IMRT and VMAT. The study was conducted in the Radiotherapy Department of the Institute of Nuclear Medicine and Oncology Lahore (INMOL). Ten patients with prostate carcinoma were selected for this study to compare the methods. Simulation of these patients was done with the help of a CT simulator. All target volumes and organs were delineated by the oncologists. Then suitable fields/arcs were applied to cover the volumes effectively, followed by optimization of the plans for both techniques for every patient. Finally, a comparison of the evaluation parameters, e.g., conformity index (CI), volume coverage, homogeneity index (HI), organ doses, and MUs (monitor units), was performed. We obtained better target conformity indices with VMAT (CI = 1.16) than with IMRT (CI = 1.24). VMAT was better in organ sparing too. Also, VMAT requires fewer MUs (733 MUs) compared to IMRT (2149 MUs). From this study, it is concluded that VMAT is a better treatment technique than IMRT. This technique will enhance treatment efficiency, as it takes less time to obtain the required results. Also, a much lower scatter dose will be delivered to the patient.
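
The plan-quality arithmetic behind the comparison can be sketched as below. Definitions of CI and HI vary between protocols; one common convention of each is assumed here, the dose-volume numbers are placeholders, and only the MU figures are those quoted above.

```python
def conformity_index(v_prescription_cc, v_target_cc):
    """RTOG-style CI: volume covered by the prescription isodose / target volume (ideal = 1)."""
    return v_prescription_cc / v_target_cc

def homogeneity_index(d2, d98, d50):
    """ICRU 83-style HI: (D2% - D98%) / D50%, lower is more homogeneous (assumed convention)."""
    return (d2 - d98) / d50

# Placeholder dose-volume values -- not the study's data
print("VMAT CI:", round(conformity_index(93.0, 80.0), 2),
      " IMRT CI:", round(conformity_index(99.0, 80.0), 2))
print("VMAT HI:", round(homogeneity_index(79.5, 72.8, 76.0), 3),
      " IMRT HI:", round(homogeneity_index(80.4, 71.5, 76.0), 3))
# MU figures from the abstract: 733 (VMAT) vs. 2149 (IMRT)
print("MU reduction with VMAT: %.0f%%" % (100 * (1 - 733 / 2149)))
```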

Keywords: 2-D Conventional Radiotherapy, 3-D Conformal Radiotherapy, Intensity Modulated Radiotherapy, Prostate Carcinoma, Radiotherapy, Volumetric Modulated Arc Therapy.

118 Field Trial of Resin-Based Composite Materials for the Treatment of Surface Collapses Associated with Former Shallow Coal Mining

Authors: Philip T. Broughton, Mark P. Bettney, Isla L. Smail

Abstract:

Effective treatment of ground instability is essential when managing the impacts associated with historic mining. A field trial was undertaken by the Coal Authority to investigate the geotechnical performance and potential use of composite materials comprising resin and fill or stone to safely treat surface collapses, such as crown-holes, associated with shallow mining. Test pits were loosely filled with various granular fill materials. The fill material was injected with commercially available silicate and polyurethane resin foam products. In situ and laboratory testing was undertaken to assess the geotechnical properties of the resultant composite materials. The test pits were subsequently excavated to assess resin permeation. Drilling and resin injection was easiest through clean limestone fill materials. Recycled building waste fill material proved difficult to inject with resin; this material is thus considered unsuitable for use in resin composites. Incomplete resin permeation in several of the test pits created irregular ‘blocks’ of composite. Injected resin foams significantly improve the stiffness and resistance (strength) of the un-compacted fill material. The stiffness of the treated fill material appears to be a function of the stone particle size, its associated compaction characteristics (under loose tipping) and the proportion of resin foam matrix. The type of fill material is more critical than the type of resin to the geotechnical properties of the composite materials. Resin composites can effectively support typical design imposed loads. Compared to other traditional treatment options, such as cement grouting, the use of resin composites is potentially less disruptive, particularly for sites with limited access, and thus likely to achieve significant reinstatement cost savings. The use of resin composites is considered a suitable option for the future treatment of shallow mining collapses.

Keywords: Composite material, ground improvement, mining legacy, resin.

117 Using 3-Glycidoxypropyltrimethoxysilane Functionalized SiO2 Nanoparticles to Improve Flexural Properties of Glass Fibers/Epoxy Grid-Stiffened Composite Panels

Authors: Reza Eslami-Farsani, Hamed Khosravi, Saba Fayazzadeh

Abstract:

Lightweight and efficient structures aim to enhance the efficiency of components in various industries. Toward this end, composites are among the most widely used materials because of their durability, high strength and modulus, and low weight. One type of advanced composite is the grid-stiffened composite (GSC) structure, which has been extensively considered in the aerospace, automotive, and aircraft industries. GSC structures are among the top candidates for replacing some of the traditional components used in these fields. Although there are a good number of published surveys on the design aspects and fabrication of GSC structures, to our knowledge little systematic work has been reported on their material modification to improve their properties. Matrix modification using nanoparticles is an effective method to enhance the flexural properties of fibrous composites. In the present study, a silane coupling agent (3-glycidoxypropyltrimethoxysilane/3-GPTS) was introduced onto the silica (SiO2) nanoparticle surface, and its effects on the three-point flexural response of isogrid E-glass/epoxy composites were assessed. Based on the Fourier Transform Infrared (FTIR) spectra, it was inferred that the 3-GPTS coupling agent was successfully grafted onto the surface of the SiO2 nanoparticles after modification. The flexural test revealed improvements of 16%, 14%, and 36% in the stiffness, maximum load and energy absorption of the isogrid specimen filled with 3 wt.% 3-GPTS/SiO2 compared to the neat one. It is worth mentioning that in these structures, considerable energy absorption was observed after the primary failure associated with the load peak. In addition, the 3-GPTS functionalization had a positive effect on the flexural behavior of the multiscale isogrid composites. In conclusion, this study suggests that the addition of modified silica nanoparticles is a promising method to improve the flexural properties of grid-stiffened fibrous composite structures.

Keywords: Isogrid-stiffened composite panels, silica nanoparticles, surface modification, flexural properties.

116 Similarity Solutions of Nonlinear Stretched Biomagnetic Flow and Heat Transfer with Signum Function and Temperature Power Law Geometries

Authors: M. G. Murtaza, E. E. Tzirtzilakis, M. Ferdows

Abstract:

Biomagnetic fluid dynamics is an interdisciplinary field comprising engineering, medicine, and biology. Biofluid dynamics is directed towards finding and developing solutions to some diseases and disorders of the human body. This article describes the flow and heat transfer of a two-dimensional, steady, laminar, viscous and incompressible biomagnetic fluid over a non-linear stretching sheet in the presence of a magnetic dipole. Our model is consistent with blood treated as a biomagnetic fluid, in the framework of biomagnetic fluid dynamics (BFD), and is based on the principles of ferrohydrodynamics (FHD). The temperature at the stretching surface is assumed to follow a power-law variation, and the stretching velocity is assumed to have a nonlinear form with a signum (sign) function. The governing boundary layer equations with boundary conditions are simplified to coupled higher-order equations using the usual transformations. Numerical solutions of the governing momentum and energy equations are obtained by an efficient numerical technique based on the common finite difference method with central differencing, a tridiagonal matrix manipulation and an iterative procedure. Computations are performed for a wide range of the governing parameters, such as the magnetic field parameter, the power-law temperature exponent and the other parameters involved, and the effect of these parameters on the velocity and temperature fields is presented. It is observed that for increasing values of the magnetic parameter, the velocity distribution decreases while the temperature distribution increases. In addition, the finite difference results for the skin-friction coefficient and the rate of heat transfer are discussed. This study has an important bearing on applications requiring high targeting efficiency, where a high magnetic field is required in the targeted body compartment.
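
The tridiagonal systems produced by central differencing are typically solved with the Thomas algorithm inside the iterative procedure. Below is a sketch of that solver applied to a simple test equation, not the paper's boundary-layer system.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal, b = diagonal, c = super-diagonal,
    d = right-hand side. This is the kind of solver used inside an iterative
    finite-difference scheme with central differencing."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Test problem: f'' = -1 on [0, 1] with f(0) = f(1) = 0, central differences
n, h = 51, 1.0 / 50
a = np.full(n - 2, 1.0); b = np.full(n - 2, -2.0); c = np.full(n - 2, 1.0)
d = np.full(n - 2, -h * h)
f_inner = thomas(a, b, c, d)
print("max of f (exact value 0.125):", f_inner.max().round(4))
```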

Keywords: Biomagnetic fluid, FHD, nonlinear stretching sheet, slip parameter.

115 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams

Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin

Abstract:

In recent years, fire accidents have steadily increased and the amount of property damage caused by these accidents has gradually risen. Damaging the building structure, fire incidents bring about not only such property damage but also strength degradation and member deformation; as a result, the structural capacity of the building is undermined. Examining the degradation and the deformation is very important, because reusing the building is more economical than reconstruction. Therefore, engineers need to investigate the strength degradation and member deformation thoroughly and make sure that they apply the right rehabilitation methods. This study aims at evaluating the deformation characteristics of fire-damaged and rehabilitated normal strength concrete beams through both experiments and finite element analyses. For the experiments, control beams, fire-damaged beams and rehabilitated beams are tested to examine their deformation characteristics. Ten test beam specimens with a compressive strength of 21 MPa are fabricated, and the main test variables are selected as cover thicknesses of 40 mm and 50 mm and fire exposure times of 1 hour or 2 hours. After heating, the fire-damaged beams are air-cured for 2 months, and the rehabilitated beams are repaired with polymeric cement mortar after removal of the fire-damaged concrete cover. All beam specimens are tested under four-point loading. FE analyses are executed to investigate the effects of the main parameters applied in the experimental study. Test results show that both the maximum load and the stiffness of the rehabilitated beams are higher than those of the fire-damaged beams. In addition, the structural behaviors predicted by the analyses also show a good rehabilitation effect, and the predicted load-deflection curves are similar to the experimental results. Furthermore, the proposed analytical method can be used to predict the deformation characteristics of fire-damaged and rehabilitated concrete beams without the time and cost consumption of the experimental process.

Keywords: Fire, Normal strength concrete, Rehabilitation, Reinforced concrete beam.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2354
114 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models

Authors: Morten Brøgger, Kim Wittchen

Abstract:

Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage the complexity of the building stock, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation also makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail: thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to accurately emulate the average energy demands of the buildings they are meant to represent. This is done for the buildings' energy demands as a whole as well as for relevant sub-demands, both evaluated in relation to the type and the age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as the accuracy lost in specific parts of the calculation due to the use of the archetype method.
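The following is a minimal sketch, under assumed column names ('type', 'year', 'heat_demand_kwh_m2') and assumed age bands, of how a type/age archetype can be formed from individual building records and how its deviation from those records might be measured; it is not the model used in the study.

import pandas as pd

def archetype_error(buildings: pd.DataFrame) -> pd.DataFrame:
    # Segment the stock by building type and construction-year band (the archetype key).
    data = buildings.copy()
    data["age_band"] = pd.cut(
        data["year"], bins=[0, 1950, 1980, 2000, 2100],
        labels=["pre-1950", "1950-1980", "1980-2000", "post-2000"])
    # Archetype demand = mean demand of all buildings in the segment.
    data["archetype_demand"] = data.groupby(["type", "age_band"], observed=True)[
        "heat_demand_kwh_m2"].transform("mean")
    # Accuracy lost per segment: mean absolute deviation of the individual
    # buildings from their archetype value.
    data["abs_error"] = (data["heat_demand_kwh_m2"] - data["archetype_demand"]).abs()
    return data.groupby(["type", "age_band"], observed=True)["abs_error"].mean().reset_index()

The same comparison can be repeated for each sub-demand (heating, hot water, and so on) to see where in the calculation the archetype simplification loses the most accuracy.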

Keywords: Building stock energy modelling, energy-savings, archetype.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 695
113 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system which can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. A peaberry is not a defective bean, but it is not a normal bean either: it forms when a coffee cherry produces only a single, relatively round seed instead of the usual flat-sided pair of beans, and it has a different value and flavor. To improve the taste of the coffee, peaberries and normal beans need to be separated before the green coffee beans are roasted; otherwise, the flavors of the beans become mixed and the quality suffers. During roasting, the beans should be uniform in shape, size, and weight; otherwise, the larger beans take more time to roast through. Peaberries have a different size and shape even though they have the same weight as normal beans, and they roast more slowly than normal beans; therefore, conventional sorting techniques do not provide a good option for selecting them. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult even for trained specialists because the shape and color of a peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate between normal beans and peaberries as part of the sorting system. As the first step, we applied a deep Convolutional Neural Network (CNN) and a Support Vector Machine (SVM) to discriminate between peaberries and normal beans. As a result, better performance was obtained with the CNN than with the SVM for the discrimination of peaberries. The neural network trained in this work on a high-performance CPU and GPU will then be installed on an inexpensive, computationally limited Raspberry Pi system, since we assume that the system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
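A hedged sketch of the two classifiers being compared is given below; it is not the authors' code, and the image array X (small RGB bean crops) and label vector y (0 = normal, 1 = peaberry) are assumed to be loaded elsewhere.

from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(64, 64, 3)):
    # Small CNN for binary bean classification on raw pixels.
    return keras.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

def compare(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    cnn = build_cnn(X.shape[1:])
    cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    cnn.fit(X_tr, y_tr, epochs=10, batch_size=32, verbose=0)
    cnn_acc = cnn.evaluate(X_te, y_te, verbose=0)[1]
    # SVM baseline on flattened pixel vectors.
    svm = SVC(kernel="rbf")
    svm.fit(X_tr.reshape(len(X_tr), -1), y_tr)
    svm_acc = svm.score(X_te.reshape(len(X_te), -1), y_te)
    return cnn_acc, svm_acc

A small model of this kind keeps inference light enough to run on a Raspberry Pi once training has been done on a workstation.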

Keywords: Convolutional neural networks, coffee bean, peaberry, sorting, support vector machine.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1469
112 Effect of Good Agriculture Management Practices and Constraints on Grape Farming: A Case Study in Mirbachakot, Kalakan and Shakardara Districts Kabul, Afghanistan

Authors: Mohammad Mirwais Yusufi

Abstract:

Skillful management is one of the most important success factors for today's farms. When a farm is well managed, it can generate funds for its sustainability. Grape is one of the most widespread fruits in the world and one of the most important cash crops, with high production potential in Afghanistan as well. While several organizations are intervening to improve this cash crop, the quality and quantity are still not satisfactory for producers and external markets, and the situation has not changed over the years. Therefore, a questionnaire-based survey of 60 grape growers was conducted in 2017 in the Mirbachakot, Kalakan and Shakardara districts of Kabul province. The purpose was to gain an understanding of the current socio-demographic characteristics of farmers, management methods, constraints, farm size, yield, and the contribution of grape farming to household income. Findings indicate that grape farming was predominantly male (83.3% male, 16.6% female) and that small-scale farmers were the main grape producers, with 60% having less than 1 ha of land under grape production. Likewise, 50% had more than 10 years and 33.3% between 1 and 5 years of experience in grape farming. The high level of illiteracy and the prevalence of diseases had a significant effect on the growth, yield and quality of grapes. The results showed that vineyard management operations to protect grapes from mechanical damage are very poor or completely absent. In developed countries, table grape is one of the fruits with the highest input of technology, while in developing countries the cost of labor is low but the purchase of equipment is very expensive due to the financial situation. Hence, the low quality and quantity of grapes are influenced by poor management methods, such as the non-availability of experts and the lack of technical guidance at the study site. The study therefore suggests that improved agricultural extension services and managerial skills could contribute to addressing these problems.

Keywords: Efficient resources use, management skills, constraints factors, Kabul.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 563
111 Production of Pre-Reduction of Iron Ore Nuggets with Lesser Sulphur Intake by Devolatisation of Boiler Grade Coal

Authors: Chanchal Biswas, Anrin Bhattacharyya, Gopes Chandra Das, Mahua Ghosh Chaudhuri, Rajib Dey

Abstract:

Boiler coals with low fixed carbon and high ash content have always challenged metallurgists to develop a suitable method for their utilization. In the present study, an attempt is made to establish an energy-efficient method for the reduction of iron ore fines in the form of nuggets by using syngas. Devolatisation (expulsion of volatile matter by applying heat) of boiler coal generates a gaseous product enriched with CO, CO2, H2, and CH4, which serves as the reductant. Because the iron ore nuggets are reduced by this syngas, there is no direct contact between the nuggets and the coal ash, which helps minimize the sulphur intake of the reduced nuggets. A laboratory-scale devolatisation furnace with a reduction facility was designed and evaluated after in-depth studies and exhaustive experimentation, including thermogravimetric (TG-DTA) analysis to determine the volatile fraction present in the boiler-grade coal, gas chromatography (GC) to determine the syngas composition at different temperatures, and furnace temperature-gradient measurements to minimize the furnace cost by using a single heating coil. The nuggets are reduced in the devolatisation furnace at three different temperatures and for three different durations. The pre-reduced nuggets are subjected to analytical weight-loss calculations to evaluate the extent of reduction. The phases and surface morphology of the pre-reduced samples are characterized using X-ray diffractometry (XRD), energy-dispersive X-ray spectrometry (EDX), scanning electron microscopy (SEM), a carbon-sulphur analyzer, and chemical analysis. The degree of metallization of the reduced nuggets is 78.9% using boiler-grade coal. The pre-reduced nuggets, with their lower sulphur content, could be used in the blast furnace as raw material or coolant, which, owing to their pre-reduced character, would lower the furnace's consumption of high-quality coke. They can also be used as coolant in the Basic Oxygen Furnace (BOF).
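For readers unfamiliar with the two figures of merit mentioned above, the following is a short illustrative calculation using their standard definitions (it is not the paper's data workflow, and the assay numbers are hypothetical).

def extent_of_reduction(initial_mass_g, final_mass_g, removable_oxygen_g):
    # Fraction of the removable oxygen actually taken out, inferred from the
    # weight loss of the nugget during reduction.
    return 100.0 * (initial_mass_g - final_mass_g) / removable_oxygen_g

def degree_of_metallization(metallic_fe_pct, total_fe_pct):
    # Share of the total iron present as metallic iron in the reduced nugget.
    return 100.0 * metallic_fe_pct / total_fe_pct

# Hypothetical assay: 62.0% metallic Fe out of 78.6% total Fe gives roughly the
# 78.9% metallization reported for the boiler-coal syngas route.
print(degree_of_metallization(62.0, 78.6))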

Keywords: Alternative ironmaking, coal devolatisation, extent of reduction, nugget making, syngas based DRI, solid state reduction.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1455
110 A Review on Building Information Modelling in Nigeria and Its Potentials

Authors: Mansur Hamma-Adama, Tahar Kouider

Abstract:

The construction industry has been evolving since the development of Building Information Modelling (BIM). This technological process is unstoppable; it has reached the market with remarkable case studies addressing the industry's long history of fragmentation. The industry has been changing over time: the United States has recorded the most significant development in construction digitalization, while Australia, the United Kingdom and some other developed nations are also among the promoters of the BIM process and its development. Recently, developing countries such as China and Malaysia have been keying into the industry's digital shift, while very little movement is seen in South Africa, whose development is considered higher and which is perhaps the leader in the digital transition among the African countries. To the authors' best knowledge, the Nigerian construction industry has never engaged in BIM discussions and hence the topic has received no attention at the national level. Consequently, Nigeria has no "Noteworthy BIM publications." Decision makers and key stakeholders need to be informed about the current trend of the industry's development (BIM in particular) and the opportunities of adopting this digitalization trend in relation to the identified challenges. The BIM concept can be traced mostly in architectural rather than engineering practices in Nigeria. Superficial BIM practice is found at the organisational level only, operating model-based working ("BIM Stage 1"). Research on adopting this innovation has received very little attention. This work is literature-review based and aims to explore BIM in Nigeria and its prospects. The exploration reveals limited availability of literature and little extensive research on the development of BIM in the country. Numerous challenges were noted, including building collapse, inefficiency, cost overruns and late project delivery. BIM has the potential to overcome these challenges and more. A low level of BIM adoption, alongside a reasonable level of awareness, is observed. However, the lack of policy and guidelines, as well as a serious shortage of experts in the field, are among the major barriers to BIM adoption. The industry needs to embrace BIM to be able to compete with its global counterparts.

Keywords: Adoption, BIM, CAD, construction industry, Nigeria, opportunities.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1335