Search results for: forward speed
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4013

3143 Performance Evaluation of IAR Multi-Crop Thresher

Authors: Idris Idris Sunusi, U.S. Muhammed, N.A. Sale, I.B. Dalha, N.A. Adam

Abstract:

Threshing efficiency and mechanical grain damage are among the important parameters used in rating the performance of agricultural threshers. To be acceptable to farmers, threshers should have high threshing efficiency and low grain damage. The objective of this research is to evaluate the performance of the thresher using sorghum and millet; the performance parameters considered are threshing efficiency and mechanical grain damage. For millet, four drum speed levels (700, 800, 900 and 1000 rpm) were considered, while for sorghum, 600, 700, 800 and 900 rpm were considered. The feed rate levels were 3, 4, 5 and 6 kg/min for both crops; the moisture content levels were 8.93 and 10.38% for sorghum and 9.21 and 10.81% for millet. For millet, the tests showed a maximum threshing efficiency of 98.37% and a minimum mechanical grain damage of 0.24%, while for sorghum they showed a maximum threshing efficiency of 99.38% and a minimum mechanical grain damage of 0.75%. In comparison to the previous thresher, the threshing efficiency and mechanical grain damage of the modified machine improved by 2.01% and 330.56% for millet and by 5.31% and 287.64% for sorghum. Analysis of variance (ANOVA) also showed that the effects of drum speed, feed rate and moisture content on the performance parameters were significant.
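
The two performance parameters can be sketched numerically. The formulas below use the common mass-fraction definitions (an assumption; the paper does not state its exact formulas), and the sample masses are illustrative, not the study's data:

```python
def threshing_efficiency(total_grain_kg, unthreshed_grain_kg):
    """Percentage of grain removed from the heads (assumed mass-fraction definition)."""
    return 100.0 * (1.0 - unthreshed_grain_kg / total_grain_kg)

def mechanical_damage(total_grain_kg, broken_grain_kg):
    """Percentage of grain visibly broken by the drum (assumed definition)."""
    return 100.0 * broken_grain_kg / total_grain_kg

# Hypothetical run: 5 kg of grain fed, 0.08 kg left unthreshed, 0.012 kg broken
te = threshing_efficiency(5.0, 0.08)   # 98.4
md = mechanical_damage(5.0, 0.012)     # 0.24
print(f"threshing efficiency = {te:.2f}%, grain damage = {md:.2f}%")
```

Values in the ranges reported by the abstract (high efficiency, sub-1% damage) come from exactly this kind of ratio.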

Keywords: threshing efficiency, mechanical grain damage, sorghum, millet, multi-crop thresher

Procedia PDF Downloads 346
3142 Electrodynamic Principles for Generation and Wireless Transfer of Energy

Authors: Steven D. P. Moore

Abstract:

An electrical discharge in the air induces an electromagnetic (EM) wave capable of wireless transfer, reception, and conversion back into an electrical discharge at a distant location. Following Norton's ground-wave principles, EM wave radiation (EMR) runs parallel to the Earth's surface. Energy in an EMR wave can move through the air and be focused by a receiver to generate a local electrical discharge at a distant location. This local discharge can be amplified and stored, but it also has the propensity to initiate another EMR wave. Beyond typical EM waves, lightning is also associated with trans-ionospheric pulse pairs, the most powerful natural EMR signals on the planet: each lightning strike, regardless of global position, generates naturally occurring pulse pairs that are emitted towards space within a narrow cone. An EMR wave can self-propagate, travels at the speed of light, and, if polarized, contains vector properties. If this reflective pulse could be directed by design through structures that have increased probabilities for lightning strikes, it could theoretically travel near the surface of the Earth at light speed towards a selected receiver for local transformation into electrical energy. Research suggests several influencing parameters that could be modified to model, test, and increase the potential for adopting this technology towards the goal of developing a global grid that utilizes natural sources of energy.

Keywords: electricity, spark gap, wireless, electromagnetic

Procedia PDF Downloads 184
3141 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks

Authors: Christina Kirsch, Adam Hatzigiannis

Abstract:

Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design: from walking track sections and directly observing the real world to sedentary jobs in the review facility, reviewing captured data on screens. Defects in rail infrastructure can have catastrophic consequences, so reviewer accuracy and efficiency within the available time frame are essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully. Therefore, they need to know which workload assessment methodologies will provide reliable and valid data for resourcing on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX), and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data on review speed (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session and, at the end of the study period, a short employee engagement survey that captured impacts on job satisfaction and motivation.
The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and with review speed. Overall, the subjective NASA TLX workload assessments accurately reflected objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision (accurately detected defects, not false positives). Conversely, high temporal demand was negatively correlated with sensitivity, the percentage of detected existing defects. Review speed was significantly correlated with false negatives: as review speed increased, accuracy declined. On the other hand, review speed correlated positively with subjective performance assessments; reviewers thought their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections, improving defect detection rates in accordance with the efficiency-thoroughness trade-off. Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures, so that recommendations for work system optimization are evidence-based and reliable.
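
The correlations reported above are typically Pearson coefficients. A minimal sketch of the computation, with made-up numbers (not the study's data) illustrating the speed/false-negative relationship:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative data only: faster review -> more missed defects
review_speed = [10, 12, 14, 16, 18, 20]   # km/hour
false_negatives = [1, 2, 2, 3, 4, 5]
r = pearson(review_speed, false_negatives)
print(f"r = {r:.2f}")   # strong positive correlation
```

A positive r here mirrors the study's finding that accuracy declines as review speed increases.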

Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis

Procedia PDF Downloads 117
3140 Experimental Investigation and Analysis of Wear Parameters on Al/SiC/Gr Metal Matrix Hybrid Composite by Taguchi Method

Authors: Rachit Marwaha, Rahul Dev Gupta, Vivek Jain, Krishan Kant Sharma

Abstract:

Metal matrix hybrid composites (MMHCs) are now gaining usage in aerospace, automotive and other industries because of their inherent properties, such as high strength-to-weight ratio, hardness and wear resistance, good creep behaviour, light weight, design flexibility and low wear rate. An Al alloy base matrix reinforced with silicon carbide (10%) and graphite (5%) particles was fabricated by the stir casting process. The wear and frictional properties of the metal matrix hybrid composites were studied by performing a dry sliding wear test using a pin-on-disc wear test apparatus. Experiments were conducted based on the plan of experiments generated through Taguchi's technique, and an L9 orthogonal array was selected for analysis of the data. An investigation was carried out using ANOVA to find the influence of applied load, sliding speed and track diameter on the wear rate, as well as on the coefficient of friction, during the wearing process. The objective of the model was chosen as smaller-the-better characteristics to analyse the dry sliding wear resistance. Results show that track diameter has the highest influence, followed by load and sliding speed.
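
The "smaller the better" objective corresponds to the standard Taguchi signal-to-noise ratio S/N = -10 log10(mean(y^2)). A short sketch with hypothetical wear-rate replicates (the units and values are assumptions, not the paper's measurements):

```python
import math

def sn_smaller_the_better(responses):
    """Taguchi S/N ratio (dB) for a smaller-is-better response such as wear rate."""
    return -10.0 * math.log10(sum(y * y for y in responses) / len(responses))

# Hypothetical wear-rate replicates (mm^3/m) for two L9 runs
run_a = [0.0021, 0.0023, 0.0022]
run_b = [0.0035, 0.0033, 0.0036]
# The run with less wear gets the higher (better) S/N ratio
print(sn_smaller_the_better(run_a) > sn_smaller_the_better(run_b))  # True
```

Factor influence is then judged by how much the mean S/N ratio shifts across each factor's levels.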

Keywords: Taguchi method, orthogonal array, ANOVA, metal matrix hybrid composites

Procedia PDF Downloads 326
3139 Experimental Investigation of Bituminous Roads with Waste Plastic

Authors: Arjita Biswas, Sandeep Potnis

Abstract:

Plastic roads (bituminous roads using waste plastic in the wearing course) have now become familiar in the road construction sector in India. With the Indian Roads Congress code (IRC SP: 98-2013), many agencies are coming forward to implement plastic roads in India. This paper discusses and compares the various properties of a bituminous mix with 8% waste plastic and a normal bituminous mix. It also reports the performance of both types of roads after four months of age under loading conditions. Experiments were carried out to evaluate their performance, and the results show improved performance for plastic roads.

Keywords: bituminous roads, experiments, performance, plastic roads

Procedia PDF Downloads 210
3138 Cognitive Relaying in Interference Limited Spectrum Sharing Environment: Outage Probability and Outage Capacity

Authors: Md Fazlul Kader, Soo Young Shin

Abstract:

In this paper, we consider a cognitive relay network (CRN) in which the primary receiver (PR) is protected by peak transmit power $\bar{P}_{ST}$ and/or peak interference power Q constraints. In addition, the interference effect from the primary transmitter (PT) is considered to show its impact on the performance of the CRN. We investigate the outage probability (OP) and outage capacity (OC) of the CRN by deriving closed-form expressions over the Rayleigh fading channel. Results show that both the OP and OC improve as the number of cooperative relay nodes increases, as well as when the PT is far away from the secondary receiver (SR).
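
The paper derives closed-form expressions; the quantity itself can be sanity-checked by Monte Carlo. The sketch below is a simplified single-link version (no relays, no PT interference, unit-mean Rayleigh-power gains, all values assumed), showing only how the peak-power/peak-interference constraint enters the outage probability:

```python
import math, random

random.seed(1)

def outage_probability(p_bar, q_peak, rate, noise=1.0, trials=200_000):
    """Monte Carlo outage estimate for one secondary link (simplified sketch).

    Transmit power is min(p_bar, q_peak / g_sp), where g_sp is the
    exponentially distributed (Rayleigh-power) gain toward the PR.
    Outage occurs when log2(1 + P * g_ss / noise) < rate.
    """
    outages = 0
    for _ in range(trials):
        g_sp = max(random.expovariate(1.0), 1e-12)  # ST -> PR interference gain
        g_ss = random.expovariate(1.0)              # ST -> SR desired gain
        p = min(p_bar, q_peak / g_sp)
        if math.log2(1.0 + p * g_ss / noise) < rate:
            outages += 1
    return outages / trials

# A looser interference constraint Q yields a lower outage probability
print(outage_probability(10.0, 5.0, 1.0), outage_probability(10.0, 0.5, 1.0))
```

Tightening Q forces the secondary transmitter to back off more often, which is exactly why the OP degrades under strict interference protection.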

Keywords: cognitive relay, outage, interference limited, decode-and-forward (DF)

Procedia PDF Downloads 507
3137 Study of Human Upper Arm Girth during Elbow Isokinetic Contractions Based on a Smart Circumferential Measuring System

Authors: Xi Wang, Xiaoming Tao, Raymond C. H. So

Abstract:

As a convenient and noninvasive sensing approach, automatic limb girth measurement has been applied to detect the intention behind human motion from muscle deformation. Its sensing validity has been elaborated by preliminary research but still needs more fundamental study, especially on kinetic contraction modes. Based on novel fabric strain sensors, a soft and smart limb girth measurement system was developed by the authors' group, which can measure the limb girth in motion. Experiments were carried out on elbow isometric flexion and elbow isokinetic flexion (biceps' isokinetic contractions) at 90°/s, 60°/s, and 120°/s for 10 subjects (2 canoeists and 8 ordinary people). After removal of the natural circumferential increments due to elbow position, the joint torque was found not to be uniformly sensitive to the limb circumferential strains, but to decline as the elbow joint angle rises, regardless of the angular speed. Moreover, the maximum joint torque was found to be an exponential function of the joint's angular speed. This research contributes substantially to the application of automatic limb girth measurement during kinetic contractions and is useful for predicting the contraction level of voluntary skeletal muscles.

Keywords: fabric strain sensor, muscle deformation, isokinetic contraction, joint torque, limb girth strain

Procedia PDF Downloads 335
3136 Optimization of Two Quality Characteristics in Injection Molding Processes via Taguchi Methodology

Authors: Joseph C. Chen, Venkata Karthik Jakka

Abstract:

The main objective of this research is to optimize tensile strength and dimensional accuracy in injection molding processes using Taguchi parameter design. An L16 orthogonal array (OA) was used in the Taguchi experimental design, with five control factors at four levels each and with vibration as a non-controllable factor. A total of 32 experiments were designed to obtain the optimal parameter setting for the process. The optimal parameters identified for shrinkage are shot volume, 1.7 cubic inch (A4); mold temperature, 130 ºF (B1); hold pressure, 3200 psi (C4); injection speed, 0.61 inch³/sec (D2); and hold time, 14 seconds (E2). The optimal parameters identified for tensile strength are shot volume, 1.7 cubic inch (A4); mold temperature, 160 ºF (B4); hold pressure, 3100 psi (C3); injection speed, 0.69 inch³/sec (D4); and hold time, 14 seconds (E2). The Taguchi-based optimization framework was systematically and successfully implemented to obtain an adjusted optimal setting in this research. The mean shrinkage of the confirmation runs is 0.0031%, and the tensile strength was found to be 3148.1 psi. Both outcomes are far better than the baseline, and defects have been further reduced in the injection molding processes.
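
Tensile strength is a larger-the-better characteristic, so the complementary Taguchi S/N ratio S/N = -10 log10(mean(1/y^2)) applies (shrinkage would use the smaller-the-better form). A sketch with hypothetical replicates, not the paper's measurements:

```python
import math

def sn_larger_the_better(responses):
    """Taguchi S/N ratio (dB) for a larger-is-better response such as tensile strength."""
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in responses) / len(responses))

# Hypothetical tensile-strength replicates (psi) for two L16 runs
run_a = [3150, 3140, 3160]
run_b = [2900, 2950, 2880]
# The stronger run gets the higher (better) S/N ratio
print(sn_larger_the_better(run_a) > sn_larger_the_better(run_b))  # True
```

Selecting, for each factor, the level with the highest mean S/N ratio yields optimal settings like the A4/B4/C3/D4/E2 combination reported above.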

Keywords: injection molding processes, Taguchi parameter design, tensile strength, high-density polyethylene (HDPE)

Procedia PDF Downloads 193
3135 A Time-Reducible Approach to Compute Determinant |I-X|

Authors: Wang Xingbo

Abstract:

Computation of determinants of the form |I-X| is primary and fundamental because it can help to compute many other determinants. This article puts forward a time-reducible approach to compute the determinant |I-X|. The approach is derived from Newton's identities, and its time complexity is no more than that of computing the eigenvalues of the square matrix X. Mathematical deductions and a numerical example are presented in detail for the approach. By comparison with classical approaches, the new approach is shown to be superior, and its computational time naturally falls as the efficiency of computing eigenvalues of the square matrix improves.
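
The connection the abstract relies on can be sketched directly: since det(I-X) = Π(1-λᵢ), it equals the alternating sum of the elementary symmetric polynomials e_k of the eigenvalues, and Newton's identities recover the e_k from the power sums p_k = tr(X^k) without ever computing the eigenvalues explicitly. A minimal pure-Python illustration (not the paper's algorithm, which may differ in detail):

```python
def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

def det_i_minus_x(x):
    """det(I - X) from power sums p_k = tr(X^k) via Newton's identities."""
    n = len(x)
    p, m = [], x
    for _ in range(n):          # power sums p_1 .. p_n
        p.append(trace(m))
        m = mat_mul(m, x)
    # Newton's identities: k*e_k = sum_{i=1..k} (-1)^(i-1) * e_{k-i} * p_i
    e = [1.0]
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    # det(I - X) = sum_k (-1)^k e_k  over the eigenvalues of X
    return sum((-1) ** k * e[k] for k in range(n + 1))

x = [[2.0, 1.0], [0.0, 3.0]]    # eigenvalues 2 and 3
print(det_i_minus_x(x))         # (1-2)*(1-3) = 2
```

The traces cost a few matrix products, which is the "no more than eigenvalue computation" bound the abstract refers to.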

Keywords: algorithm, determinant, computation, eigenvalue, time complexity

Procedia PDF Downloads 410
3134 Study of the Vertical Handoff in Heterogeneous Networks and Implementation Based on OPNET

Authors: Wafa Benaatou, Adnane Latif

Abstract:

In this paper, we study in detail the performance of vertical handover across WLAN, WiMAX, and UMTS networks, then examine the vertical handoff procedure, and conclude with simulations that highlight the performance of the handover in heterogeneous networks. The goal of vertical handover is to provide seamless, real-time access across heterogeneous networks. This makes it possible for a user to use several networks (such as WLAN, UMTS and WiMAX) in parallel, and for the system to switch automatically to another base station, without disconnecting, as if there were no interruption and with as little data loss as possible.
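
One classic trigger for the handoff decision (one of several criteria used in such simulations; thresholds here are assumed values, not taken from the paper) is received signal strength with a hysteresis margin to prevent ping-ponging between networks:

```python
def should_handover(rss_serving_dbm, rss_candidate_dbm,
                    threshold_dbm=-85.0, hysteresis_db=4.0):
    """RSS-with-hysteresis vertical handover trigger (illustrative sketch)."""
    return (rss_serving_dbm < threshold_dbm and
            rss_candidate_dbm > rss_serving_dbm + hysteresis_db)

print(should_handover(-90.0, -80.0))   # True: serving weak, candidate clearly better
print(should_handover(-90.0, -88.0))   # False: gain is below the hysteresis margin
```

The hysteresis margin trades handover latency against stability, which is exactly the "no cut, little loss" goal described above.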

Keywords: vertical handoff, WLAN, UMTS, WiMAX, heterogeneous networks

Procedia PDF Downloads 387
3133 Modeling of the Attitude Control Reaction Wheels of a Spacecraft in Software in the Loop Test Bed

Authors: Amr AbdelAzim Ali, G. A. Elsheikh, Moutaz M. Hegazy

Abstract:

Reaction wheels (RWs) are generally used as the main actuators in the attitude control system (ACS) of a spacecraft (SC) for fast orientation and high pointing accuracy. In order to achieve the required accuracy for the RW model, the main characteristics of the RWs that necessitate analysis during the ACS design phase, including technical features, operating sequence and RW control logic, are included in a function (behavior) model. A mathematical model is developed that includes the various error sources. The errors in control torque include relative error, absolute error, and error due to time delay, while the errors in angular velocity are due to differences between average and real speed, resolution error, looseness in the installation of the angular sensor, and synchronization errors. The friction torque presented in the model includes the different features of the friction phenomenon: steady-velocity friction, static friction and break-away torque, and frictional lag. The model response is compared with the experimental torque and frequency-response characteristics of tested RWs. Based on the created RW model, some criteria for the optimization-based control torque allocation problem can be recommended, such as avoiding zero-speed crossings, biasing the angular velocity, or preventing wheels from running at the same angular velocity.
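
The friction features listed above (steady-velocity friction, static/break-away torque) are commonly captured by a Stribeck-type model; a minimal sketch with illustrative parameter values (the paper's actual model and constants are not given here):

```python
import math

def friction_torque(omega, t_coulomb=0.002, t_static=0.004,
                    omega_s=5.0, b=1e-5):
    """Stribeck-style friction torque (N*m) vs wheel speed (rad/s), illustrative values."""
    if omega == 0.0:
        return 0.0   # below break-away, friction just balances the applied torque
    stribeck = t_coulomb + (t_static - t_coulomb) * math.exp(-(omega / omega_s) ** 2)
    return math.copysign(stribeck, omega) + b * omega

# Near zero speed the torque approaches the break-away level t_static;
# at high speed it settles toward t_coulomb plus the viscous term.
print(friction_torque(0.1), friction_torque(100.0))
```

The sharp change of friction around zero speed is why the recommended allocation criteria avoid zero-speed crossings.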

Keywords: friction torque, reaction wheels modeling, software in the loop, spacecraft attitude control

Procedia PDF Downloads 262
3132 An Experimental Study on the Positive Streamer Leader Propagation under Slow Front Impulse Voltages in a 10m Rod-Plane Air Gap

Authors: Wahab Ali Shah, Junjia He

Abstract:

In this work, we performed a large-scale investigation into leader development in a 10 m rod-plane gap under long-front positive impulse voltages. To describe the leader propagation under slow-front impulse voltages, we recorded it with a high-speed charge-coupled device (CCD) camera; capturing this phenomenon is important to deepen our understanding of leader discharge. The observations showed that the leader mechanism is a very complex physical phenomenon that can be categorized into two types of leader process, namely continuous and discontinuous streamer-leader propagation. Furthermore, we studied the continuous leader development parameters, including two-dimensional (2-D) leader length, injected charge, the final jump stage, and leader velocity for the rod-plane configuration. We observed that the discontinuous leader makes an important contribution to the appearance of channel re-illuminations of the positive leader. A comparative study contrasts the behaviour under standard switching impulses with that under long-front positive impulses. Finally, the results are presented with a view toward improving our understanding of the propagation mechanisms related to restrike phenomena, which are rarely reported. To clarify these open questions under long-front conditions, we carried out extensive experiments in this study.

Keywords: continuous and discontinuous leader, high-speed photographs, long air gap, positive long front impulse, restrike phenomena

Procedia PDF Downloads 166
3131 A Literature Review and a Proposed Conceptual Framework for Learning Activities in Business Process Management

Authors: Carin Lindskog

Abstract:

Introduction: Long-term success requires an organizational balance between continuity (exploitation) and change (exploration). The problem of balancing exploitation and exploration is a common issue in studies of organizational learning. In order to better face tough competition in the face of change, organizations need to exploit their current business and explore new business fields by developing new capabilities. The purpose of this work in progress is to develop a conceptual framework to shed light on the relevance of 'learning activities', i.e., exploitation and exploration, on different levels. The research questions that will be addressed are as follows: What sort of learning activities are found in the Business Process Management (BPM) field? How can these activities be linked to the individual level, group level, and organizational level? First, a literature review will be conducted to explore the status of learning activities in the BPM field. One outcome of the literature review will be a conceptual framework of learning activities based on the included publications. The learning activities will be categorized by focus (exploitation, exploration, or both) and by level (individual, group, and organization). The proposed conceptual framework will be a valuable tool for analyzing the research field as well as for identifying future research directions. Related Work: BPM has increased in popularity as a way of working to strengthen the quality of work and meet demands for efficiency. With this increase in popularity, more and more organizations are reporting BPM failures. One reason for this is a lack of knowledge about extending the scope of BPM to other business contexts, including, for example, more creative business fields. Yet another reason for the failures is that employees are resistant to change.
The learning process in an organization is an ongoing cycle of reflection and action, a process that can be initiated, developed and practiced. Furthermore, organizational learning is multilevel; therefore, the theory of organizational learning needs to consider the individual, group, and organization levels. Learning happens over time and across levels, but it also creates a tension between incorporating new learning (feed-forward) and exploiting or using what has already been learned (feedback). Through feed-forward processes, new ideas and actions move from the individual to the group to the organization level. At the same time, what has already been learned feeds back from the organization to the group to the individual and has an impact on how people act and think.

Keywords: business process management, exploitation, exploration, learning activities

Procedia PDF Downloads 122
3130 Using Deep Learning Real-Time Object Detection Convolution Neural Networks for Fast Fruit Recognition in the Tree

Authors: K. Bresilla, L. Manfrini, B. Morandi, A. Boini, G. Perulli, L. C. Grappadelli

Abstract:

Image/video processing for fruit in the tree using hard-coded feature extraction algorithms has shown high accuracy in recent years. While accurate, these approaches, even with high-end hardware, are computationally intensive and too slow for real-time systems. This paper details the use of deep convolutional neural networks (CNNs), specifically the YOLO (You Only Look Once) algorithm with 24+2 convolution layers. Using deep-learning techniques eliminated the need to hard-code specific features for specific fruit shapes, colors and/or other attributes. The CNN was trained on more than 5000 images of apple and pear fruits on a 960-core GPU (graphics processing unit), and the testing set showed an accuracy of 90%. The trained model was then transferred to an embedded device (Raspberry Pi gen. 3) with a camera for greater portability. Based on the correlation between the number of visible or detected fruits in one frame and the real number of fruits on one tree, a model was created to accommodate this error rate. The processing and detection speed of the whole platform was higher than 40 frames per second, which is fast enough for any grasping/harvesting robotic arm or other real-time applications.
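
The correction model mapping per-frame detections to the true per-tree count is typically a simple regression fitted on calibration data. A pure-Python least-squares sketch with invented calibration pairs (the paper does not publish its numbers):

```python
def fit_line(xs, ys):
    """Least-squares fit y = a*x + b, here mapping per-frame detections to true count."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration pairs: (fruits detected in one frame, fruits counted on the tree)
detected = [40, 55, 62, 80, 95]
actual = [90, 120, 135, 170, 200]
a, b = fit_line(detected, actual)
estimate = a * 70 + b   # predicted tree count for 70 detections in a frame
print(round(estimate))
```

Occlusion means detections undercount systematically, which is why a learned scale factor (here roughly 2x) recovers the true count.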

Keywords: artificial intelligence, computer vision, deep learning, fruit recognition, harvesting robot, precision agriculture

Procedia PDF Downloads 414
3129 Research of Data Cleaning Methods Based on Dependency Rules

Authors: Yang Bao, Shi Wei Deng, WangQun Lin

Abstract:

This paper introduces the concept and principles of data cleaning, analyzes the types and causes of dirty data, and proposes several key steps of a typical cleaning process. It puts forward a data cleaning framework with good scalability and versatility. For data with attribute dependency relations, it designs several violation-data discovery algorithms based on formal formulas, which can find data inconsistent with the target columns of a conditional attribute dependency, whether the data is structured (SQL) or unstructured (NoSQL), and gives six data cleaning methods based on these algorithms.
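
The core of violation-data discovery for an attribute dependency can be sketched compactly: group rows by the left-hand-side attributes and flag groups that map to more than one right-hand-side value. The schema and rows below are invented for illustration, not from the paper:

```python
from collections import defaultdict

def fd_violations(rows, lhs, rhs):
    """Return rows violating the functional dependency lhs -> rhs (e.g. zip -> city)."""
    groups = defaultdict(set)
    for row in rows:
        groups[tuple(row[c] for c in lhs)].add(tuple(row[c] for c in rhs))
    bad_keys = {k for k, vals in groups.items() if len(vals) > 1}
    return [row for row in rows if tuple(row[c] for c in lhs) in bad_keys]

rows = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "Newark"},        # dirty: same zip, different city
    {"zip": "94105", "city": "San Francisco"},
]
print(fd_violations(rows, ["zip"], ["city"]))  # the two conflicting 10001 rows
```

Because the check only needs attribute access per record, the same pass works over SQL result sets or NoSQL documents alike.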

Keywords: data cleaning, dependency rules, violation data discovery, data repair

Procedia PDF Downloads 560
3128 Modeling of Surface Roughness in Hard Turning of DIN 1.2210 Cold Work Tool Steel with Ceramic Tools

Authors: Mehmet Erdi Korkmaz, Mustafa Günay

Abstract:

Nowadays, grinding is frequently replaced with hard turning to reduce setup time and achieve higher accuracy. This paper focuses on the mathematical modeling of average surface roughness (Ra) in hard turning of AISI L2 grade (DIN 1.2210) cold work tool steel with ceramic tools. The steel was hardened to 60±1 HRC after the heat treatment process. Cutting speed, feed rate, depth of cut and tool nose radius were chosen as the cutting conditions, and uncoated ceramic cutting tools were used in the machining experiments. The machining experiments were performed on a CNC lathe according to a Taguchi L27 orthogonal array. Ra values were calculated by averaging three roughness values obtained from three different points of the machined surface. The influences of the cutting conditions on surface roughness were evaluated statistically and experimentally: analysis of variance (ANOVA) with a 95% confidence level was applied for the statistical analysis of the experimental results, and mathematical models were developed using artificial neural networks (ANN). The ANOVA results show that feed rate is the dominant factor affecting surface roughness, followed by tool nose radius and cutting speed.
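
The dominance of feed rate and nose radius has a well-known kinematic baseline, separate from the paper's ANN model: the ideal (geometric) roughness in turning is commonly approximated as Ra ≈ f²/(32·rε). A quick sketch (a textbook approximation, with assumed feed/radius values, not the study's measurements):

```python
def ideal_ra_micron(feed_mm_rev, nose_radius_mm):
    """Ideal kinematic roughness Ra = f^2 / (32 * r), converted from mm to microns."""
    return (feed_mm_rev ** 2) / (32.0 * nose_radius_mm) * 1000.0

# Ra grows with the square of feed and falls with nose radius
print(ideal_ra_micron(0.10, 0.8), ideal_ra_micron(0.20, 0.8), ideal_ra_micron(0.20, 1.2))
```

The quadratic dependence on feed is consistent with feed rate emerging as the dominant ANOVA factor, while the ANN captures the deviations from this ideal value caused by tool wear, vibration, and material behaviour.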

Keywords: ANN, hard turning, DIN 1.2210, surface roughness, Taguchi method

Procedia PDF Downloads 367
3127 Design of In-House Test Method for Assuring Packing Quality of Bottled Spirits

Authors: S. Ananthakrishnan, U. H. Acharya

Abstract:

Whether shopping in a retail location or via the internet, consumers expect to receive their products intact. When products arrive damaged or over-packaged, the result can be customer dissatisfaction and increased cost for retailers and manufacturers. Packaging performance depends on both the transport situation and the packaging design. During transportation, packaged products are subjected to variation in vibration levels from transport vehicles that vary in frequency and acceleration while moving to their destinations. Spirits manufactured by this company were being transported to various parts of the country by road, and there were instances of packages breaking and customer complaints. The vibration experienced on a straight road at some speed may not be the same as the vibration experienced by the same vehicle on a curve at the same speed, and this vibration may negatively affect the product or packing. Hence, it was necessary to conduct a physical road test to understand the effect of vibration on the packaged products. However, a field transit trial has to be done before transportation, which requires high investment. The company management was therefore interested in developing an in-house test environment that would adequately represent transit conditions. With the objective of developing an in-house test condition that can accurately simulate the mechanical loading prevailing during the storage, handling and transportation of the products, a brainstorming session was held with the people concerned to identify the critical factors affecting the vibration rate. The position of the corrugated box, the position of the bottle and the speed of the vehicle were identified as factors affecting the vibration rate. Several packing scenarios were identified by the design of experiments (DOE) methodology and simulated in the in-house test facility. Each condition was observed for 30 minutes, which was equivalent to 1000 km. The achieved vibration level was taken as the response.
The average achieved in the simulated experiments was near the third quartile (Q3) of the actual data; thus, around three-fourths of the actual phenomenon could be addressed, and most of the cases in transit could be reproduced. The recommended test condition could generate a vibration level ranging from 9g to 15g, as against a maximum of only 7g generated earlier. Thus, the company was able to test the packaged cartons satisfactorily in-house before transporting them to their destinations, assured that the bottles would not break.

Keywords: ANOVA, corrugated box, DOE, quartile

Procedia PDF Downloads 118
3126 Mathematical Study for Traffic Flow and Traffic Density in Kigali Roads

Authors: Kayijuka Idrissa

Abstract:

This work presents a mathematical study of traffic flow and traffic density on Kigali city roads, using data collected from the national police of Rwanda in 2012. Several mathematical models were used to analyze and compare traffic variables. The work was carried out on Kigali roads, specifically at the roundabouts from the Kigali Business Center (KBC) to Prince House, as the study sites. Mathematical tools were applied to analyze the collected data and to understand the relationships between traffic variables. The Poisson distribution was applied to model the number of accidents that occurred on the section of road from KBC to Prince House. The results show that accidents occurred at very high rates in 2012 because this section has a very narrow single lane on each side, which leads to high congestion of vehicles; consequently, accidents occur very frequently. Using the speed and density data collected from this section of road, we found that an increment in density results in a decrement in vehicle speed, and at the point where the density equals the jam density, the speed becomes zero. The approach is promising for capturing sudden changes in flow patterns and is open to being utilized in a series of intelligent management strategies, especially in nonrecurrent congestion detection and control.
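
Both models mentioned above are compact enough to sketch. The Poisson probability of k accidents given a mean rate λ, and the linear (Greenshields-type) speed-density relation in which speed vanishes at the jam density, are shown with assumed parameter values (λ, free-flow speed, and jam density are illustrative, not the Kigali data):

```python
import math

def poisson_pmf(k, lam):
    """P(K = k accidents) for a mean accident rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def greenshields_speed(density, free_speed=60.0, jam_density=120.0):
    """Linear speed-density model: speed falls to zero at the jam density."""
    return free_speed * (1.0 - density / jam_density)

# If a section averages 2 accidents per month (assumed rate):
print(round(poisson_pmf(0, 2.0), 3))   # chance of an accident-free month
print(greenshields_speed(120.0))       # 0.0 km/h at jam density
```

The second function reproduces the paper's observation directly: speed decreases as density increases and reaches zero exactly when density equals the jam density.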

Keywords: statistical methods, traffic flow, Poisson distribution, car-moving techniques

Procedia PDF Downloads 279
3125 Human Factors and Ergonomics in Bottling Lines

Authors: Parameshwaran Nair

Abstract:

Filling and packaging lines for bottling beverages into glass, PET or aluminum containers require specialized expertise and a particular configuration of equipment: filler, warmer, labeller, crater/recrater, shrink packer, carton erector, carton sealer, date coder, palletizer, etc. Over time, the packaging industry has evolved from manually operated single-station machines to highly automated high-speed lines, and human factors and ergonomics have gained significant consideration in the course of this transformation. A prerequisite for such bottling lines, irrespective of container type and size, is to be suitable for multi-format applications. They should also handle format changeovers with minimal adjustment and should offer variable capacity and speeds, providing great flexibility in managing accumulation times as a function of production characteristics. In terms of layout as well, they should offer flexibility for operator movement and access to machine areas for maintenance. Packaging technology during the past few decades has risen to these challenges through a series of major breakthroughs interspersed with periods of refinement and improvement. The milestones are many and varied and are described briefly in this paper. In order to provide a brief understanding of human factors and ergonomics in modern packaging lines, this paper highlights the various technologies, design considerations and statutory requirements in packaging equipment for the different types of containers used in India.

Keywords: human factors, ergonomics, bottling lines, automated high-speed lines

Procedia PDF Downloads 430
3124 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook and interactive video games. However, raw video requires very high bandwidth, which makes compression a must before transmission over wireless channels. High Efficiency Video Coding (HEVC, also called H.265) is the latest state-of-the-art video coding standard, developed jointly by ITU-T and ISO/IEC. HEVC targets high-resolution videos, such as 4K or 8K, that can fulfil the recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. Compression efficiency is generally increased by removing more correlation between frames and pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases, so bit errors can have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstruction. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video with the HEVC decoder.

It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. The performance analysis of HEVC presented in this paper may thus assist in designing the optimal FEC code rate such that the quality of the reconstructed video over wireless channels is maximized.
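The transmission chain described above (modulate, add channel noise, hard-decision demodulate, count bit errors) can be sketched for the simplest QAM constellation, 4-QAM/QPSK, without channel coding; the bit count and noise level are arbitrary illustration values, not parameters from the paper:

```python
import random

random.seed(0)

def qpsk_ber(num_bits=20000, noise_sigma=0.3):
    """Send random bits as QPSK symbols over an AWGN channel and return
    the bit error rate under hard-decision demodulation (no FEC)."""
    bits = [random.randint(0, 1) for _ in range(num_bits)]
    errors = 0
    for i in range(0, num_bits, 2):
        # Map each bit pair to I/Q levels of +/-1 (Gray mapping).
        tx_i = 1.0 if bits[i] else -1.0
        tx_q = 1.0 if bits[i + 1] else -1.0
        # AWGN channel: independent Gaussian noise on each axis.
        rx_i = tx_i + random.gauss(0.0, noise_sigma)
        rx_q = tx_q + random.gauss(0.0, noise_sigma)
        # Hard decision: the sign of each received axis decides the bit.
        errors += (rx_i > 0) != (bits[i] == 1)
        errors += (rx_q > 0) != (bits[i + 1] == 1)
    return errors / num_bits

ber = qpsk_ber()  # grows steeply as noise_sigma increases
```

Raising `noise_sigma` (i.e., lowering the channel SNR) makes the measured BER climb sharply, mirroring the drastic quality loss the abstract reports before FEC is applied.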

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 146
3123 A Critical Case Study of Women Police in Ranchi, Jharkhand, India: An Analysis of Work Life Balance of Women in Jharkhand, India

Authors: Swati Minz, Pradeep Munda, Ranchi Jharkhand

Abstract:

Women of today's era are well educated, and they are proficient in the skills that are key to success anywhere. The government has played a major role in uplifting women in Indian society. Through all these efforts, Indian women decided to move forward and started choosing career paths, which was itself a challenge in their lives, since society often resented women who chose a career and moved forward. Women today have achieved a lot, but in reality they still have a long way to go. Women have started leaving the secure domain of the home and moving out, but a harsh, cruel, exploitative world awaits them, in which they must prove their talent to a world that sees women merely as vessels for producing children. In spite of all modernisation, a woman who emerges to claim traditionally male space must juggle many family problems and multiple roles to excel at a level that would have been perceived as impossible a generation ago. Still, women in India are storming traditionally male fields; even occupations that were once male monopolies, such as the defence services, the merchant navy, and the administrative or police services, now provide some of the best examples of women's participation. These women have not treated entering the men's world as a battle to be fought; rather, they have adapted themselves to the situation, justifying their roles and proving themselves. Over the last few decades, there has been enormous growth in the levels of education, confidence and, most importantly, ambition among women, who are striving for their rights and claiming a dignified place in society. Previously, women were educated only for the sake of getting married and starting a family, but nowadays they utilize their skills productively.

Since independence, women in India in general, and women in Jharkhand in particular, have played a very prominent role in all walks of life, including the professions. Success and achievement in any organisation depend on their contribution as well. Consequently, there has always been a need to study and shed light on the issues affecting women professionals, their empowerment and their work-life balance.

Keywords: women, work life balance, work empowerment, career, struggle, society, challenges, family, achievement

Procedia PDF Downloads 384
3122 Theoretical and Experimental Analysis of Hard Material Machining

Authors: Rajaram Kr. Gupta, Bhupendra Kumar, T. V. K. Gupta, D. S. Ramteke

Abstract:

Machining of hard materials is a recent technology for the direct production of work-pieces. The primary challenge in machining these materials is the selection of cutting tool inserts that facilitate an extended tool life and high-precision machining of the component. These materials are widely used for making precision parts for the aerospace industry. Nickel-based alloys are typically used in extreme-environment applications where a combination of strength, corrosion resistance and oxidation resistance is required. The present paper reports theoretical and experimental investigations carried out to understand the influence of machining parameters on the response parameters. Considering the basic machining parameters (speed, feed and depth of cut), a study has been conducted to observe their influence on material removal rate, surface roughness, cutting forces and the corresponding tool wear. Experiments were designed and conducted with the help of the Central Composite Rotatable Design technique. The results reveal that, within the given range of process parameters, higher depths of cut favor material removal rate, while low feed rates reduce cutting forces. Low feed rates and high rotational speeds are suitable for better surface finish and longer tool life.
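As an illustration of the Central Composite Rotatable Design mentioned above, the following sketch generates the coded design points for k factors (here the three machining factors: speed, feed, depth of cut); the number of center runs is an assumption for illustration, not a figure from the paper:

```python
import itertools

def ccd_points(k=3, n_center=6):
    """Coded points of a central composite rotatable design for k factors:
    a 2^k factorial core, 2k axial points at +/-alpha, and center runs."""
    alpha = (2 ** k) ** 0.25  # rotatability condition: alpha = (2^k)^(1/4)
    factorial = [list(p) for p in itertools.product([-1.0, 1.0], repeat=k)]
    axial = []
    for axis in range(k):
        for level in (-alpha, alpha):
            point = [0.0] * k
            point[axis] = level  # vary one factor at a time to +/-alpha
            axial.append(point)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

design = ccd_points()  # 8 factorial + 6 axial + 6 center = 20 runs for k = 3
```

Each coded point is later mapped to the real parameter ranges (rpm, mm/rev, mm of depth) before running the machining trials.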

Keywords: speed, feed, depth of cut, roughness, cutting force, flank wear

Procedia PDF Downloads 281
3121 Stochastic Repair and Replacement with a Single Repair Channel

Authors: Mohammed A. Hajeeh

Abstract:

This paper examines the behavior of a system which, upon failure, is either replaced with a certain probability p or imperfectly repaired with probability q. The system is analyzed using Kolmogorov's forward equations; the analytical expression for the steady-state availability is derived as an indicator of the system's performance. It is found that the analysis becomes more complex as the number of imperfect repairs increases. It is also observed that the availability increases as the number of states and the replacement probability increase. Using such an approach in more complex configurations and in dynamic systems is cumbersome; therefore, it is advisable to resort to simulation or heuristics. An example is provided for demonstration.
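A minimal sketch of the kind of closed-form steady-state availability the paper derives: for a single unit with one repair channel, the balance equations of the Kolmogorov forward system in steady state give the up-state probability directly. The three-state structure and all rates below are illustrative assumptions, not the paper's actual model:

```python
def steady_state_availability(lam, p, mu_repair, mu_replace):
    """Steady-state availability of a unit that fails at rate lam and is
    then either replaced (probability p, completion rate mu_replace) or
    imperfectly repaired (probability 1 - p, completion rate mu_repair).

    Solving the steady-state Kolmogorov forward (balance) equations of
    this three-state chain gives the up-state probability in closed form:
        A = 1 / (1 + (1 - p) * lam / mu_repair + p * lam / mu_replace)
    """
    q = 1.0 - p
    return 1.0 / (1.0 + q * lam / mu_repair + p * lam / mu_replace)

# When replacement completes faster than repair, availability rises with
# p, consistent with the trend reported in the abstract.
```

With p = 1 the expression collapses to the familiar two-state result mu/(lam + mu).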

Keywords: repairable models, imperfect, availability, exponential distribution

Procedia PDF Downloads 284
3120 Consequences of Inadequate Funding in Nigerian Educational System

Authors: Sylvia Nkiru Ogbuoji

Abstract:

This paper discusses the consequences of inadequate funding in the Nigerian education system. It briefly explains the meaning of education in this context and identifies the various ways education in Nigeria can be funded. It highlights some of the consequences of inadequately funding the education system, including inadequate facilities for teaching and learning, brain drain to the West, unemployment, a crisis of poverty, and low staff morale. Finally, some recommendations are put forward: the government should increase the annual budget allocation to education in order to achieve educational objectives, and it should monitor the utilization of allocated funds to minimize embezzlement.

Keywords: consequences, corruption, education, funding

Procedia PDF Downloads 447
3119 The Coalescence Process of Droplet Pairs in Different Junctions

Authors: Xiang Wang, Yan Pang, Zhaomiao Liu

Abstract:

Droplet-based microfluidics has been studied extensively alongside the development of Micro-Electro-Mechanical Systems (MEMS), offering the advantages of high throughput, high efficiency, low cost and low polydispersity. Droplets, working as versatile carriers, provide isolated chambers in which the internal dispersed phase is protected from the outside continuous phase. Droplets are used to add reagents to start or stop bio-chemical reactions, to generate concentration gradients, and to realize hydrate crystallization or protein analyses, while droplet coalescence acts as an important control technology. In this paper, deionized water is used as the dispersed phase, and several kinds of oil are used as the continuous phase to investigate the influence of the viscosity ratio of the two phases on the coalescence process. The microchannels are fabricated by bonding a polydimethylsiloxane (PDMS) layer onto another PDMS flat plate after corona treatment. All newly made microchannels are rinsed with the continuous oil phase for hours before the experiments to ensure that swelling is fully developed. A high-speed microscope system is used to record serial videos at up to 2000 frames per second. The critical capillary numbers (Ca*) of droplet pairs in various junctions are studied and compared; Ca* varies with the junction and the liquids within the range of 0.002 to 0.01. However, droplets without extra control suffer from a lack of synchronism, which reduces the coalescence efficiency.
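The capillary number the abstract compares across junctions is the standard dimensionless ratio Ca = μu/σ of viscous to interfacial forces; a one-line helper with hypothetical property values shows the magnitudes involved in the reported Ca* range of 0.002 to 0.01:

```python
def capillary_number(viscosity, velocity, interfacial_tension):
    """Ca = mu * u / sigma (viscosity in Pa*s, velocity in m/s,
    interfacial tension in N/m); the inputs below are illustrative,
    not measured values from the study."""
    return viscosity * velocity / interfacial_tension

# e.g. a 0.05 Pa*s oil moving at 4 mm/s against a 0.04 N/m interface
ca = capillary_number(0.05, 0.004, 0.04)  # falls inside the reported Ca* range
```

Values this small indicate that interfacial tension dominates viscous stresses, which is why drainage of the thin film between the droplet pair controls whether coalescence occurs.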

Keywords: coalescence, concentration, critical capillary number, droplet pair, split

Procedia PDF Downloads 244
3118 Low Energy Technology for Leachate Valorisation

Authors: Jesús M. Martín, Francisco Corona, Dolores Hidalgo

Abstract:

Landfills present long-term threats to soil, air, groundwater and surface water due to the formation of greenhouse gases (methane and carbon dioxide) and of leachate from decomposing garbage. The composition of leachate differs from site to site and also within a landfill, and it alters with time (from weeks to years) since the landfilled waste is biologically highly active. The composition of the leachate depends mainly on factors such as the characteristics of the waste, the moisture content, climatic conditions, the degree of compaction and the age of the landfill. Therefore, leachate composition cannot be generalized, and traditional treatment models should be adapted in each case. Although leachate composition is highly variable, what different leachates have in common is their hazardous constituents and potential eco-toxicological effects on human health and terrestrial ecosystems. Since leachates have distinct compositions, each landfill or dumping site represents a different type of risk to its environment. Nevertheless, leachates always exhibit high organic concentration, conductivity, heavy metals and ammonia nitrogen, and they can affect the current and future quality of water bodies through uncontrolled infiltration. Therefore, control and treatment of leachate is one of the biggest issues in the design and management of urban solid waste treatment plants and landfills. This work presents a treatment model that will be carried out in situ using a cost-effective novel technology that combines solar evaporation/condensation with forward osmosis. The plant is powered by renewable energies (solar energy, biomass and residual heat), which minimizes the carbon footprint of the process. The final effluent quality is very high, allowing reuse (preferred) or discharge into watercourses. In the particular case of this work, the final effluents will be reused for cleaning and gardening purposes.

A minor semi-solid residual stream is also generated in the process. Due to its special composition (rich in metals and inorganic elements), this stream will be valorized in ceramic industries to improve the characteristics of the final products.

Keywords: forward osmosis, landfills, leachate valorization, solar evaporation

Procedia PDF Downloads 200
3117 Accelerated Molecular Simulation: A Convolution Approach

Authors: Jannes Quer, Amir Niknejad, Marcus Weber

Abstract:

Computational drug design is often based on molecular dynamics simulations of molecular systems. Molecular dynamics can be used to simulate, e.g., the binding and unbinding of a small drug-like molecule at the active site of an enzyme or a receptor. However, the time-scale of the overall binding event is many orders of magnitude longer than the time-scale of the simulation, so there is a need to speed up molecular simulations. In order to do so, the molecular dynamics trajectories have to be "steered" out of local minimizers of the potential energy surface, the so-called metastabilities, of the molecular system. Increasing the kinetic energy (temperature) is one possibility for accelerating the simulated processes; however, with temperature the entropy of the molecular system increases too, and this kind of steering is not directed enough to drive the molecule out of the minimum toward the saddle point. In this article, we give a new mathematical idea of how a potential energy surface can be changed in such a way that entropy is kept under control while the trajectories are still steered out of the metastabilities. In order to compute the unsteered transition behaviour on the basis of a steered simulation, we propose to use extrapolation methods. In the end, we show mathematically that our method accelerates the simulations along the direction in which the curvature of the potential energy surface changes the most, i.e., from local minimizers towards saddle points.
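One simple way to picture the convolution idea in the title is to smooth the potential energy surface with a Gaussian kernel, which lowers barriers more than it raises the minima, so trajectories escape metastabilities more easily. The one-dimensional double-well sketch below is an illustration of this effect, not the authors' actual construction:

```python
import math

def double_well(x):
    """Model potential with two metastable minima at x = +/-1."""
    return (x * x - 1.0) ** 2

def gaussian_smooth(values, dx, sigma):
    """Discrete convolution of a sampled potential with a normalized
    Gaussian kernel, mimicking the smoothing of metastabilities."""
    half = int(4 * sigma / dx)
    kernel = [math.exp(-0.5 * (i * dx / sigma) ** 2) for i in range(-half, half + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    out = []
    n = len(values)
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - half, 0), n - 1)  # clamp at the boundaries
            acc += k * values[idx]
        out.append(acc)
    return out

xs = [i * 0.01 - 2.0 for i in range(401)]  # grid on [-2, 2]
v = [double_well(x) for x in xs]
v_s = gaussian_smooth(v, 0.01, 0.2)
# The barrier at x = 0 (index 200) is lower, relative to the minima,
# on the smoothed surface than on the original one.
```

The smoothed surface keeps the location of the wells while reducing the barrier height, which is the directed "steering" effect the abstract describes, without the entropy blow-up of simply raising the temperature.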

Keywords: extrapolation, Eyring-Kramers, metastability, multilevel sampling

Procedia PDF Downloads 323
3116 Study of Microstructure and Mechanical Properties Obtained by FSW of Similar and Dissimilar Non-Ferrous Alloys Used in Aerospace and Automobile Industry

Authors: Ajay Sidana, Kulbir Singh Sandhu, Balwinder Singh Sidhu

Abstract:

Joining dissimilar non-ferrous alloys such as aluminum and magnesium alloys is important in various automobile and aerospace applications due to their low density and good corrosion resistance. Friction Stir Welding (FSW), a solid-state joining process, successfully welds difficult-to-weld similar and dissimilar aluminum and magnesium alloys. Two tool rotation speeds were selected, keeping the traverse speed constant, to weld similar and dissimilar alloys. Similar (Al to Al) and dissimilar (Al to Mg) weld joints were obtained by FSW. SEM scans revealed that the higher tool rotation speed fragments the coarse grains of the base material into fine grains in the weld zone. There were also fewer welding defects in the weld joints obtained at the higher tool rotation speed. In the dissimilar joints, the two materials mixed with each other, forming newly recrystallized intermetallics. There was a decrease in hardness in the similar weld joints; however, there was a significant increase in the hardness of the weld zone in the dissimilar weld joints due to the stirring action of the tool and the formation of intermetallics. Tensile tests revealed a decrease in percentage elongation in both similar and dissimilar weld joints.

Keywords: aluminum alloys, magnesium alloys, friction stir welding, microstructure, mechanical properties

Procedia PDF Downloads 448
3115 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOA was first introduced in 2014, when its performance on a class of Max-Cut instances was better than that of the best known classical algorithm at the time. Whilst classical algorithms have since improved and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and the work is often used as a benchmark and as a foundation for exploring QAOA variants. Alongside other famous algorithms like Grover's or Shor's, it highlights to the world the potential that quantum computing holds, and the prospect of a real quantum advantage which, if the hardware continues to improve, could open a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs do not rely on gradient or linear optimization methods for the search in the latent space, and because of this freedom from gradients, they should suffer less from barren plateaus. Secondly, since the algorithm searches the solution space through a population of solutions, it can be parallelized to speed up the search and optimization.

The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with traditional QAOA using the COBYLA optimizer, a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that consistently beats the original QAOA or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
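A gradient-free outer loop of the kind described, replacing COBYLA in the QAOA parameter search, can be sketched as a small (mu + lambda) evolution strategy. Since no quantum simulator is assumed here, a simple quadratic surrogate stands in for the QAOA expectation value; the population size, mutation scale and surrogate optimum are all illustrative inventions:

```python
import random

random.seed(1)

def surrogate_cost(params):
    """Stand-in for the QAOA expectation value at angles (gamma, beta);
    a real run would evaluate the circuit on a simulator or QPU."""
    gamma, beta = params
    return (gamma - 0.8) ** 2 + (beta - 0.3) ** 2  # minimum at (0.8, 0.3)

def evolve(cost, dim=2, pop_size=20, generations=60, sigma=0.2):
    """(mu + lambda) evolution strategy: gradient-free, hence less prone
    to barren plateaus, and trivially parallel across cost evaluations."""
    pop = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=cost)
        parents = ranked[: pop_size // 4]  # elitist truncation selection
        offspring = [
            [g + random.gauss(0.0, sigma) for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + offspring
    return min(pop, key=cost)

best = evolve(surrogate_cost)  # parameters near the surrogate optimum
```

Because each generation's cost evaluations are independent, the inner loop maps directly onto the parallelization plan the abstract mentions.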

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 57
3114 A Case Study for User Rating Prediction on Automobile Recommendation System Using Mapreduce

Authors: Jiao Sun, Li Pan, Shijun Liu

Abstract:

Recommender systems have been widely used in contemporary industry, and plenty of work has been done in this field to help users identify items of interest. The Collaborative Filtering (CF) algorithm is an important technology in recommender systems. However, less work has been done on automobile recommendation systems despite the sharp increase in the number of automobiles, and computational speed is a major weakness of collaborative filtering. Using the MapReduce framework to optimize the CF algorithm is therefore a viable solution to this performance problem. In this paper, we present a recommendation approach for users' comments on industrial automobiles with various properties, based on real-world industrial datasets of user-automobile comment data, and we provide recommendations for automobile providers, helping them predict users' comments on automobiles with new properties. Firstly, we address the sparseness of the matrix using a previously constructed score matrix. Secondly, we solve the data normalization problem by removing dimensional effects from the raw automobile data, since the different dimensions of automobile properties introduce large errors into the CF calculation. Finally, we use the MapReduce framework to optimize the CF algorithm, substantially improving the computational speed. The UV decomposition used in this paper is a commonly used matrix factorization technique in CF that avoids calculating the interpolation weights of neighbors, which is more convenient in industry.
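The UV decomposition step can be sketched as plain stochastic gradient descent on the observed entries of a toy user-automobile rating matrix. In the paper this computation is distributed with MapReduce; the sequential sketch below (with invented ratings and hyperparameters) only shows the factorization logic:

```python
import random

random.seed(0)

# Toy user x automobile rating matrix; None marks a missing comment score.
R = [
    [5.0, 3.0, None, 1.0],
    [4.0, None, None, 1.0],
    [1.0, 1.0, None, 5.0],
    [None, 1.0, 5.0, 4.0],
]

def uv_factorize(ratings, k=2, steps=3000, lr=0.01, reg=0.02):
    """SGD on the squared error of observed entries only; a MapReduce job
    would shard these updates over blocks of the rating matrix."""
    n, m = len(ratings), len(ratings[0])
    U = [[random.gauss(0.0, 0.1) for _ in range(k)] for _ in range(n)]
    V = [[random.gauss(0.0, 0.1) for _ in range(k)] for _ in range(m)]
    for _ in range(steps):
        for i in range(n):
            for j in range(m):
                if ratings[i][j] is None:
                    continue  # unobserved entries carry no gradient
                err = ratings[i][j] - sum(U[i][f] * V[j][f] for f in range(k))
                for f in range(k):
                    u, v = U[i][f], V[j][f]
                    U[i][f] += lr * (err * v - reg * u)
                    V[j][f] += lr * (err * u - reg * v)
    return U, V

U, V = uv_factorize(R)

def predict(i, j):
    """Predicted comment score of user i for automobile j."""
    return sum(U[i][f] * V[j][f] for f in range(2))
```

Filling the `None` cells with `predict` is the neighbor-weight-free completion the abstract describes; the ratings and hyperparameters above are invented for illustration.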

Keywords: collaborative filtering, recommendation, data normalization, mapreduce

Procedia PDF Downloads 214