Search results for: synchronous machine
407 Coils and Antennas Fabricated with Sewing Litz Wire for Wireless Power Transfer
Authors: Hikari Ryu, Yuki Fukuda, Kento Oishi, Chiharu Igarashi, Shogo Kiryu
Abstract:
Recently, wireless power transfer has been developed in various fields. Magnetic coupling is popular for feeding power over relatively short distances at lower frequencies, while electromagnetic wave coupling at high frequencies is used for long-distance power transfer. Wireless power transfer has also attracted attention in the e-textile field: at present, many body-worn electric systems require rigid batteries, and this technology makes it possible to remove such batteries from the systems. Flexible coils have been studied for such applications. Magnetic-coupling power transfer requires coils with a high Q factor, while electromagnetic coupling requires antennas with low return loss. Litz wire is flexible enough to be sewn onto fabric to form coils and antennas, and it has low resistivity. In this study, the electrical characteristics of coils and antennas fabricated from Litz wire using two sewing techniques are investigated. As examples, a coil and an antenna are described, both fabricated with 330/0.04 mm Litz wire. The coil was a planar coil with a square shape: the outer side was 150 mm, the number of turns was 15, and the pitch between turns was 5 mm. The Litz wire of the coil was overstitched with a sewing machine. The coil was fabricated as a receiver coil for magnetically coupled wireless power transfer, and its Q factor was 200 at a frequency of 800 kHz. A wireless power system driven by a power oscillator was constructed using the coil. The resonant frequency of the circuit was set to 123 kHz, where the switching loss of the power FETs was small. The power efficiency was 0.44 to 0.99, depending on the distance between the transmitter and receiver coils. As an example of a sewn antenna, a fractal-pattern antenna was stitched on a 500 mm x 500 mm fabric using a needle-punch method. The pattern was the second-order Vicsek fractal.
The return loss of the antenna was -28 dB at a frequency of 144 MHz.
Keywords: e-textile, flexible coils and antennas, Litz wire, wireless power transfer
Procedia PDF Downloads 134
406 A Distributed Mobile Agent-Based Intrusion Detection System for MANET
Authors: Maad Kamal Al-Anni
Abstract:
This study concerns an artificial neural network algorithm based on the multilayer perceptron (MLP), applied to the classification and clustering of mobile ad hoc network vulnerabilities. A mobile ad hoc network (MANET) is a ubiquitous internetwork of intelligent devices: an autonomous system of mobile nodes, connected via wireless links, that can sense their environment. Security is the most important concern in a MANET because such an auto-configuring network is easily penetrated. One of the powerful techniques for inspecting network packets is the intrusion detection system (IDS). In this article, we show the effectiveness of artificial neural networks, used for machine learning together with a stochastic measure (information gain), in classifying malicious behaviours in a simulated network with respect to different IDS techniques. The monitoring agent is responsible for the detection inference engine; audit data is collected from the collecting agent by simulating node attacks, and the outputs are contrasted with the normal behaviours of the framework. Whenever there is any deviation from the ordinary behaviours, the monitoring agent treats the event as an attack. We demonstrate a signature-based IDS approach in a MANET by implementing the back-propagation algorithm over an ensemble-based traffic table (TT), so that the signatures of malicious behaviours or undesirable activities can be reliably predicted and efficiently identified. By tuning the parameters of the back-propagation algorithm, the experiments empirically showed the effectiveness of the approach, with a detection rate of up to 98.6 percent.
The article also reports performance metrics, plotted with Xgraph, including packet delivery ratio (PDR), throughput (TP), and average delay (AD).
Keywords: intrusion detection system (IDS), mobile ad hoc networks (MANET), back-propagation algorithm (BPA), neural networks (NN)
Procedia PDF Downloads 194
405 Improving Student Performance Prediction Using a Majority Vote Ensemble Model for Higher Education
Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue
Abstract:
In higher education institutions, the most pressing priority is to improve student performance and retention. Educational data mining techniques use large volumes of student data to find new, hidden information in students' learning behaviour, particularly to uncover early symptoms of at-risk students. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. This paper aims to develop a reliable student performance prediction model for higher education institutions by identifying the features of student data that can improve prediction results, comparing ensemble learning techniques after preprocessing the data to identify the most appropriate one, and optimizing the hyperparameters. Data was gathered from two different systems, a student information system and an e-learning system, for undergraduate students in the College of Computer Science of a Saudi Arabian state university; the cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random forest, bagging, stacking, majority vote, and two boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas decision tree, support vector machine, and artificial neural network are the supervised learning techniques. Hyperparameters of the ensemble learning systems are fine-tuned for enhanced performance and optimal output. The findings imply that combining features of student behaviour from the e-learning and student information systems using majority vote produced better outcomes than the other ensemble techniques.
Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education
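As an illustration of the majority-vote principle the abstract relies on (a minimal sketch, not the authors' code; the base-model predictions and label meanings below are invented for the example):

```python
import numpy as np

def majority_vote(*predictions):
    """Combine label predictions from several classifiers by majority vote.

    Each argument is a 1-D array of non-negative integer class labels, one
    entry per sample; the returned array holds the most frequent label per
    sample (ties resolved toward the smallest label, per np.bincount).
    """
    stacked = np.stack(predictions)          # shape: (n_classifiers, n_samples)
    n_samples = stacked.shape[1]
    return np.array([
        np.bincount(stacked[:, i]).argmax()  # most common label for sample i
        for i in range(n_samples)
    ])

# Hypothetical predictions from three base models (say, a decision tree,
# an SVM, and a neural network) on five students: 1 = at-risk, 0 = not.
tree_pred = np.array([1, 0, 1, 0, 1])
svm_pred  = np.array([1, 1, 0, 0, 1])
nn_pred   = np.array([0, 0, 1, 0, 1])

print(majority_vote(tree_pred, svm_pred, nn_pred))  # -> [1 0 1 0 1]
```

In practice the same effect is obtained with a voting meta-estimator over fitted models; the point of the sketch is only that each sample's final label is the mode of the base models' labels.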
Procedia PDF Downloads 108
404 The Effect of Adhesive Ductility on the Bond Characteristics of CFRP/Steel Double Strap Joints Subjected to Dynamic Tensile Loading
Authors: Haider Al-Zubaidy, Xiao-Ling Zhao, Riadh Al-Mahaidi
Abstract:
In recent years, adhesively bonded fibre-reinforced polymer (FRP) composites have found their way into civil engineering applications and have attracted widespread attention as a viable retrofitting strategy for civil infrastructure such as bridges and buildings. When this method is adopted, the adhesive plays a significant role and controls the overall performance and degree of enhancement of the strengthened or upgraded structure, because the ultimate member strength is highly affected by the failure mode, which in turn depends considerably on the adhesive used. This paper presents experimental investigations of the effect of the adhesive on the bond between a CFRP patch and a steel plate under medium-rate impact tensile loading. Experiments were conducted using double strap joints prepared with two different adhesives, Araldite 420 and MBrace saturant. A drop-mass rig was used for the dynamic tests at impact speeds of 3.35, 4.43 and m/s, while quasi-static tests were performed at 2 mm/min using an Instron machine. The ultimate load-carrying capacity and failure modes were examined at all loading speeds. For both static and dynamic tests, the adhesive type has a significant effect on the ultimate joint strength: the double strap joints prepared with Araldite 420 showed higher strength than those prepared with MBrace saturant. The failure mechanism of the joints prepared with Araldite 420 is also completely different from that of the MBrace saturant samples: CFRP failure is the most common failure pattern for joints with Araldite 420, whereas the dominant failure for joints with MBrace saturant is adhesive failure.
Keywords: CFRP/steel double strap joints, adhesives of different ductility, dynamic tensile loading, bond between CFRP and steel
Procedia PDF Downloads 236
403 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence
Authors: Getaneh Berie Tarekegn
Abstract:
A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common way of providing continuous positioning services outdoors is a global navigation satellite system (GNSS); however, because of non-line-of-sight propagation, multipath, and weather conditions, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal-fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and Long-Term Evolution (LTE) fingerprints. The proposed scheme reduced the site-surveying workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. These numerical results prove that, compared with traditional methods, the proposed method can significantly improve positioning performance and reduce radio map construction costs.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
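A minimal sketch of the t-SNE dimensionality-reduction step described above (illustrative only, not the authors' pipeline; the "fingerprint" matrix is synthetic and scikit-learn is assumed to be available):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Synthetic stand-in for hybrid WLAN/LTE fingerprints: 60 samples drawn
# around two reference-point "clusters" in a 20-dimensional RSS space.
cluster_a = rng.normal(loc=-70.0, scale=2.0, size=(30, 20))
cluster_b = rng.normal(loc=-50.0, scale=2.0, size=(30, 20))
fingerprints = np.vstack([cluster_a, cluster_b])

# t-SNE projects the noisy high-dimensional fingerprints into a
# low-dimensional embedding that preserves the dominant neighbourhood
# structure, which is what the feature-extraction stage exploits.
embedding = TSNE(n_components=2, perplexity=10,
                 random_state=0).fit_transform(fingerprints)

print(embedding.shape)  # (60, 2)
```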
Procedia PDF Downloads 75
402 Intrusion Detection in SCADA Systems
Authors: Leandros A. Maglaras, Jianmin Jiang
Abstract:
The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The EU-funded Framework 7 (FP7) research project CockpitCI introduces intelligent intrusion detection, analysis, and protection techniques for critical infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected, and vulnerable, information and communication technology (ICT), whilst the control equipment, with its legacy software and hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis, and reaction tools that provide intelligence to field equipment, allowing it to make local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a supervisory control and data acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupts its correct functioning and tampers with its normal operation. The one-class support vector machine (OCSVM) is an intrusion detection mechanism that needs neither labeled training data nor any prior information about the kind of anomaly to expect, which makes it ideal for processing SCADA environment data and automating SCADA performance monitoring. The OCSVM module developed is trained offline on network traces and detects anomalies in the system in real time.
The module is part of an intrusion detection system (IDS) developed in the CockpitCI project and communicates with the other parts of the system by exchanging IDMEF messages that carry information about the source of the incident, its time, and a classification of the alarm.
Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection
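The one-class training idea described above can be sketched in a few lines (a toy example with invented traffic features, not the CockpitCI module; scikit-learn assumed available):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Synthetic stand-in for SCADA network-trace features (e.g. packet rate,
# mean packet size); the training set contains normal traffic only.
normal_train = rng.normal(loc=[100.0, 60.0], scale=[5.0, 3.0], size=(500, 2))

# The one-class SVM learns the boundary of "normal" without any labeled
# attack data or prior knowledge of the anomaly type.
model = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(normal_train)

normal_sample = np.array([[101.0, 59.0]])   # typical traffic
attack_sample = np.array([[500.0, 5.0]])    # flood-like anomaly, far outside

print(model.predict(normal_sample))  # [ 1]  -> inlier
print(model.predict(attack_sample))  # [-1]  -> flagged as anomalous
```

The `nu` parameter bounds the fraction of training points treated as outliers; small values fit a tight envelope around normal behaviour.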
Procedia PDF Downloads 553
401 Portuguese Guitar Strings Characterization and Comparison
Authors: P. Serrão, E. Costa, A. Ribeiro, V. Infante
Abstract:
The characteristic sonority of the Portuguese guitar is in great part what makes Fado so distinguishable from other traditional song styles. The Portuguese guitar is a pear-shaped plucked chordophone with six courses of double strings. This study compares the two types of plain strings available for the Portuguese guitar and used by musicians: one is stainless-steel spring wire, the other high-carbon spring steel (music wire). Some musicians mention noticeable differences in sound quality between these two string materials, such as slightly more brightness and sustain in the steel strings. Experimental tests were performed to characterize string tension at pitch, mechanical strength, and tuning stability using a universal testing machine, and to perform dimensional control and chemical composition analysis using a scanning electron microscope. The string dynamic-behaviour characterization experiments, including frequency response, inharmonicity, transient response, and damping phenomena, were made on a monochord test set-up designed and built in-house. The damping factor was determined for the fundamental frequency. As musicians are able to detect very small damping differences, an accurate characterization of the damping phenomena for all harmonics was necessary; for that purpose, another, improved monochord was set up and a new system identification methodology applied. Owing to the complexity of this task, several adjustments were necessary before good experimental data were obtained. In a few cases, the dynamic tests were repeated to detect any evolution of the damping parameters after the break-in period, when, according to players' experience, a new string sounds gradually less dull until reaching its typically brilliant timbre. Finally, each set of strings was played on one guitar by a distinguished player and recorded.
The recordings, which include individual notes, scales, chords, and a study piece, will be analysed to potentially characterize timbre variations.
Keywords: damping factor, music wire, Portuguese guitar, string dynamics
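The fundamental-frequency damping factor mentioned above is commonly estimated from the decay of successive vibration peaks via the logarithmic decrement; a minimal sketch on a synthetic decaying tone (illustrative only, not the authors' system identification method; all signal parameters are invented):

```python
import numpy as np

fs = 44100                       # sample rate (Hz)
f0 = 440.0                       # fundamental frequency (Hz)
zeta_true = 0.002                # damping ratio of the synthetic string tone
t = np.arange(0, 1.0, 1 / fs)
omega_n = 2 * np.pi * f0
x = np.exp(-zeta_true * omega_n * t) * \
    np.sin(omega_n * np.sqrt(1 - zeta_true**2) * t)

# Find successive positive peaks (samples larger than both neighbours).
peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
amps = x[peaks]

# Logarithmic decrement over m periods: delta = ln(A_0 / A_m) / m,
# then zeta = delta / sqrt(4 pi^2 + delta^2).
m = 100
delta = np.log(amps[0] / amps[m]) / m
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)

print(round(zeta_est, 4))  # close to the true value 0.002
```

Averaging the decrement over many periods, as done here, is what makes the estimate robust to peak-picking noise; the paper's full method additionally characterizes every harmonic, which a simple decrement on the raw waveform cannot do.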
Procedia PDF Downloads 553
400 Taguchi-Based Surface Roughness Optimization for Slotted and Tapered Cylindrical Products in Milling and Turning Operations
Authors: Vineeth G. Kuriakose, Joseph C. Chen, Ye Li
Abstract:
The research follows a systematic approach to optimizing the parameters of parts machined by turning and milling processes. The quality characteristic chosen is surface roughness, since surface finish plays an important role for parts that require surface contact. A tapered cylindrical surface is designed as the test specimen. The material chosen is aluminum alloy 6061, owing to its wide variety of industrial and engineering applications. A HAAS VF-2 TR computer numerical control (CNC) vertical machining center is used for milling, and a HAAS ST-20 CNC machine for turning. Taguchi analysis is used to optimize the surface roughness of the machined parts. An L9 orthogonal array is designed for four controllable factors with three levels each, resulting in 18 experimental runs in total. The signal-to-noise (S/N) ratio is calculated for achieving the specific target value of 75 ± 15 µin. The controllable parameters chosen for the turning process are feed rate, depth of cut, coolant flow, and finish cut; for the milling process, they are feed rate, spindle speed, step-over, and coolant flow. The uncontrollable factors are tool geometry for turning and tool material for milling. Hypothesis testing is conducted to study the significance of the uncontrollable factors for surface roughness. The optimal parameter settings were identified from the Taguchi analysis: the process capability Cp and the process capability index Cpk were improved from 1.76 and 0.02 to 3.70 and 2.10, respectively, for the turning process, and from 0.87 and 0.19 to 3.85 and 2.70, respectively, for the milling process. The mean surface roughness moved toward the target, from 60.17 µin to 68.50 µin, reducing the defect rate from 52.39% to 0% for the turning process, and from 93.18 µin to 79.49 µin, reducing the defect rate from 71.23% to 0% for the milling process.
The purpose of this study is to efficiently utilize Taguchi design analysis to improve surface roughness.
Keywords: surface roughness, Taguchi parameter design, CNC turning, CNC milling
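For a nominal-the-best characteristic such as a roughness target of 75 ± 15 µin, the S/N ratio used in Taguchi analysis is typically 10·log10(mean²/variance); a small sketch with invented roughness readings (not the study's data):

```python
import math

def sn_nominal_the_best(measurements):
    """Taguchi nominal-the-best signal-to-noise ratio: 10*log10(mean^2 / s^2).

    A higher S/N means the response is closer to a stable value with less
    scatter; factor levels that maximize it are selected as optimal.
    """
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((y - mean) ** 2 for y in measurements) / (n - 1)  # sample variance
    return 10 * math.log10(mean**2 / var)

# Hypothetical roughness readings (µin) from one run of the orthogonal array:
run = [74.0, 76.0, 75.0]
print(round(sn_nominal_the_best(run), 2))  # 37.5
```

In a full analysis this ratio is computed for each of the orthogonal-array runs, and the level of each factor with the highest average S/N is chosen.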
Procedia PDF Downloads 155
399 Acoustic Emission for Tool-Chip Interface Monitoring during Orthogonal Cutting
Authors: D. O. Ramadan, R. S. Dwyer-Joyce
Abstract:
Measurement of the interface conditions in a cutting-tool contact provides essential information for performance monitoring and control. This interface is the path for the heat flux into the cutting tool; the resulting rise in tool temperature drives the mechanisms of tool wear and thus affects the life of the cutting tool and productivity. This zone is represented by the tool-chip interface, so understanding and monitoring it is an important issue in machining. In this paper, an acoustic emission (AE) technique was used to find the correlation between AE parameters and the tool-chip interface. A response surface design (RSD) was used to analyse and optimize the machining parameters; the experimental design was based on the face-centered central composite design (CCD) in the Minitab environment. According to this design, a series of orthogonal cutting experiments under different cutting conditions was conducted on a Triumph 2500 lathe to study the sensitivity of the AE signal to changes in tool-chip contact length. The cutting parameters investigated were cutting speed, depth of cut, and feed, and the experiments were performed on 6082-T6 aluminium tube. All orthogonal cutting experiments were conducted unlubricated. The tool-chip contact area was investigated using a scanning electron microscope (SEM). The results indicate a strong dependence of the root mean square (RMS) of the AE signal on the cutting speed: the RMS increases with increasing cutting speed. A dependence on the tool-chip contact length was also observed, whereas changing the depth of cut and the feed had no observed effect on the RMS.
These dependencies are explained in terms of the strain and temperature in the primary and secondary shear zones, the tool-chip sticking and sliding phenomena, and the effect of these mechanical variables on dislocation activity at high strain rates. In conclusion, the acoustic emission technique has the potential to monitor the tool-chip interface in situ during turning and consequently could indicate the approaching end of life of a cutting tool.
Keywords: acoustic emission, tool-chip interface, orthogonal cutting, monitoring
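The RMS value the study correlates with cutting speed is a standard energy measure of the AE waveform; a minimal sketch on a synthetic signal (not the study's data):

```python
import numpy as np

def rms(signal):
    """Root mean square of a sampled signal: sqrt(mean(signal^2))."""
    signal = np.asarray(signal, dtype=float)
    return np.sqrt(np.mean(signal**2))

# Synthetic stand-in for an AE record: a unit-amplitude sine sampled over
# an integer number of cycles has RMS exactly 1/sqrt(2).
t = np.linspace(0, 1, 10000, endpoint=False)
ae_like = np.sin(2 * np.pi * 50 * t)

print(round(rms(ae_like), 3))  # 0.707
```

In monitoring practice the RMS is computed over short sliding windows of the AE stream, so that its trend can be tracked against cutting speed or contact length.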
Procedia PDF Downloads 487
398 Development of a Computer-Aided Diagnosis Tool for Brain Tumor Extraction and Classification
Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran
Abstract:
The brain is an important organ, responsible for most of our actions, such as vision and memory. However, diseases such as Alzheimer's and tumors can affect the brain and lead to partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect possible trouble early and start the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are used for the diagnosis of brain tumors; the most powerful and most used modality is magnetic resonance imaging (MRI). MRI images are analysed by a doctor in order to locate an eventual tumor in the brain and prescribe the appropriate treatment. Diverse image processing methods have also been proposed to help doctors identify and analyse tumors; indeed, many computer-aided diagnostic (CAD) tools embedding dedicated image processing algorithms are used by doctors as a second opinion. In this paper, we propose a new, advanced CAD for brain tumor identification, classification, and feature extraction, comprising three main parts. First, the brain MRI is loaded. Second, a robust technique for brain tumor extraction, based on both the discrete wavelet transform (DWT) and principal component analysis (PCA), is applied. The DWT is characterized by its multiresolution analytic property, which is why it was applied to the MRI images at different decomposition levels for feature extraction; nevertheless, this technique suffers from a main drawback, since it requires large storage and is computationally expensive. To reduce the dimension of the feature vector and the computing time, PCA is applied. In the last stage, based on the extracted features, the brain tumor is classified as either benign or malignant using the support vector machine (SVM) algorithm.
A CAD tool for brain tumor detection and classification covering all the above stages was designed and developed using a MATLAB GUIDE user interface.
Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM
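The DWT-then-PCA feature pipeline can be sketched compactly; here a one-level Haar transform and an SVD-based PCA in NumPy stand in for the paper's multiresolution MATLAB implementation (synthetic "image" data, illustrative only):

```python
import numpy as np

def haar_dwt_1d(x):
    """One-level 1-D Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

def pca_reduce(features, k):
    """Project row-wise feature vectors onto their first k principal components."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(0)
images = rng.normal(size=(8, 16))          # stand-in for flattened MRI slices

# DWT features: keep the approximation coefficients of each "image" ...
dwt_features = np.array([haar_dwt_1d(img)[0] for img in images])  # (8, 8)
# ... then compress them with PCA before feeding a classifier such as SVM.
reduced = pca_reduce(dwt_features, k=3)

print(dwt_features.shape, reduced.shape)  # (8, 8) (8, 3)
```

The real CAD uses 2-D wavelet decomposition at several levels; the sketch keeps only the structure of the chain: wavelet coefficients as raw features, PCA to shrink them, a classifier on the reduced vectors.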
Procedia PDF Downloads 250
397 Radar Track-Based Classification of Birds and UAVs
Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo
Abstract:
In recent years, the number of unmanned aerial vehicles (UAVs) has increased significantly. The rapid development of commercial and recreational drones makes them an important part of our society, yet despite their growing list of applications these vehicles pose a serious threat to civil and military installations: the detection, classification, and neutralization of such flying objects has become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios with a large number of tracks caused by flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and reaction time can be severely affected. Flying birds show velocity, radar cross-section, and, in general, characteristics similar to those of UAVs. Since no single feature can distinguish UAVs from birds, this paper uses a multiple-feature approach, in which an original feature selection technique is developed to feed binary classifiers trained to distinguish the two. Radar tracks acquired in the field for different UAVs and birds performing various trajectories were used to extract specifically designed target-movement features based on velocity, trajectory, and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, and others), both in terms of the number of selected features and the misclassification error. The results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with suitable classification accuracy (higher than 95%).
Keywords: birds, classification, machine learning, UAVs
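Genetic-algorithm feature selection of the kind described above can be sketched with a bit-mask population (a toy version with synthetic track features and an invented fitness function, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic track features: 200 targets x 6 features; only features 0 and 3
# actually separate the two classes (0 = bird, 1 = UAV).
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 6))
X[:, 0] += 2.0 * y          # informative (e.g. a velocity statistic)
X[:, 3] -= 2.0 * y          # informative (e.g. a signal-strength statistic)

def fitness(mask):
    """Score a feature subset: total class-mean separation on the selected
    features, minus a penalty per selected feature (favours small subsets)."""
    if not mask.any():
        return -1.0
    sel = X[:, mask]
    gaps = np.abs(sel[y == 1].mean(axis=0) - sel[y == 0].mean(axis=0))
    return gaps.sum() - 0.5 * mask.sum()

def genetic_select(n_feat=6, pop=30, gens=40):
    population = rng.random((pop, n_feat)) < 0.5     # random bit-masks
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[: pop // 2]]      # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < 0.1          # bit-flip mutation
            children.append(np.where(flip, ~child, child))
        population = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in population])
    return population[np.argmax(scores)]

best = genetic_select()
print(np.flatnonzero(best))  # expected to include the informative features 0 and 3
```

In the paper the fitness would instead be a classifier's cross-validated misclassification error plus a feature-count term; the surrogate separation score here keeps the sketch self-contained.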
Procedia PDF Downloads 222
396 Integration of Educational Data Mining Models into a Web-Based Support System for Predicting High School Student Performance
Authors: Sokkhey Phauk, Takeo Okazaki
Abstract:
The challenging task for educational institutions is to maximize the number of high-performing students and minimize the failure rate of poor-performing students. An effective way to approach this task is to learn students' learning patterns, together with their most influential factors, and to obtain an early prediction of learning outcomes in time to set up policies for improvement. Educational data mining (EDM) is an emerging discipline at the intersection of data mining, statistics, and machine learning, concerned with extracting useful knowledge and information for the sake of improvement and development of the educational environment. The aim of this work is to propose EDM techniques and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models is conducted, and high-performing models are subsequently tuned for higher performance; the hybrid random forest (hybrid RF) produces the most successful classification. For the context of intervention and improving learning outcomes, a feature selection method named MICHI, a combination of the mutual information (MI) and chi-square (CHI) algorithms based on ranked feature scores, is introduced to select a dominant feature set that improves prediction performance; the obtained dominant set is then used as information for intervention. Using the proposed EDM techniques, an academic performance prediction system (APPS) is developed for educational stakeholders to obtain early predictions of student learning outcomes for timely intervention. Experimental outcomes and evaluation surveys report the effectiveness and usefulness of the developed system, which is used to help educational stakeholders and related individuals intervene and improve student performance.
Keywords: academic performance prediction system, educational data mining, dominant factors, feature selection method, prediction model, student performance
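A rank-combination of mutual information and chi-square scores, in the spirit of the MICHI method described above, can be sketched as follows (our reading of the abstract, on synthetic data; not the authors' implementation):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, mutual_info_classif

# Synthetic stand-in for student data: with shuffle=False the first three
# columns are the informative features, the remaining five are noise.
X, y = make_classification(n_samples=400, n_features=8, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
X = X - X.min(axis=0)   # chi2 requires non-negative feature values

mi = mutual_info_classif(X, y, random_state=0)
ch, _ = chi2(X, y)

# Combine the two criteria by summing their ranks (0 = best per criterion);
# the features with the lowest combined rank form the "dominant" set.
mi_rank = np.argsort(np.argsort(-mi))
ch_rank = np.argsort(np.argsort(-ch))
combined = mi_rank + ch_rank
dominant = np.argsort(combined)[:3]

print(sorted(dominant))  # expected to favour the informative features 0-2
```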
Procedia PDF Downloads 106
395 Study of Mini Steel Re-Rolling and Pickling Mills for the Reduction of Accidents and Health Hazards
Authors: S. P. Rana
Abstract:
Objectives: To manufacture a very thin strip or a strip with a high-quality finish, a stainless steel sheet called a billet is re-rolled in a re-rolling mill to make 18-gauge stainless steel sheet. The rolls of the re-rolling mill exert tremendous pressure on the sheet, and there is a likely chance of a stainless steel strip breaking off the sheet. The objective of the study was to minimise the number of accidents in steel re-rolling mills due to the ejection of stainless steel strip, and to minimise the pollution caused by the pickling process used in these units. Methods: Given the high frequency and severity of accidents as well as the pollution hazard in re-rolling and pickling mills, it is essential to make arrangements for the prevention of accidents in this type of industry. The author carried out surveys and inspections of a large number of re-rolling and pickling mills and allied units. During these inspections, the working of the mills was closely studied and monitored. A number of accidents involving re-rolling mills were investigated, and remedial measures to prevent the recurrence of such accidents were subsequently suggested. The occupational safety and health systems of these units were assessed, and the level of compliance with statutory requirements was checked. The workers were medically examined and monitored to ascertain their health condition. Results: Proper use of safety gadgets by workers, machine guarding, and regular training brought the risk down to an acceptable level, and discharged effluent pollution was brought within permissible limits. Fatal accidents were reduced by 83%. Conclusions: Effective enforcement and implementation of the directions and suggestions given to the managements of such units brought the number of accidents down to a rational level; the number of fatal accidents fell by 83% during the study period.
The effective implementation of pollution control devices curtailed the pollution to an acceptable level.
Keywords: re-rolling mill, hazard, accident, health hazards
Procedia PDF Downloads 443
394 Designing the Management Plan for Health Care (Medical) Wastes in the Cities of Semnan, Mahdishahr and Shahmirzad
Authors: Rasouli Divkalaee Zeinab, Kalteh Safa, Roudbari Aliakbar
Abstract:
Introduction: Medical waste can lead to the generation and transmission of many infectious and contagious diseases due to the presence of pathogenic agents, necessitating special management for its collection, decontamination, and final disposal. This study aimed to design a centralized health care (medical) waste management program for the cities of Semnan, Mahdishahr, and Shahmirzad. Methods: This descriptive-analytical study was conducted over six months in the cities of Semnan, Mahdishahr, and Shahmirzad. The quantitative and qualitative characteristics of the generated wastes were determined by taking samples from all medical waste production centers. The equipment, devices, and machines required for the separate collection of the waste from the production centers and for its subsequent decontamination were then estimated. Next, the investment costs, current costs, and working capital required for the collection, decontamination, and final disposal of the wastes were determined. Finally, the fee for proper waste management for each category of medical waste-producing center was determined. Results: 1021 kilograms of medical waste are produced daily in the cities of Semnan, Mahdishahr, and Shahmirzad. Implementing the plan was estimated to require a 1000-liter autoclave, a machine for collecting medical waste, four 60-liter bins, four 120-liter bins, and four 1200-liter bins. The total annual medical waste management cost for Semnan City was also estimated, at 23,283,903,720 Iranian rials. Conclusion: The study results showed that establishing a proper management system for the medical waste generated in the three studied cities will cost the medical centers between 334,280 and 1,253,715 Iranian rials in fees.
The findings of this study provide comprehensive data on medical waste from the point of generation to the landfill site, which is vital for the government and the private sector.
Keywords: clinics, decontamination, management, medical waste
Procedia PDF Downloads 78
393 Technical Efficiency in Organic and Conventional Wheat Farms: Evidence from a Primary Survey from Two Districts of the Ganga River Basin, India
Authors: S. P. Singh, Priya, Komal Sajwan
Abstract:
With the increasing spread of organic farming in India, the costs, returns, efficiency, and social and environmental sustainability of organic vis-a-vis conventional farming systems have become topics of interest among agricultural scientists, economists, and policy analysts. A study estimating technical efficiency under these farming systems, particularly in the Ganga River Basin, where the promotion of organic farming is incentivized, can help to establish whether inputs are utilized to their maximum possible level and what measures could improve efficiency. This paper therefore analyses the technical efficiency of wheat farms operating under organic and conventional farming systems. The study is based on a primary survey of 600 farms (300 organic and 300 conventional) conducted in 2021 in two districts located in the Middle Ganga River Basin, India. Technical, managerial, and scale efficiencies of individual farms are estimated by applying the data envelopment analysis (DEA) methodology. The per-hectare value of wheat production is taken as the output variable, and the values of seeds, human labour, machine cost, plant nutrients, farmyard manure (FYM), plant protection, and irrigation charges are the input variables for estimating the farm-level efficiencies. A post-DEA analysis using the Tobit regression model identifies the factors that determine efficiency. The results show that technical efficiency is significantly higher in conventional than in organic farming systems, owing to a larger gap in scale efficiency than in managerial efficiency. Further, 9.8% of conventional farms and only 1.0% of organic farms are found to operate at the most productive scale size (MPSS), while 99% of organic and 81% of conventional farms operate under increasing returns to scale (IRS). Organic farms perform well in managerial efficiency, but their technical efficiency is lower than that of conventional farms, mainly because of their relatively smaller scale.
The paper suggests that technical efficiency in organic wheat farming can be increased by upscaling farm size through incentivized group/collective farming in clusters.
Keywords: organic, conventional, technical efficiency, determinants, DEA, Tobit regression
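The DEA step in this abstract can be sketched as one small linear program per farm. The sketch below is illustrative only: it uses SciPy's `linprog`, a single aggregate input (the survey uses seven), and three invented farms, but the input-oriented CCR structure is the one the method solves.

```python
# Minimal input-oriented CCR DEA sketch (hypothetical data, not the paper's survey).
# Each farm has one aggregate input (cost/ha) and one output (value of wheat/ha);
# with seven inputs the LP structure is identical, just with more input rows.
from scipy.optimize import linprog

def dea_ccr_input(inputs, outputs):
    """Return technical efficiency scores theta for each decision-making unit."""
    n = len(inputs)
    scores = []
    for o in range(n):
        # Variables: [theta, lambda_1..lambda_n]; minimize theta.
        c = [1.0] + [0.0] * n
        A_ub, b_ub = [], []
        # Input constraint: sum(lambda_j * x_j) - theta * x_o <= 0
        A_ub.append([-inputs[o]] + list(inputs))
        b_ub.append(0.0)
        # Output constraint: -sum(lambda_j * y_j) <= -y_o
        A_ub.append([0.0] + [-y for y in outputs])
        b_ub.append(-outputs[o])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return scores

# Three hypothetical farms: (input cost, output value) per hectare.
x = [2.0, 4.0, 5.0]
y = [2.0, 8.0, 5.0]
eff = dea_ccr_input(x, y)   # farm 2 defines the frontier; the others score 0.5
```

A farm with score 1.0 lies on the efficient frontier; scores below 1.0 give the proportional input reduction that would make it efficient.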
Procedia PDF Downloads 99
392 Artificial Intelligence Impact on Strategic Stability
Authors: Darius Jakimavicius
Abstract:
Artificial intelligence is the subject of intense debate in the international arena, identified both as a technological breakthrough and as a factor affecting strategic stability. Both the kinetic and non-kinetic development of AI and its application in the national strategies of the great powers may trigger a change in the security situation. Artificial intelligence is generally faster, more capable, and more efficient than humans, and there is a temptation to transfer decision-making and control responsibilities to it. Artificial intelligence that, once activated, can select and engage targets without further intervention by a human operator blurs the boundary between human and machine warfare, or perhaps combines the two. Artificial intelligence acts as a force multiplier that speeds up decision-making and reaction times on the battlefield. The role of humans is increasingly moving away from direct decision-making and from command and control processes involving the use of force. It is worth noting that the autonomy and precision of AI systems make the process of strategic stability more complex. Deterrence theory is currently in a phase of development in which deterrence is undergoing further strain and crisis due to the complexity of the evolving models enabled by artificial intelligence. Based on the concept of strategic stability and deterrence theory, it is appropriate to develop further research on the development and impact of AI in order to assess AI from both a scientific and a technical perspective: to capture a new niche in the scientific literature and academic terminology, to clarify the conditions for deterrence, and to identify the potential uses, impacts, and scale of AI deployment. The research problem is the impact of artificial intelligence developed by the great powers on strategic stability.
This thesis seeks to assess the impact of AI on strategic stability and deterrence principles, with human exclusion from the decision-making and control loop as a key axis. The interaction between AI and human actions and interests can determine fundamental changes in the great powers' defense and deterrence, and the development and application of AI-based great-power strategies can lead to a change in strategic stability.
Keywords: artificial intelligence, strategic stability, deterrence theory, decision-making loop
Procedia PDF Downloads 42
391 Heat-Induced Uncertainty of Industrial Computed Tomography Measuring a Stainless Steel Cylinder
Authors: Verena M. Moock, Darien E. Arce Chávez, Mariana M. Espejel González, Leopoldo Ruíz-Huerta, Crescencio García-Segundo
Abstract:
Uncertainty analysis in industrial computed tomography is commonly related to metrological trace tools, which offer precision measurements of external part features. Unfortunately, there is no such reference tool for internal measurements that would profit from the unique imaging potential of X-rays. Uncertainty approximations for computed tomography are still based on general aspects of the industrial machine and do not adapt to acquisition parameters or part characteristics. The present study investigates the impact of the acquisition time on the dimensional uncertainty when measuring a stainless steel cylinder with a circular tomography scan. The authors develop the figure difference method for X-ray radiography to evaluate the volumetric differences introduced within the projected absorption maps of the metal workpiece. The dimensional uncertainty is dominantly influenced by photon energy dissipated as heat, causing thermal expansion of the metal, as monitored by an infrared camera within the industrial tomograph. With the proposed methodology, we are able to show evolving temperature differences throughout the tomography acquisition. This is an early study showing that the number of projections in computed tomography induces dimensional error due to energy absorption. The error magnitude depends on the thermal properties of the sample and the acquisition parameters, appearing as non-uniform, unwanted volumetric expansion. We introduce infrared imaging for the experimental display of metrological uncertainty in a particular metal part of symmetric geometry. We assess that the current results are of fundamental value for reaching a balance between the number of projections and the uncertainty tolerance when performing X-ray dimensional exploration in precision measurements with industrial tomography.
Keywords: computed tomography, digital metrology, infrared imaging, thermal expansion
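The scale of the heat-induced error described above can be estimated with the linear thermal expansion relation ΔL = α·L·ΔT. The sketch below uses a generic expansion coefficient for austenitic stainless steel and an assumed temperature rise, not values measured in the study:

```python
# Back-of-the-envelope estimate of the dimensional error that heating introduces
# during a long CT acquisition. Coefficient and temperature rise are generic
# assumptions for austenitic stainless steel, not measurements from the study.
ALPHA_SS = 16e-6          # linear expansion coefficient of stainless steel, 1/K

def thermal_expansion_error(length_m, delta_T_K, alpha=ALPHA_SS):
    """Return the length change (m) of a part heated by delta_T_K."""
    return alpha * length_m * delta_T_K

# A 25 mm cylinder warming by 5 K over the scan:
error_m = thermal_expansion_error(0.025, 5.0)
print(f"expansion: {error_m * 1e6:.1f} um")   # ~2 um, comparable to fine CT voxel sizes
```

Even a few kelvin of warming therefore produces micrometre-scale expansion, which is why the number of projections (and hence exposure time) feeds directly into the dimensional uncertainty.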
Procedia PDF Downloads 122
390 Motion Planning and Simulation Design of a Redundant Robot for Sheet Metal Bending Processes
Authors: Chih-Jer Lin, Jian-Hong Hou
Abstract:
Industry 4.0 is a vision of integrated industry implemented through artificial intelligence computing, software, and Internet technologies. The main goal of Industry 4.0 is to deal with the difficulties arising from competitive pressures in the marketplace. For today's manufacturing factories, the type of production has changed from mass production (high-quantity production with low product variety) to medium-quantity, high-variety production. To offer flexibility, better quality control, and improved productivity, robot manipulators are used to combine material processing, material handling, and part positioning systems into an integrated manufacturing system. To implement an automated system for sheet metal bending operations, the motion planning of a 7-degrees-of-freedom (DOF) robot is studied in this paper. A virtual reality (VR) environment of a bending cell, which consists of the robot and a bending machine, is established using the Virtual Robot Experimentation Platform (V-REP) simulator. For sheet metal bending operations, the robot needs only six DOFs for pick-and-place or tracking tasks. Since this 7-DOF robot has more DOFs than required to execute the specified task, it is called a redundant robot, and its kinematic redundancy can be exploited to deal with task-priority problems. For redundant robots, the pseudo-inverse of the Jacobian is the most popular motion planning method, but pseudo-inverse methods usually lead to a kind of chaotic motion with unpredictable arm configurations as the Jacobian matrix loses rank. To overcome this problem, we propose a method that formulates the motion planning problem as an optimization problem. Moreover, a genetic algorithm (GA) based method is proposed to deal with the motion planning of the redundant robot.
Simulation results validate the feasibility of the proposed method for motion planning of the redundant robot in automated sheet-metal bending operations.
Keywords: redundant robot, motion planning, genetic algorithm, obstacle avoidance
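The GA formulation described above can be illustrated on a much smaller problem: a 3-link planar arm reaching a 2-D target, which already has one redundant DOF. Link lengths, GA settings, and the fitness function below are invented for illustration, not the paper's 7-DOF formulation:

```python
# Sketch of GA-based inverse kinematics for a redundant planar arm. The fitness
# combines the primary objective (reach the target) with a small secondary term
# (prefer small joint excursions), which is how redundancy can be resolved
# without inverting a rank-deficient Jacobian.
import math, random

L = [1.0, 1.0, 1.0]                      # link lengths (illustrative)
TARGET = (1.5, 1.0)                      # desired end-effector position

def fk(q):
    """Forward kinematics of the planar chain (cumulative joint angles)."""
    x = y = s = 0.0
    for li, qi in zip(L, q):
        s += qi
        x += li * math.cos(s)
        y += li * math.sin(s)
    return x, y

def fitness(q):
    x, y = fk(q)
    return math.hypot(x - TARGET[0], y - TARGET[1]) + 0.01 * sum(abs(v) for v in q)

def ga(pop_size=80, gens=300, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-math.pi, math.pi) for _ in L] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 4]                 # elitism
        children = list(elite)
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)             # parents from the elite
            w = rng.random()
            child = [w * ai + (1 - w) * bi + rng.gauss(0, 0.05)
                     for ai, bi in zip(a, b)]       # blend crossover + mutation
            children.append(child)
        pop = children
    return min(pop, key=fitness)

best = ga()   # joint angles whose end-effector lies (near) the target
```

Unlike the pseudo-inverse, the GA never needs the Jacobian, so it degrades gracefully near singular configurations; the price is many more fitness evaluations.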
Procedia PDF Downloads 147
389 Human Factors Interventions for Risk and Reliability Management of Defence Systems
Authors: Chitra Rajagopal, Indra Deo Kumar, Ila Chauhan, Ruchi Joshi, Binoy Bhargavan
Abstract:
Reliability and safety are essential for the success of mission-critical and safety-critical defense systems. Humans are part of the entire life cycle of defense systems development and deployment. The majority of industrial accidents or disasters are attributed to human errors. Therefore, considerations of human performance and human reliability are critical in all complex systems, including defense systems. Defense systems operating from ground, naval, and aerial platforms in diverse conditions impose unique physical and psychological challenges on their human operators. Some of the safety- and mission-critical defense systems with human-machine interactions are fighter planes, submarines, warships, combat vehicles, missiles launched from aerial and naval platforms, etc. Human roles and responsibilities are also going through a transition due to the infusion of artificial intelligence and cyber technologies. Human operators not accustomed to such challenges are more likely to commit errors, which may lead to accidents or loss events. In such a scenario, it is imperative to understand the human factors in defense systems for better system performance, safety, and cost-effectiveness. A case study using a Task Analysis (TA) based methodology for the assessment and reduction of human errors in an air and missile defense system, in the context of emerging technologies, is presented. Action-oriented task analysis techniques such as Hierarchical Task Analysis (HTA) and the Operator Action Event Tree (OAET), along with the Critical Action and Decision Event Tree (CADET) for cognitive task analysis, were used. Human factors assessment based on the task analysis helps in realizing safe and reliable defense systems. These techniques helped in the identification of human errors during different phases of air and missile defence operations, contributing to the requirement of a safe, reliable, and cost-effective mission.
Keywords: defence systems, reliability, risk, safety
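Once a task decomposition of the kind HTA produces is available, per-step human-error probabilities can be propagated up the tree. The toy sketch below illustrates that propagation only; the task names and probabilities are invented, not values from the case study:

```python
# Toy sketch: propagating human-error probabilities through a hierarchical task
# decomposition, in the spirit of HTA/event-tree methods. Assumes sequential
# subtasks and independent errors; real analyses model dependence and recovery.

def failure_prob(task):
    """A task fails if any of its sequential subtasks fails (independence assumed)."""
    if "subtasks" in task:
        p_success = 1.0
        for sub in task["subtasks"]:
            p_success *= 1.0 - failure_prob(sub)
        return 1.0 - p_success
    return task["p_error"]

engage_target = {
    "name": "engage incoming threat",
    "subtasks": [
        {"name": "detect track on display", "p_error": 0.003},
        {"name": "classify threat",         "p_error": 0.010},
        {"name": "authorize launch",        "p_error": 0.002},
        {"name": "execute launch sequence", "p_error": 0.001},
    ],
}
p = failure_prob(engage_target)   # roughly the sum of the small step probabilities
```

Such a roll-up makes it easy to see which subtask dominates the overall error probability and hence where an intervention pays off most.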
Procedia PDF Downloads 136
388 Evaluation of Mechanical Properties and Surface Roughness of Nanofilled and Microhybrid Composites
Authors: Solmaz Eskandarion, Haniyeh Eftekhar, Amin Fallahi
Abstract:
Introduction: Nowadays, cosmetic dentistry has gained greater attention because of the changing demands of dental patients. Composite resin restorations play an important role in the field of esthetic restorations. Due to the variation between resin composites, it is important to be aware of their mechanical properties and surface roughness. So, the aim of this study was to compare the mechanical properties (surface hardness, compressive strength, diametral tensile strength) and surface roughness of four kinds of resin composites after a thermal aging process. Materials and Method: 10 samples of each composite resin (Gradia Direct (GC), Filtek Z250 (3M), G-ænial (GC), Filtek Z350 (3M, Filtek Supreme)) were prepared for the evaluation of each property (120 samples in total). Thermocycling (between 5 and 55 degrees centigrade, 10,000 cycles) was applied. Then, the samples were tested for compressive strength and diametral tensile strength using a universal testing machine (UTM), and surface hardness was evaluated with a microhardness testing machine. Surface roughness was evaluated with a scanning electron microscope after surface polishing. Results: Regarding compressive strength (CS), Filtek Z250 showed the highest value, but there were no significant differences between the four groups. Filtek Z250 also showed the highest diametral tensile strength (DTS), followed (highest to lowest) by Filtek Z350, G-ænial, and Gradia Direct; for DTS, all of the groups differed significantly (P<0.05). The Vickers hardness number (VHN) of Filtek Z250 was the greatest, followed by Filtek Z350, G-ænial, and Gradia Direct. The surface roughness of the nanofilled composites was lower than that of the microhybrid composites, although the surface roughness of GC G-ænial was slightly greater than that of Filtek Z250. Conclusion: This study indicates that there is no evident significant difference between the groups in their mechanical properties.
However, Filtek Z250 showed slightly better mechanical properties overall, and regarding surface roughness, the nanofilled composites performed better than the microhybrid ones.
Keywords: mechanical properties, surface roughness, resin composite, compressive strength, thermal aging
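The strength values compared above come from standard formulas for cylindrical specimens: DTS = 2P/(πDt) for the diametral test and CS = P/(πr²) for the compressive test. The sketch below uses hypothetical specimen dimensions and failure loads, not the study's measurements:

```python
# Standard strength formulas for cylindrical specimens (hypothetical numbers,
# not the study's data): diametral tensile strength and compressive strength.
import math

def dts_MPa(load_N, diameter_mm, thickness_mm):
    """Diametral tensile strength: DTS = 2P / (pi * D * t)."""
    return 2 * load_N / (math.pi * diameter_mm * thickness_mm)

def cs_MPa(load_N, diameter_mm):
    """Compressive strength: CS = P / (pi * r^2)."""
    return load_N / (math.pi * (diameter_mm / 2) ** 2)

# A 6 mm x 3 mm disc failing at 2 kN in the diametral test:
print(f"DTS = {dts_MPa(2000, 6, 3):.1f} MPa")   # ~70.7 MPa
```

Because load is in newtons and lengths in millimetres, the result comes out directly in MPa (N/mm²).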
Procedia PDF Downloads 355
387 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. C and C++ open-source code is now available that makes it possible to create a large-scale machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but require a higher execution time, as the word embedding algorithm adds some complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
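The tokenize-vectorize-classify flow that underlies such a pipeline can be shown on a deliberately tiny scale. The sketch below stands in for the paper's learned embeddings and deep networks with a bag-of-tokens perceptron that flags unsafe C calls on invented snippets; it illustrates only the flow, not the method's accuracy:

```python
# Deliberately tiny stand-in for a function-level vulnerability classifier:
# tokenize C functions into bag-of-tokens vectors and train a perceptron.
# Training snippets and labels are invented for illustration.
import re
from collections import Counter

def tokenize(src):
    return re.findall(r"[A-Za-z_]\w*", src)

train = [
    ("void f(char *s){ char b[8]; strcpy(b, s); }",            1),
    ("void g(){ char b[16]; gets(b); }",                       1),
    ("void h(char *s){ char b[8]; strncpy(b, s, sizeof b); }", 0),
    ("int add(int a, int b){ return a + b; }",                 0),
]
vocab = sorted({t for src, _ in train for t in tokenize(src)})

def vec(src):
    c = Counter(tokenize(src))
    return [c[t] for t in vocab]

# Perceptron training (labels in {0, 1}).
w = [0.0] * len(vocab)
bias = 0.0
for _ in range(20):
    for src, label in train:
        x = vec(src)
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0
        err = label - pred
        w = [wi + err * xi for wi, xi in zip(w, x)]
        bias += err

def predict(src):
    return 1 if sum(wi * xi for wi, xi in zip(w, vec(src))) + bias > 0 else 0
```

The learned weights end up concentrated on tokens like `strcpy` and `gets`; embeddings and recurrent networks generalize this idea to token context rather than token presence.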
Procedia PDF Downloads 90
386 Radar Fault Diagnosis Strategy Based on Deep Learning
Authors: Bin Feng, Zhulin Zong
Abstract:
Radar systems are critical in modern military, aviation, and maritime operations, and their proper functioning is essential for the success of these operations. However, due to the complexity and sensitivity of radar systems, they are susceptible to various faults that can significantly affect their performance. Traditional radar fault diagnosis strategies rely on expert knowledge and rule-based approaches, which are often limited in effectiveness and require a lot of time and resources. Deep learning has recently emerged as a promising approach for fault diagnosis due to its ability to learn features and patterns from large amounts of data automatically. In this paper, we propose a radar fault diagnosis strategy based on deep learning that can accurately identify and classify faults in radar systems. Our approach uses convolutional neural networks (CNNs) to extract features from radar signals and to classify faults from those features. The proposed strategy is trained and validated on a dataset of measured radar signals with various types of faults. The results show that it achieves high accuracy in fault diagnosis. To further evaluate the effectiveness of the proposed strategy, we compare it with traditional rule-based approaches and other machine learning-based methods, including decision trees, support vector machines (SVMs), and random forests. The results demonstrate that our deep learning-based approach outperforms the traditional approaches in terms of accuracy and efficiency. Finally, we discuss the potential applications and limitations of the proposed strategy, as well as future research directions. Our study highlights the importance and potential of deep learning for radar fault diagnosis and suggests that it can be a valuable tool for improving the performance and reliability of radar systems.
In summary, this paper presents a radar fault diagnosis strategy based on deep learning that achieves high accuracy and efficiency in identifying and classifying faults in radar systems. The proposed strategy has significant potential for practical applications and can pave the way for further research.
Keywords: radar system, fault diagnosis, deep learning, radar fault
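The CNN building blocks mentioned above (1-D convolution, ReLU, max-pooling) can be shown in a few lines on an invented radar-like pulse; a real fault classifier would learn many such kernels from data rather than using the hand-picked edge detector below:

```python
# Minimal illustration of 1-D CNN feature extraction on a radar-like pulse.
# The kernel and signals are invented; only the mechanics are demonstrated.

def conv1d(signal, kernel):
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

# A clean pulse and a "faulty" pulse with a dropout in the middle.
pulse  = [0, 0, 1, 1, 1, 1, 0, 0]
faulty = [0, 0, 1, 0, 0, 1, 0, 0]
edge_kernel = [1.0, -1.0]          # responds to falling edges

feat_ok    = max_pool(relu(conv1d(pulse,  edge_kernel)))
feat_fault = max_pool(relu(conv1d(faulty, edge_kernel)))
# The faulty pulse produces an extra falling-edge activation, so the two
# feature vectors differ and a downstream classifier can separate them.
```

Stacking learned kernels, nonlinearities, and pooling layers is exactly how a CNN turns raw signal samples into fault-discriminative features.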
Procedia PDF Downloads 91
385 Big Data Analytics and Public Policy: A Study in Rural India
Authors: Vasantha Gouri Prathapagiri
Abstract:
Innovations in the ICT sector facilitate a better quality of life for citizens across the globe. Countries that facilitate the usage of new ICT techniques, i.e., big data analytics, find it easier to fulfil the needs of their citizens. Big data is characterised by its volume, variety, and velocity. Analytics involves processing it in a cost-effective way in order to draw conclusions for useful application. Big data also extends into the fields of machine learning and artificial intelligence, all leading to accuracy in data presentation useful for public policy making. Hence, using data analytics in public policy making is a proper way to march towards the all-round development of any country. Data-driven insights can help a government take important strategic decisions with regard to the socio-economic development of its country. Developed nations like the UK and the USA are already far ahead on the path of digitization with the support of big data analytics. India is a huge country and is currently on the path of massive digitization, being realised through the Digital India Mission. Internet connections per household are on the rise every year. This translates into a massive data set that has the potential to transform the public services delivery system into an effective service mechanism for Indian citizens. In fact, when compared to developed nations, this capacity is underutilized in India, particularly in the administrative system in rural areas. The present paper focuses on the need for big data analytics adoption in Indian rural administration and its contribution towards the development of the country at a faster pace. The results of the research point to the need for increasing awareness and serious capacity building of government personnel working in rural development with regard to big data analytics and its utility for the development of the country.
Multiple public policies are framed and implemented for rural development, yet the results are not as effective as they should be. Big data has a major role to play in this context, as it can assist in improving both policy making and implementation aimed at the all-round development of the country.
Keywords: Digital India Mission, public service delivery system, public policy, Indian administration
Procedia PDF Downloads 159
384 Microstructure Dependent Fatigue Crack Growth in Aluminum Alloy
Authors: M. S. Nandana, K. Udaya Bhat, C. M. Manjunatha
Abstract:
In this study, aluminum alloy 7010 was subjected to three different ageing treatments, i.e., peak ageing (T6), over-ageing (T7451), and retrogression and re-ageing (RRA), to study the influence of the precipitate microstructure on the fatigue crack growth rate behavior. The microstructural modification was studied by using transmission electron microscopy (TEM) to examine the change in the size and morphology of precipitates in the matrix and on the grain boundaries. Standard compact tension (CT) specimens were fabricated and tested under constant-amplitude fatigue crack growth tests to evaluate the influence of heat treatment on the fatigue crack growth rate properties. The tests were performed in a computer-controlled servo-hydraulic test machine applying a load ratio R = 0.1 at a loading frequency of 10 Hz, as per ASTM E647. The fatigue crack growth was measured by the compliance technique using a CMOD gauge attached to the CT specimen. The average size of the matrix precipitates was found to be 16-20 nm in the T7451, 5-6 nm in the RRA, and 2-3 nm in the T6 condition, respectively. The grain boundary precipitate, which was continuous in T6, was disintegrated in the RRA and T7451 conditions. The PFZ width was lower in the RRA than in the T7451 condition. The crack growth rate was highest in T7451 and lowest in the RRA-treated alloy. The RRA-treated alloy also exhibits an increase in the threshold stress intensity factor range (∆Kₜₕ). The measured ∆Kₜₕ was 11.1, 10.3, and 5.7 MPa·m¹/² in the RRA, T6, and T7451 alloys, respectively. The fatigue crack growth rate in the RRA-treated alloy was nearly 2-3 times lower than in T6 and one order of magnitude lower than that observed in the T7451 condition. The surface roughness of the RRA-treated alloy was more pronounced than in the other conditions. The reduction in the fatigue crack growth rate in the RRA alloy was mainly due to the increase in roughness and partially due to the increase in spacing between the matrix precipitates.
The reduction in crack growth rate and the increase in threshold stress intensity range are expected to benefit the damage-tolerant capability of aircraft structural components under service loads.
Keywords: damage tolerance, fatigue, heat treatment, PFZ, RRA
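How a lower crack-growth coefficient translates into longer life can be sketched with the Paris law, da/dN = C(ΔK)ᵐ with ΔK = YΔσ√(πa). The constants below are illustrative placeholders, not fitted values for AA7010 in any temper:

```python
# Paris-law sketch: integrate da/dN = C * (dK)^m from an initial to a final
# crack length to compare life under two (illustrative) growth coefficients.
import math

def cycles_to_grow(a0_m, af_m, dsigma_MPa, C, m, Y=1.0, steps=10000):
    """Numerically integrate the Paris law from crack length a0 to af."""
    da = (af_m - a0_m) / steps
    N = 0.0
    a = a0_m
    for _ in range(steps):
        dK = Y * dsigma_MPa * math.sqrt(math.pi * a)   # MPa*sqrt(m)
        N += da / (C * dK ** m)
        a += da
    return N

slow = cycles_to_grow(1e-3, 10e-3, 100, C=1e-11, m=3)   # slower-growing temper
fast = cycles_to_grow(1e-3, 10e-3, 100, C=3e-11, m=3)
print(f"life ratio: {slow / fast:.1f}")                  # 3.0, mirroring the 2-3x gap reported
```

For a fixed exponent m, life scales inversely with C, which is why a 2-3x reduction in growth rate carries over directly to component life.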
Procedia PDF Downloads 154
383 The Role of Home Composting in Waste Management Cost Reduction
Authors: Nahid Hassanshahi, Ayoub Karimi-Jashni, Nasser Talebbeydokhti
Abstract:
Due to the economic and environmental benefits of producing less waste, the US Environmental Protection Agency (EPA) introduces source reduction as one of the most important means of dealing with the problems caused by increased landfilling and pollution. Waste reduction involves all waste management methods, including source reduction, recycling, and composting, which reduce the flow of waste to landfills or other disposal facilities. Source reduction of waste can be studied from two perspectives: avoiding waste production, or reducing per capita waste production, and waste diversion, which indicates the reduction of waste transferred to landfills. The present paper investigates home composting as a managerial solution for reducing waste transfer to landfills. Home composting has many benefits. The use of household waste for the production of compost results in a much smaller amount of waste being sent to landfills, which in turn reduces the costs of waste collection, transportation, and burial. Reducing the volume of waste for disposal and using it for the production of compost and plant fertilizer can help to recycle the material in a shorter time and to use it effectively in order to preserve the environment and reduce contamination. Producing compost at home requires a very small piece of land for preparation and recycling compared with other methods. The final product of home-made compost is valuable and helps to grow crops and garden plants. It is also used for modifying the soil structure and maintaining its moisture. Food waste that is transferred to landfills will spoil and produce leachate after a while; it will also release methane and other greenhouse gases. Composting these materials at home is the best way to manage degradable materials, use them efficiently, and reduce environmental pollution.
Studies have shown that the benefits from the sale of the produced compost and the reduced costs of collecting, transporting, and burying waste can well offset the costs of purchasing a home composting machine and the cost of the related training. Moreover, the process of producing home compost may become profitable within 4 to 5 years and, as a result, can play a major role in reducing waste management costs.
Keywords: compost, home compost, reducing waste, waste management
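The 4-5 year horizon quoted above is a simple payback-period calculation. The figures below are hypothetical placeholders, not the study's numbers; they only show the arithmetic:

```python
# Simple payback-period sketch for home-composting economics.
# All figures are hypothetical placeholders, not values from the study.
unit_cost = 1000.0        # purchase of a home composting machine + training
annual_saving = 220.0     # avoided collection/transport/landfill cost + compost sales

def payback_years(cost, saving_per_year):
    return cost / saving_per_year

years = payback_years(unit_cost, annual_saving)
print(f"payback in {years:.1f} years")   # ~4.5 years with these assumed figures
```

Any combination of cost and annual saving whose ratio falls between 4 and 5 reproduces the horizon the abstract reports.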
Procedia PDF Downloads 428
382 Development of a Microfluidic Device for Low-Volume Sample Lysis
Authors: Abbas Ali Husseini, Ali Mohammad Yazdani, Fatemeh Ghadiri, Alper Şişman
Abstract:
We developed a microchip device that uses surface acoustic waves for the rapid lysis of low-volume cell samples. The device incorporates sharp-edged glass microparticles for improved performance. We optimized the lysis conditions for high efficiency and evaluated the device's feasibility for point-of-care applications. The microchip contains a 13-finger-pair interdigital transducer with a 30-degree focused angle. It generates high-intensity acoustic beams that converge 6 mm away. The microchip operates at a frequency of 16 MHz, exciting Rayleigh waves with a 250 µm wavelength on the LiNbO3 substrate. Cell lysis occurs when Candida albicans cells and glass particles are placed within the focal area. The high-intensity surface acoustic waves induce centrifugal forces on the cells and glass particles, resulting in cell lysis through lateral forces from the sharp-edged glass particles. We conducted 42 pilot cell lysis experiments to optimize the surface acoustic wave-induced streaming, varying the electrical power, droplet volume, glass particle size, concentration, and lysis time. A regression machine-learning model determined the impact of each parameter on lysis efficiency. Based on these findings, we predicted the optimal conditions: an electrical signal of 2.5 W, a sample volume of 20 µl, a glass particle size below 10 µm, a concentration of 0.2 µg, and a 5-minute lysis period. Downstream analysis successfully amplified a DNA target fragment directly from the lysate. The study presents an efficient microchip-based cell lysis method employing acoustic streaming and microparticle collisions within microdroplets. Integration of the surface acoustic wave-based lysis chip with an isothermal amplification method enables swift point-of-care applications.
Keywords: cell lysis, surface acoustic wave, micro-glass particle, droplet
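The regression step that ranks parameter effects across the pilot runs can be sketched with ordinary least squares. The data below are synthetic and noise-free (efficiency generated as 0.1·power − 0.01·volume), chosen only so the fitted coefficients are known in advance; the study's 42 runs and five parameters would slot into the same machinery:

```python
# Sketch of estimating per-parameter effects with a linear model, as the
# regression step does for the pilot experiments. Synthetic noise-free data.

def fit_linear(X, y):
    """Ordinary least squares via the normal equations (Gaussian elimination)."""
    k = len(X[0])
    # Build A = X^T X and b = X^T y for design matrix X (with intercept column).
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Solve A w = b with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [arj - f * acj for arj, acj in zip(A[r], A[col])]
            b[r] -= f * b[col]
    w = [0.0] * k
    for r in reversed(range(k)):
        w[r] = (b[r] - sum(A[r][j] * w[j] for j in range(r + 1, k))) / A[r][r]
    return w

# Rows: [1 (intercept), electrical power (W), droplet volume (ul)]
X = [[1.0, p, v] for p in (1.0, 1.5, 2.0, 2.5) for v in (10.0, 20.0, 30.0)]
y = [0.1 * row[1] - 0.01 * row[2] for row in X]
intercept, w_power, w_volume = fit_linear(X, y)
# Coefficient signs and magnitudes indicate each parameter's impact on efficiency.
```

On real, noisy data the fitted coefficients carry uncertainty, so the study's ranking of parameter impacts would come with confidence bounds rather than exact values.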
Procedia PDF Downloads 79
381 Gene Expressions in Left Ventricle Heart Tissue of Rat after 150 MeV Proton Irradiation
Abstract:
Introduction: In mediastinal radiotherapy, and to a lesser extent also in total-body irradiation (TBI), radiation exposure may lead to the development of cardiac diseases. Radiation-induced heart disease is dose-dependent and is characterized by a loss of cardiac function associated with progressive degeneration of heart cells. We aimed to determine the in-vivo radiation effects on fibronectin, ColaA1, ColaA2, galectin, and TGFb1 gene expression levels in the left ventricle heart tissue of rats after irradiation. Material and method: Four non-treated adult Wistar rats were selected as the control group (group A). In group B, 4 adult Wistar rats were locally irradiated in the heart only with a single dose of 20 Gy from a 150 MeV proton beam. In the heart-plus-lung irradiated group (group C), 4 adult rats received the heart irradiation described above plus lateral irradiation of 50% of the lung. At 8 weeks after irradiation, the animals were sacrificed and the left ventricle was dropped in liquid nitrogen for RNA extraction with the Absolutely RNA® Miniprep Kit (Stratagene, Cat. no. 400800). cDNA was synthesized using M-MLV reverse transcriptase (Life Technologies, Cat. no. 28025-013). We used a Bio-Rad iQ5 Real-Time PCR machine for qPCR testing with the relative standard curve method. Results: We found that the gene expression of fibronectin in group C significantly increased compared to the control group, but it did not show a significant change in group B compared to group A. The mRNA expression levels of ColaA1 and ColaA2 did not show any significant changes between the normal and radiation groups. Galectin expression significantly increased only in group C compared to group A. TGFb1 expression showed a significant enhancement compared to group A, more in group C than in group B.
Conclusion: In summary, 20 Gy of proton exposure of heart tissue may lead to detectable damage in heart cells and may disturb their function as components of the heart tissue structure at the molecular level.
Keywords: gene expression, heart damage, proton irradiation, radiotherapy
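The relative standard curve method named above works by fitting Ct = slope·log10(quantity) + intercept on a dilution series, converting sample Ct values to quantities, and normalizing the target gene to a reference gene. The Ct values below are invented for illustration, not the study's measurements:

```python
# Sketch of qPCR relative quantification via a standard curve.
# All Ct values are invented; only the calculation is illustrated.
import math

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Ten-fold dilution series of a standard: relative quantity -> measured Ct.
dilutions = [1.0, 0.1, 0.01, 0.001]
cts = [20.1, 23.4, 26.8, 30.1]
slope, intercept = fit_line([math.log10(q) for q in dilutions], cts)
# A slope near -3.32 corresponds to ~100% amplification efficiency.

def quantity(ct):
    return 10 ** ((ct - intercept) / slope)

# Normalize the target gene to a reference gene (hypothetical sample Cts).
ratio_irr   = quantity(24.0) / quantity(18.0)   # target / reference, irradiated
ratio_ctrl  = quantity(26.5) / quantity(18.1)   # target / reference, control
fold_change = ratio_irr / ratio_ctrl            # >1 indicates up-regulation
```

Normalizing to a reference gene cancels differences in RNA input between samples, which is what makes the between-group comparisons in the Results meaningful.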
Procedia PDF Downloads 489
380 Exploring the Applications of Neural Networks in the Adaptive Learning Environment
Authors: Baladitya Swaika, Rahul Khatry
Abstract:
Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), with item selection and ability estimation carried out using the statistical methods of maximum-information selection (or selection from the posterior) and maximum-likelihood (ML)/maximum a posteriori (MAP) estimation, respectively. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and at comparing it to traditional CAT models designed using IRT. This study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementations. On creating the model and comparing, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs worse than the IRT model, the neural network model can be used beneficially in back-ends for reducing time complexity, as the IRT model has to re-calculate the ability every time it receives a request, whereas the prediction from a trained neural network regressor can be done in a single step. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets other than the normal IRT feature set and use a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features like test type, etc., could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and can be used to learn functions that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments.
This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher-quality testing.
Keywords: computer adaptive tests, item response theory, machine learning, neural networks
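The ML ability estimation that the IRT side of such a system performs can be sketched with a two-parameter logistic (2PL) model and a grid search over ability. Item parameters and response patterns below are illustrative, not calibrated values:

```python
# Minimal 2PL IRT sketch: grid-search maximum-likelihood ability estimation.
# Items are (discrimination a, difficulty b); values are illustrative.
import math

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_theta(responses, items, grid=None):
    grid = grid or [g / 100 for g in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for x, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p) if x else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

items = [(1.0, -2.0), (1.0, -1.0), (1.0, 0.0), (1.0, 1.0), (1.0, 2.0)]
weak   = mle_theta([1, 1, 0, 0, 0], items)   # misses the harder items
strong = mle_theta([1, 1, 1, 1, 0], items)   # misses only the hardest
```

A CAT repeats this estimate after every response and picks the next item at maximum information near the current θ, which is exactly the recomputation the abstract proposes to amortize with a trained regressor.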
Procedia PDF Downloads 175
379 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving
Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries
Abstract:
Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for the manufacturers of weaving machines. Current technological developments concern low energy costs, low environmental impact, high productivity, and constant product quality. The high energy consumption of the method can be ascribed to its high demand for compressed air. An energy efficiency method is applied to the air jet weaving technology. This method identifies and classifies the main relevant energy consumers and processes from the exergy point of view, and it leads to the identification of energy efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is considered the central requirement to be satisfied. The initial phase of the method consists of an analysis of the state of the art of the main weft insertion components in order to prioritize the most energy-demanding components and processes. The identified major components are investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles with the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools, such as FEM analysis, CFD simulation models, and experimental analysis, are used to design more energy-efficient versions of the components involved in the filling insertion. A new concept for the metal strip of the profiled reed is developed. The developed metal strip allows a reduction of the machine's energy consumption. Based on a parametric and aerodynamic study, the designed reed transmits a larger share of the flow power to the filling yarn.
The innovative reed thus meets both the requirement of higher energy efficiency and the constraints of the weaving process.
Keywords: air jet weaving, aerodynamic simulation, energy efficiency, experimental validation, weft insertion
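The notion that only a minor part of the jet stream accelerates the weft yarn can be made concrete with a rough energy balance: the kinetic power carried by a relay-nozzle jet versus the mechanical power actually delivered to the yarn. The sketch below uses textbook fluid mechanics with illustrative numbers, not values from the paper:

```python
import math

def jet_power(d_nozzle, v_jet, rho=1.2):
    """Kinetic power (W) carried by a round air jet.

    d_nozzle: nozzle exit diameter (m); v_jet: exit velocity (m/s);
    rho: air density (kg/m^3). The numbers used below are
    illustrative assumptions, not measurements from the abstract.
    """
    area = math.pi * (d_nozzle / 2.0) ** 2
    m_dot = rho * area * v_jet           # mass flow rate (kg/s)
    return 0.5 * m_dot * v_jet ** 2      # kinetic power P = 0.5 * m_dot * v^2

def insertion_efficiency(f_yarn, v_yarn, p_jet_total):
    """Share of total jet power converted into yarn propulsion.

    f_yarn: aerodynamic drag force on the yarn (N);
    v_yarn: yarn insertion speed (m/s);
    p_jet_total: summed kinetic power of all active jets (W).
    """
    return f_yarn * v_yarn / p_jet_total

# Illustrative case: ten 1 mm relay nozzles at 200 m/s,
# 0.5 N drag on the yarn travelling at 50 m/s.
p_total = 10 * jet_power(0.001, 200.0)
eta = insertion_efficiency(0.5, 50.0, p_total)
```

With these assumed figures the efficiency comes out well below one, which is consistent with the abstract's point that most of the stream's energy bypasses the yarn.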
Procedia PDF Downloads 197
378 Milling Simulations with a 3-DOF Flexible Planar Robot
Authors: Hoai Nam Huynh, Edouard Rivière-Lorphèvre, Olivier Verlinden
Abstract:
Manufacturing technologies have become continuously more diversified over the years. The increasing use of robots for applications such as assembling, painting, and welding has also affected the field of machining. Machining robots can deal with larger workspaces than conventional machine tools at a lower cost and thus represent a very promising alternative for machining applications. Furthermore, their inherent structure gives them great flexibility of motion to reach any location on the workpiece with the desired orientation. Nevertheless, machining robots suffer from a lack of stiffness at their joints, restricting their use to applications involving low cutting forces, especially finishing operations. Vibratory instabilities may also arise during machining, deteriorating precision and leading to scrap parts. Some researchers are therefore concerned with the identification of optimal parameters in robotic machining. This paper continues the development of a virtual robotic machining simulator aimed at finding optimized cutting parameters, for example in terms of depth of cut or feed per tooth. The simulation environment combines an in-house milling routine (DyStaMill), which computes the cutting forces and material removal, with an in-house multibody library (EasyDyn), which is used to build a dynamic model of a 3-DOF planar robot with flexible links. The position of the robot end-effector subjected to milling forces is controlled through an inverse kinematics scheme, with the position of each joint controlled separately. Each joint is actuated by a servomotor whose transfer function has been computed in order to tune the corresponding controller. The output results show the evolution of the cutting forces with rigid and with flexible links, as well as the tracking errors of the end-effector. Illustrations of the resulting machined surfaces are also presented.
The consideration of link flexibility highlighted an increase in the magnitude of the cutting forces. This proof of concept aims to enrich the database of results in robotic machining for potential improvements in production.
Keywords: control, milling, multibody, robotic, simulation
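An inverse kinematics scheme for a 3-DOF planar arm, as used to steer the end-effector in this abstract, can be sketched with the standard closed-form solution. The link lengths, function names, and elbow convention below are illustrative assumptions, not taken from the authors' model; forward kinematics is included only to check the result:

```python
import math

def ik_3dof_planar(x, y, phi, L1=0.4, L2=0.3, L3=0.1, elbow_up=False):
    """Closed-form inverse kinematics of a 3-link planar arm.

    Given a desired end-effector pose (x, y, phi), return the joint
    angles (q1, q2, q3). Link lengths are illustrative values.
    """
    # Wrist position: step back from the tool tip along its orientation.
    xw = x - L3 * math.cos(phi)
    yw = y - L3 * math.sin(phi)
    # Law of cosines gives the elbow angle.
    c2 = (xw**2 + yw**2 - L1**2 - L2**2) / (2.0 * L1 * L2)
    if abs(c2) > 1.0:
        raise ValueError("pose out of reach")
    s2 = math.sqrt(1.0 - c2**2)
    if elbow_up:
        s2 = -s2  # pick the other of the two elbow configurations
    q2 = math.atan2(s2, c2)
    q1 = math.atan2(yw, xw) - math.atan2(L2 * s2, L1 + L2 * c2)
    q3 = phi - q1 - q2  # last joint absorbs the remaining orientation
    return q1, q2, q3

def fk_3dof_planar(q1, q2, q3, L1=0.4, L2=0.3, L3=0.1):
    """Forward kinematics, used here only to verify the IK solution."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2) + L3 * math.cos(q1 + q2 + q3)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2) + L3 * math.sin(q1 + q2 + q3)
    return x, y, q1 + q2 + q3
```

In a control loop of the kind described, each commanded pose would be mapped to joint targets this way, with a per-joint servo controller tracking them separately.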
Procedia PDF Downloads 249