Search results for: machine capacity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6697

6127 Application of Strength Criteria for Cellular Pressure Vessels

Authors: Antanas Žiliukas, Mindaugas Kukis

Abstract:

The work deals with cellular pressure vessels subjected to internal pressure. Their cellular insert can hold the liquids or gases needed to carry out technological processes, while the vessel itself provides good bearing capacity. Numerical calculations of three core structures, which quantify the influence of the inner cylinder thickness on the maximum bearing capacity, are presented. The calculations are compared using different strength criteria, which indicate different levels of strength safety.

Keywords: pressure, strength criterion, sandwich plate, cellular vessel

Procedia PDF Downloads 291
6126 A Combined Meta-Heuristic with Hyper-Heuristic Approach to Single Machine Production Scheduling Problem

Authors: C. E. Nugraheni, L. Abednego

Abstract:

This paper is concerned with minimizing mean tardiness and mean flow time in a real single-machine production scheduling problem. Two variants of a genetic algorithm, used as meta-heuristics and combined with a hyper-heuristic approach, are proposed to solve this problem. These methods are applied to instances generated from real-world data provided by a company, and encouraging results are reported.
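
To make the two objectives concrete, the following minimal Python sketch (not from the paper; the job data are hypothetical) shows how mean tardiness and mean flow time would be evaluated for a candidate job sequence, i.e., the fitness such a genetic algorithm would minimize.

```python
# Illustrative sketch: evaluating a single-machine sequence on the two
# objectives named in the abstract. Job data are hypothetical placeholders.

def evaluate_sequence(jobs):
    """jobs: list of (processing_time, due_date) in the chosen processing order."""
    t, tardiness, flow_time = 0.0, [], []
    for p, d in jobs:
        t += p                      # completion time of this job
        flow_time.append(t)         # flow time = completion time (all jobs released at 0)
        tardiness.append(max(0.0, t - d))
    n = len(jobs)
    return sum(tardiness) / n, sum(flow_time) / n

mean_tardiness, mean_flow = evaluate_sequence([(4, 6), (2, 5), (7, 20)])
print(mean_tardiness, mean_flow)
```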

Keywords: hyper-heuristics, evolutionary algorithms, production scheduling, meta-heuristic

Procedia PDF Downloads 367
6125 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications

Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon

Abstract:

The focus in the automotive industry is to reduce interaction between human operators and machines, so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process also described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent that is already impregnated onto the tape, activated through the application of heat. The stitching method is used as a baseline to compare the new splicing methods to the traditional technique currently in use. As the methods are intended for a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, a limit dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters; these were then tested in tension using a tensile testing machine. Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique, and examined the effect of temperature and overlap on the strength of the splice. The optimum splicing temperature was found to be at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm, with no improvement in bond strength observed when the overlap was increased from 25 mm to 30 mm. The final analysis compared the different splicing methods to the baseline of a stitched bond. The addition of an adhesive was found to be the best splicing method, achieving a maximum load of over 500 N compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.

Keywords: analysis, automated fibre placement, high speed, splicing

Procedia PDF Downloads 135
6124 PaSA: A Dataset for Patent Sentiment Analysis to Highlight Patent Paragraphs

Authors: Renukswamy Chikkamath, Vishvapalsinhji Ramsinh Parmar, Christoph Hewel, Markus Endres

Abstract:

Given a patent document, identifying distinct semantic annotations is an interesting research problem. Text annotation helps patent practitioners such as examiners and patent attorneys to quickly identify the key arguments of an invention, thereby providing timely marking of the patent text. In manual patent analysis, semantic information is commonly marked at the paragraph level to improve readability, but this annotation process is laborious and time-consuming. To alleviate this problem, we propose a dataset for training machine learning algorithms to automate the highlighting process. The contributions of this work are: i) a multi-class dataset of 150k samples developed by traversing USPTO patents over a decade, ii) statistics and distributions of the data articulated through exploratory data analysis, iii) baseline machine learning models developed on the dataset to address the patent paragraph highlighting task, and iv) a future path for extending this work with deep learning and domain-specific pre-trained language models to develop a highlighting tool. This work assists patent practitioners in highlighting semantic information automatically and aids in creating a sustainable and efficient patent analysis process using machine learning.
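
As an illustration of the kind of baseline the abstract refers to (the authors' actual models and features are not specified here), a minimal multi-class paragraph classifier could be sketched in Python with scikit-learn as follows; the example texts and class labels are hypothetical.

```python
# Minimal sketch of a paragraph classifier, assuming TF-IDF features and
# logistic regression as one possible baseline; the data below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

paragraphs = [
    "The invention relates to a battery management circuit ...",
    "In one embodiment, the sensor output is filtered ...",
]
labels = ["summary", "embodiment"]   # hypothetical semantic classes

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                      LogisticRegression(max_iter=1000))
model.fit(paragraphs, labels)
print(model.predict(["The filtered sensor output is then amplified ..."]))
```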

Keywords: machine learning, patents, patent sentiment analysis, patent information retrieval

Procedia PDF Downloads 72
6123 Simulation-Based Validation of Safe Human-Robot-Collaboration

Authors: Titanilla Komenda

Abstract:

Human-machine collaboration is defined as a direct interaction between humans and machines to fulfil specific tasks. These so-called collaborative machines are used without fencing and interact with humans in predefined workspaces. Even though human-machine collaboration enables flexible adaptation to variable degrees of freedom, industrial applications are rarely found. The reason is not a lack of technical progress but rather limitations in the planning processes that must ensure safety for operators. Until now, humans and machines have mainly been considered separately in the planning process, focusing on ergonomics and system performance, respectively. Within human-machine collaboration, these aspects must not be seen in isolation from each other but need to be analysed in interaction. Furthermore, a simulation model is needed that can validate the system performance and ensure the safety of the operator at any given time. Following on from this, a holistic simulation model is presented, enabling a simulative representation of collaborative tasks, including both humans and machines. The presented model includes not only a geometry and a motion model of interacting humans and machines but also a numerical behaviour model of humans as well as a Boolean probabilistic sensor model. With this, error scenarios can be simulated to validate system behaviour in unplanned situations. As these models can be defined on the basis of Failure Mode and Effects Analysis as well as error probabilities, their implementation in a collaborative model is discussed and evaluated with regard to limitations and simulation times. The functionality of the model is demonstrated on industrial applications by comparing simulation results with video data. The analysis shows the impact of considering human factors in the planning process, in contrast to meeting system performance alone. In this sense, an optimisation function is presented that addresses the trade-off between human and machine factors and aids a successful and safe realisation of collaborative scenarios.

Keywords: human-machine-system, human-robot-collaboration, safety, simulation

Procedia PDF Downloads 346
6122 Adsorption of Peppermint Essential Oil by Polypropylene Nanofiber

Authors: Duduku Krishnaiah, S. M. Anisuzzaman, Kumaran Govindaraj, Chiam Chel Ken, Zykamilia Kamin

Abstract:

Pure essential oil is in high demand, since most of the so-called pure essential oils on the market contain alcohol, a consequence of using alcohol to separate the oil and water mixture. Removing pure essential oil from water without using any chemical solvent is therefore a challenging issue. Adsorbents generally have the ability to separate hydrophobic oil from a hydrophilic mixture. Polypropylene nanofiber, a thermoplastic polymer produced from propylene, was used as the adsorbent in this study. It was found that the polypropylene nanofiber was able to adsorb peppermint oil from the aqueous solution over a wide range of concentrations. Scanning electron microscopy (SEM) showed that the fibres had a very small average diameter before adsorption and a larger average diameter after adsorption, indicating that a smaller fibre diameter enhances the adsorption process. The adsorption capacity of peppermint oil increases as the initial concentration of peppermint oil and the amount of polypropylene nanofiber increase. The maximum adsorption capacity of the polypropylene nanofiber was found to be 689.5 mg/g at T = 30°C. Moreover, the adsorption capacity of peppermint oil decreases as the temperature of the solution increases. The equilibrium data of the polypropylene nanofiber are best represented by the Freundlich isotherm, with a maximum adsorption capacity of 689.5 mg/g, and the adsorption kinetics are best represented by the pseudo-second-order model.
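
For reference, the two models named in the abstract can be fitted to equilibrium and kinetic data as in the Python sketch below; this is illustrative only, and the numerical data are placeholders, not the study's measurements.

```python
# Sketch: fitting the Freundlich isotherm q_e = K_f * C_e**(1/n) and the
# pseudo-second-order kinetic model q_t = (k2*qe**2*t)/(1 + k2*qe*t).
# All data arrays below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(Ce, Kf, n):
    return Kf * Ce ** (1.0 / n)

def pseudo_second_order(t, qe, k2):
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

Ce = np.array([10., 25., 50., 100., 200.])      # equilibrium concentration, mg/L
qe = np.array([120., 210., 330., 480., 660.])   # adsorbed amount at equilibrium, mg/g
(Kf, n), _ = curve_fit(freundlich, Ce, qe, p0=[10.0, 2.0])

t  = np.array([5., 10., 20., 40., 80.])         # contact time, min
qt = np.array([200., 340., 480., 580., 650.])   # uptake over time, mg/g
(qe_fit, k2), _ = curve_fit(pseudo_second_order, t, qt, p0=[700.0, 1e-4])

print(Kf, n, qe_fit, k2)
```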

Keywords: nanofiber, adsorption, peppermint essential oil, isotherms, adsorption kinetics

Procedia PDF Downloads 137
6121 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network

Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson

Abstract:

The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods for analysing the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come with a cost. Little research has been undertaken on how to specify optimally what data to capture, transmit, process, and store at the various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the attributes and classification required to take manufacturing digital data from various sources and determine the most suitable location for processing on the edge-cloud network. The proposed classification framework minimises overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decision making on which datasets should be processed at the 'edge' and which should be sent to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
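
A fast-and-frugal heuristic of this kind is essentially a short, ordered sequence of single-attribute checks. The Python sketch below illustrates the idea for edge-versus-cloud routing; the attribute names and cut-offs are hypothetical assumptions, not the paper's actual classification rules.

```python
# Illustrative fast-and-frugal decision sequence for routing an IIoT dataset.
# Cues and thresholds are assumptions made for this sketch only.

def route_dataset(latency_critical: bool, size_mb: float, needs_history: bool) -> str:
    if latency_critical:          # first cue: real-time control/monitoring data stays local
        return "edge"
    if size_mb > 500:             # second cue: large raw streams are costly to transmit
        return "edge"
    if needs_history:             # third cue: long-term analytics need archived context
        return "cloud"
    return "cloud"                # default: offload to the remote server

print(route_dataset(latency_critical=False, size_mb=1200, needs_history=True))  # -> edge
```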

Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0

Procedia PDF Downloads 160
6120 The Effects of COVID-19 on the Energy Trends and Production Capacity of Turkish Cement Industry

Authors: Adem Atmaca

Abstract:

More than 500 million COVID-19 cases had been recorded worldwide by February 2022, and Turkey, with around twenty million cases, is among the most affected countries. The cement industry in Turkey ranks among the most energy-intensive sectors, has huge production capacities, and places the country among the biggest exporters. The purpose of this paper is to clarify the effects of the pandemic on the cement industry in Turkey by showing the changes in manufacturing capacities and export rates of all facilities in the country. The investigation reveals that the epidemic had only slight effects on factory production capacities and export rates. Even though the capacity usage rates of the factories decreased dramatically in 2019, Turkish cement companies appear to have turned the pandemic to their advantage by gradually increasing their production capacities, capacity usage rates and export rates and by reaching new markets during the pandemic.

Keywords: energy, emissions, cement industry, COVID-19

Procedia PDF Downloads 106
6119 Numerical Study of Modulus of Subgrade Reaction in Eccentrically Loaded Circular Footing Resting on Sand

Authors: Seyed Abolhasan Naeini, Mohammad Hossein Zade

Abstract:

This article presents a numerical study of the behaviour of an eccentrically loaded circular footing resting on sand, with the aim of determining its ultimate bearing capacity. A surface circular footing of diameter D = 12 cm was used as the shallow foundation. For this purpose, three-dimensional models consisting of the foundation and a medium sandy soil were built in the ABAQUS software. The bearing capacity of the footing was evaluated, and the effects of load eccentricity on the bearing capacity, the settlement, and the modulus of subgrade reaction were studied. Three equally spaced values of load eccentricity, lying inside the core, on the core boundary, and outside the core boundary (e = 0.75, 1.5, and 2.25 cm, respectively), were considered. The results show that, with increasing load eccentricity, both the ultimate load and the modulus of subgrade reaction decrease.
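
For context, the modulus of subgrade reaction is commonly taken as the contact pressure divided by the corresponding settlement, k_s = q / s. A small Python sketch of that calculation for a footing of the size described above is given below; the load and settlement values are hypothetical, not results from the study.

```python
# Sketch: modulus of subgrade reaction k_s = q / s for a circular footing.
# Load and settlement values are placeholders for illustration.
import math

D = 0.12                      # footing diameter, m
area = math.pi * D**2 / 4.0   # contact area, m^2

load_kN = 1.8                 # applied vertical load, kN (hypothetical)
settlement_m = 0.004          # corresponding settlement, m (hypothetical)

q = load_kN / area            # average contact pressure, kPa
k_s = q / settlement_m        # modulus of subgrade reaction, kPa/m (i.e. kN/m^3)
print(round(q, 1), round(k_s, 1))
```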

Keywords: circular foundation, sand, eccentric loading, modulus of subgrade reaction

Procedia PDF Downloads 331
6118 Development of PM2.5 Forecasting System in Seoul, South Korea Using Chemical Transport Modeling and ConvLSTM-DNN

Authors: Ji-Seok Koo, Hee‑Yong Kwon, Hui-Young Yun, Kyung-Hui Wang, Youn-Seo Koo

Abstract:

This paper presents a forecasting system for PM2.5 levels in Seoul, South Korea, leveraging a combination of chemical transport modeling and ConvLSTM-DNN machine learning technology. Exposure to PM2.5 has known detrimental impacts on public health, making its prediction crucial for establishing preventive measures. Existing forecasting models, like the Community Multiscale Air Quality (CMAQ) and Weather Research and Forecasting (WRF), are hindered by their reliance on uncertain input data, such as anthropogenic emissions and meteorological patterns, as well as certain intrinsic model limitations. The system we've developed specifically addresses these issues by integrating machine learning and using carefully selected input features that account for local and distant sources of PM2.5. In South Korea, the PM2.5 concentration is greatly influenced by both local emissions and long-range transport from China, and our model effectively captures these spatial and temporal dynamics. Our PM2.5 prediction system combines the strengths of advanced hybrid machine learning algorithms, convLSTM and DNN, to improve upon the limitations of the traditional CMAQ model. Data used in the system include forecasted information from CMAQ and WRF models, along with actual PM2.5 concentration and weather variable data from monitoring stations in China and South Korea. The system was implemented specifically for Seoul's PM2.5 forecasting.

Keywords: PM2.5 forecast, machine learning, convLSTM, DNN

Procedia PDF Downloads 41
6117 Polyampholytic Resins: Advances in Ion Exchanging Properties

Authors: N. P. G. N. Chandrasekara, R. M. Pashley

Abstract:

Ion exchange (IEX) resins are commonly available as cationic or anionic resins but not as polyampholytic resins. This is probably because sequential acid and base washing cannot completely regenerate polyampholytic resins, which have chemically attached anionic and cationic groups in close proximity. The 'Sirotherm' process, developed by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Melbourne, Australia, was originally based on the use of a physical mixture of weakly basic (WB) and weakly acidic (WA) ion-exchange resin beads. These resins were regenerated thermally: salt sorbed from an aqueous solution at ambient temperature could be released at higher temperatures, because the sorption capacity is reduced significantly with increasing temperature. A new process for the efficient regeneration of mixed-bead resins using ammonium bicarbonate with heat was studied recently, and this chemical/thermal regeneration technique is capable of completely regenerating polyampholytic resins. Even so, the low IEX capacities of polyampholytic resins restrict their commercial applications. Recently, we have established another novel process for increasing the IEX capacity of a typical polyampholytic resin. In this paper, we discuss the chemical/thermal regeneration of a polyampholytic (WA/WB) resin and a novel process for enhancing its ion exchange capacity by increasing its internal pore area. We also show how effective this method is for complete, recycled regeneration, with the potential of substantially reducing chemical waste.

Keywords: capacity, ion exchange, polyampholytic resin, regeneration

Procedia PDF Downloads 366
6116 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from being clinician-oriented to technology-oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out using the standard metrics Precision (P), Recall (R), F1-score and Accuracy. The experimental results show that ANN achieved the highest accuracy (99.4%) when tested on the breast cancer dataset. On the other hand, when the ML classifiers were tested on the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) than the other classifiers.
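
As an illustration of the evaluation protocol described above (not the authors' code; scikit-learn's bundled breast cancer data stand in for the datasets used in the paper), several classifiers can be compared on precision, recall, F1-score and accuracy as follows.

```python
# Sketch: comparing classifiers on precision, recall, F1 and accuracy.
# Uses scikit-learn's bundled breast cancer data purely as a stand-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "Bagged Tree": BaggingClassifier(random_state=0),
}

for name, clf in classifiers.items():
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          round(precision_score(y_te, y_pred), 3),
          round(recall_score(y_te, y_pred), 3),
          round(f1_score(y_te, y_pred), 3),
          round(accuracy_score(y_te, y_pred), 3))
```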

Keywords: artificial neural networks, breast cancer, classifiers, cervical cancer, f-score, machine learning, precision, recall

Procedia PDF Downloads 262
6115 Experimental Study of CO2 Absorption in Different Blend Solutions as Solvent for CO2 Capture

Authors: Rouzbeh Ramezani, Renzo Di Felice

Abstract:

Nowadays, the removal of CO2, one of the major contributors to global warming, using alternative solvents with high CO2 absorption efficiency is an important industrial operation. In this study, three amines (2-methylpiperazine, potassium sarcosinate and potassium lysinate) were added as potential additives to a potassium carbonate solution used as the base solvent for CO2 capture. In order to study the absorption performance in terms of CO2 loading capacity and absorption rate, absorption experiments with blends of these additives and potassium carbonate were carried out in a vapor-liquid equilibrium apparatus at a temperature of 313.15 K, CO2 partial pressures ranging from 0 to 50 kPa, and mole fractions of 0.2, 0.3, and 0.4. Furthermore, the CO2 absorption performance of these blend solutions was compared with that of pure monoethanolamine and pure potassium carbonate. Finally, a correlation with good accuracy was developed using nonlinear regression analysis in order to predict the CO2 loading capacity.
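
By way of illustration, a loading-capacity correlation of the kind mentioned at the end of the abstract could be obtained by nonlinear regression as in the sketch below; the functional form and the data points are assumptions made for this example, not those reported in the study.

```python
# Sketch: fitting an assumed correlation alpha = a * P**b * x**c for CO2 loading
# as a function of partial pressure P (kPa) and additive mole fraction x.
# Both the data and the functional form are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def loading(X, a, b, c):
    P, x = X
    return a * P**b * x**c

P = np.array([5., 10., 20., 30., 40., 50., 10., 20., 40.])
x = np.array([0.2, 0.2, 0.2, 0.3, 0.3, 0.3, 0.4, 0.4, 0.4])
alpha = np.array([0.21, 0.28, 0.35, 0.45, 0.50, 0.54, 0.36, 0.46, 0.58])

(a, b, c), _ = curve_fit(loading, (P, x), alpha, p0=[0.1, 0.3, 0.3])
print(a, b, c, loading((np.array([25.]), np.array([0.3])), a, b, c))
```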

Keywords: absorption rate, carbon dioxide, CO2 capture, global warming, loading capacity

Procedia PDF Downloads 263
6114 Oil Logistics for Refining to Northern Europe

Authors: Vladimir Klepikov

Abstract:

To develop programs for supplying crude oil to North European refineries, it is necessary to take into account the refineries' locations, their crude refining capacities, and the capacity of the transport infrastructure. Among the countries of the region we include those having a sea boundary along the North Sea or the Baltic Sea (from France in the west to Finland in the east). The paper maps the geographic allocation of the refineries and evaluates their capacities for the region under review. The sustainable operation of refineries in the region is determined by the capacity of the transportation system to supply crude oil to them, and this capacity is assessed. The research covers the period 2005 to 2015 and uses the quantitative analysis method. The countries are classified by the aggregate capacities of their refineries and by the crude oil output on their territory. The crude oil output capacities in the region over the period under review are determined, and the capacities of the region's transportation system to deliver regionally produced crude oil to the refineries are revealed. The analysis suggests that imported raw materials are the main source of oil for the refineries in the region, and the main sources of crude oil supplies to North European refineries are reviewed. The change in refinery capacities in the group of countries and in each particular country, as well as the utilization of refinery capacities in the region over the period under review, was studied. The results suggest that the bulk of crude oil is supplied by marine and pipeline transport, and the paper assesses the share of pipeline transport in the overall crude oil cargo flow. The refineries' production rates for the groups of countries under review and for each particular country were also studied. Our study identified a trend towards an increase in crude oil refining at the refineries of the region and a reduction in regional crude oil output. If this trend persists in the near future, the cargo flow of imported crude oil and the utilization of the North European logistics infrastructure may increase. According to the study, the existing transport infrastructure in the region is able to handle the increasing flow of imported crude oil.

Keywords: European region, infrastructure, oil terminal capacity, pipeline capacity, tanker draft

Procedia PDF Downloads 155
6113 Schedule a New Production Plan by Heuristic Methods

Authors: Hanife Merve Öztürk, Sıdıka Dalgan

Abstract:

In this project, a capacity analysis study was carried out at the TAT A. Ş. Maret plant. The production capacities of the products that generate 80% of the sales volume were determined. The obtained data were entered into the LEKIN scheduling program, and production schedules were generated using heuristic methods. In addition to the heuristic methods, a disjunctive programming formulation was adapted as a mathematical model to flexible job shop problems by adding a new constraint, in order to find the optimal schedule.

Keywords: scheduling, flexible job shop problem, shifting bottleneck heuristic, mathematical modelling

Procedia PDF Downloads 380
6112 Predicting the Compressive Strength of Geopolymer Concrete Using Machine Learning Algorithms: Impact of Chemical Composition and Curing Conditions

Authors: Aya Belal, Ahmed Maher Eltair, Maggie Ahmed Mashaly

Abstract:

Geopolymer concrete is gaining recognition as a sustainable alternative to conventional Portland cement concrete due to its environmentally friendly nature, which aligns with a key goal of Smart City initiatives, and it has demonstrated its potential as a reliable material for the design of structural elements. However, the production of geopolymer concrete is hindered by batch-to-batch variations, which presents a significant challenge to its widespread adoption. Machine learning has had a profound impact on various fields by enabling models to learn from large datasets and predict outputs accurately. This paper proposes an integration between the current drive towards Artificial Intelligence and the composition of geopolymer mixtures in order to predict their mechanical properties. The study uses Python to develop a machine learning model, specifically decision trees. The percentage oxides and the chemical composition of the alkali solution, along with the curing conditions, are used as the independent input parameters, irrespective of the waste products used in the mixture, and the compressive strength of the mix is the output parameter. The results showed 90% agreement between the predicted and actual values, with the ratio of the sodium silicate to the sodium hydroxide solution being the dominant parameter in the mixture.
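
A minimal sketch of such a decision-tree model in Python with scikit-learn is given below; the feature columns and the few training rows are hypothetical placeholders, not the study's dataset.

```python
# Sketch: decision-tree regression of compressive strength from mix parameters.
# Feature columns and values are placeholders for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# columns: SiO2 %, Al2O3 %, Na2SiO3/NaOH ratio, NaOH molarity, curing temp (°C)
X = np.array([
    [55.0, 24.0, 2.0, 12.0, 60.0],
    [52.0, 26.0, 2.5, 14.0, 80.0],
    [58.0, 22.0, 1.5, 10.0, 25.0],
    [54.0, 25.0, 2.5, 12.0, 60.0],
])
y = np.array([42.0, 55.0, 30.0, 48.0])   # compressive strength, MPa (hypothetical)

model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(model.predict([[55.0, 24.0, 2.5, 12.0, 70.0]]))
print(dict(zip(["SiO2", "Al2O3", "SS/SH", "molarity", "cure_T"],
               model.feature_importances_.round(2))))
```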

Keywords: decision trees, geopolymer concrete, machine learning, smart cities, sustainability

Procedia PDF Downloads 64
6111 Machine Learning Based Gender Identification of Authors of Entry Programs

Authors: Go Woon Kwak, Siyoung Jun, Soyun Maeng, Haeyoung Lee

Abstract:

Entry is an educational platform used in South Korea, created to help students learn to code while playing. Using the online version of Entry, teachers can easily assign programming homework, and students can build programs simply by linking programming blocks. However, the programs may be written by others, so the authors of the programs need to be identified. In this paper, as a first step toward author identification of Entry programs, we present an artificial neural network based classification approach to identify the gender of the author of a program written in Entry. A neural network has been trained on labeled training data that we collected. Our results in progress, although preliminary, show that the proposed approach could feasibly be applied to the online version of Entry for gender identification of authors. As future work, we will use a machine learning technique for age identification of Entry programs, which would be the second step toward author identification.

Keywords: artificial intelligence, author identification, deep neural network, gender identification, machine learning

Procedia PDF Downloads 301
6110 Navigating Government Finance Statistics: Effortless Retrieval and Comparative Analysis through Data Science and Machine Learning

Authors: Kwaku Damoah

Abstract:

This paper presents a methodology and software application (app) designed to empower users to access, retrieve, and comparatively explore data within the hierarchical network framework of the Government Finance Statistics (GFS) system. It examines the difficulty of navigating the GFS system and identifies the gaps filled by the new methodology and app. The GFS embodies a complex Hierarchical Network Classification (HNC) structure, encapsulating institutional units, revenues, expenses, assets, liabilities, and economic activities. Navigating this structure demands specialized knowledge, experience, and skill, posing a significant challenge for effective analytics and fiscal policy decision-making. Many professionals encounter difficulties deciphering these classifications, which prevents confident use of the system. This accessibility barrier keeps a vast number of professionals, students, policymakers, and members of the public from leveraging the abundant data and information within the GFS. Leveraging the R programming language, data science analytics and machine learning, an efficient methodology was developed that enables users to access, navigate, and conduct exploratory comparisons. The machine learning Fiscal Analytics App (FLOWZZ) democratizes access to advanced analytics through its user-friendly interface, breaking down expertise barriers.

Keywords: data science, data wrangling, drilldown analytics, government finance statistics, hierarchical network classification, machine learning, web application

Procedia PDF Downloads 48
6109 A Study on the Correlation Analysis between the Pre-Sale Competition Rate and the Apartment Unit Plan Factor through Machine Learning

Authors: Seongjun Kim, Jinwooung Kim, Sung-Ah Kim

Abstract:

The development of information and communication technology also affects human cognition and thinking, and in the field of design in particular, new techniques are being tried. In architecture, new design methodologies such as machine learning or data-driven design are being applied, especially in analysing the factors related to the value of real estate or in analysing feasibility at the early planning stage of apartment housing. However, since the value of apartment buildings is often determined by external factors such as location and traffic conditions rather than by the interior elements of the buildings, data are rarely used in the design process. Therefore, although the technical conditions are in place, it is difficult to apply data-driven design to the internal elements of the apartment during the design process. As a result, designers of apartment housing have been forced to rely on designer experience or modular design alternatives rather than data-driven design at the design stage, resulting in a uniform arrangement of space in apartment housing. The purpose of this study is to propose a methodology that supports designers in producing apartment unit plans with high consumer preference, by deriving through machine learning the correlation and importance of the floor plan elements preferred by consumers and reflecting this information in the early design process. Data on the pre-sale competition rate and the elements of the floor plan are collected, and the correlation between the pre-sale competition rate and the independent variables is analysed through machine learning. This analytical model can be used to review the apartment unit plan produced by the designer and to assist the designer. The trained model can thus provide feedback on apartment unit plans during floor plan design, making it possible to produce floor plans of apartment housing with high consumer preference.

Keywords: apartment unit plan, data-driven design, design methodology, machine learning

Procedia PDF Downloads 244
6108 Exploiting JPEG2000 into Reversible Information Hiding

Authors: Te-Jen Chang, I-Hui Pan, Kuang-Hsiung Tan, Shan-Jen Cheng, Chien-Wu Lan, Chih-Chan Hu

Abstract:

With the advent of the multimedia age, information hiding technologies have been proposed to protect data from being tampered with, damaged, or faked. Information hiding means that important secret information is hidden in a cover multimedia object, producing a camouflaged medium. This camouflaged medium has the characteristic of natural protection, so the important secret information can be transmitted without arousing suspicion. A reversible information hiding technology with high capacity is proposed in this paper, using gray images as the cover media. We compress the gray images and compare them with the original images to produce the estimated differences. Expansion-based information hiding is then applied to these estimated differences, and a higher information capacity can be achieved. According to the experimental results, the proposed technology is validated: both the overall information payload capacity and the image quality are satisfactory.

Keywords: cover media, camouflaged media, reversible information hiding, gray image

Procedia PDF Downloads 314
6107 Occupational Heat Stress Condition According to Wet Bulb Globe Temperature Index in Textile Processing Unit: A Case Study of Surat, Gujarat, India

Authors: Dharmendra Jariwala, Robin Christian

Abstract:

Thermal exposure is a common problem in every manufacturing industry where heat is used in the manufacturing process. In developing countries like India, a lack of awareness regarding proper work environment conditions is observed among workers. Improper planning of the factory building, the arrangement of machinery, the ventilation system, etc., plays a vital role in the rise of temperature within manufacturing areas. Due to uncontrolled thermal stress, workers may be subjected to various heat illnesses, from mild disorders to heat stroke. Heat stress is responsible for health risks and for reductions in production. The Wet Bulb Globe Temperature (WBGT) index and relative humidity are used to evaluate heat stress conditions. The WBGT index is a weighted average of the natural wet bulb temperature, globe temperature, and dry bulb temperature, which are measured with a standard instrument, the QuestTemp 36 area heat stress monitor. In this study, textile processing units located in an industrial estate of Surat city were selected. Based on the manufacturing process, five heat-generating locations were identified within the plant where processes run at 120°C to 180°C: the jet dyeing machine area, the stenter machine area, the printing machine area, the looping machine area, and the washing area. The office area was also selected, as a sixth location, for comparison. The study was conducted in the winter and summer seasons for both day and night shifts. The results show that the average WBGT index was above the Threshold Limit Value (TLV) during the summer season for both day and night shifts in all three units, except in the office area. During the summer season, the highest WBGT index of 32.8°C was found during the day shift and 31.5°C during the night shift, both at the printing machine area. During the winter season, the highest WBGT indices of 30°C and 29.5°C were found at the printing machine area during the day and night shifts, respectively.
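
For reference, the indoor (no solar load) form of the WBGT index commonly used for such surveys is WBGT = 0.7·Tnwb + 0.3·Tg, and the outdoor form with solar load is WBGT = 0.7·Tnwb + 0.2·Tg + 0.1·Tdb. A small Python sketch of this weighted average is shown below; the temperature readings are hypothetical.

```python
# Sketch: WBGT index as a weighted average of natural wet bulb (t_nwb),
# globe (t_g) and dry bulb (t_db) temperatures. Readings are placeholders.

def wbgt_indoor(t_nwb: float, t_g: float) -> float:
    """Indoor / no-solar-load formulation."""
    return 0.7 * t_nwb + 0.3 * t_g

def wbgt_outdoor(t_nwb: float, t_g: float, t_db: float) -> float:
    """Outdoor formulation with solar load."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

print(wbgt_indoor(29.5, 40.0))          # e.g. near a printing machine
print(wbgt_outdoor(28.0, 45.0, 36.0))   # e.g. an open yard in summer
```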

Keywords: relative humidity, textile industry, thermal stress, WBGT

Procedia PDF Downloads 158
6106 Air Cargo Overbooking Model under Stochastic Weight and Volume Cancellation

Authors: Naragain Phumchusri, Krisada Roekdethawesab, Manoj Lohatepanont

Abstract:

Overbooking is the practice of selling more goods or services than the available capacity, because sellers anticipate that some buyers will not show up or may cancel their bookings. At present, many airlines deploy an overbooking strategy in order to deal with the uncertainty of their customers. In particular, some airlines sell more cargo capacity to freight forwarders than they actually have available, in the belief that some of them will cancel later. In this paper, we propose methods to find the optimal overbooking levels of volume and weight for air cargo in order to minimize the total cost, comprising the cost of spoilage and the cost of offloading. Cancellations of volume and weight are jointly random variables with a known joint distribution. Heuristic approaches based on the idea of weight and volume independence are considered to find an appropriate solution to the full problem. Computational experiments are used to explore the performance of the approaches presented in this paper, compared to a naïve method under different scenarios.
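
As a simple illustration of the trade-off described above, the sketch below estimates, by Monte Carlo simulation, the expected spoilage and offloading costs of candidate weight overbooking levels under an assumed cancellation distribution; all numbers and the cost structure are hypothetical and not the model formulated in the paper.

```python
# Sketch: expected total cost (spoilage + offload) of a weight overbooking level.
# The cancellation distribution and unit costs are illustrative assumptions.
import random

CAPACITY = 20_000.0        # available weight capacity, kg
C_SPOIL = 1.2              # cost per kg of capacity that flies empty
C_OFFLOAD = 3.0            # cost per kg of accepted cargo that must be offloaded

def expected_cost(booking_level: float, n_sims: int = 20_000) -> float:
    total = 0.0
    for _ in range(n_sims):
        cancel_rate = random.betavariate(2, 8)        # ~20% cancellations on average
        shows = booking_level * (1.0 - cancel_rate)   # tendered weight at departure
        spoilage = max(0.0, CAPACITY - shows)
        offload = max(0.0, shows - CAPACITY)
        total += C_SPOIL * spoilage + C_OFFLOAD * offload
    return total / n_sims

best = min(range(20_000, 30_001, 500), key=lambda b: expected_cost(float(b)))
print(best, round(expected_cost(float(best)), 1))
```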

Keywords: air cargo overbooking, offloading capacity, optimal overbooking level, revenue management, spoilage capacity

Procedia PDF Downloads 307
6105 An Assessment of Floodplain Vegetation Response to Groundwater Changes Using the Soil & Water Assessment Tool Hydrological Model, Geographic Information System, and Machine Learning in the Southeast Australian River Basin

Authors: Newton Muhury, Armando A. Apan, Tek N. Marasani, Gebiaw T. Ayele

Abstract:

The changing climate has degraded freshwater availability in Australia, influencing vegetation growth to a great extent. This study assessed vegetation responses to groundwater using the Normalised Difference Vegetation Index (NDVI) from Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) and soil water content (SWC). A hydrological model, SWAT, was set up in a southeast Australian river catchment for groundwater analysis. The model was calibrated and validated against monthly streamflow from 2001 to 2006 and from 2007 to 2010, respectively. The SWAT-simulated soil water content for 43 sub-basins and monthly MODIS NDVI data for three different vegetation types (forest, shrub, and grass) were applied in the machine learning tool Waikato Environment for Knowledge Analysis (WEKA), using two supervised machine learning algorithms, support vector machine (SVM) and random forest (RF). The assessment shows that the responses of the different vegetation types and the soil water content vary between the dry and wet seasons. The WEKA models produced strong positive relationships (r = 0.76, 0.73, and 0.81) between the NDVI values of all vegetation in the sub-basins and soil water content (SWC), groundwater flow (GW), and the combination of these two variables, respectively, during the dry season. However, these responses were reduced by 36.8% (r = 0.48) and 13.6% (r = 0.63) against GW and SWC, respectively, in the wet season. Although the rainfall pattern is highly variable in the study area, summer rainfall is very effective for the growth of the grass vegetation type. This study has enriched our knowledge of vegetation responses to groundwater in each season, which will facilitate better floodplain vegetation management.

Keywords: ArcSWAT, machine learning, floodplain vegetation, MODIS NDVI, groundwater

Procedia PDF Downloads 81
6104 Supply Chains Resilience within Machine-Made Rug Producers in Iran

Authors: Malihe Shahidan, Azin Madhi, Meisam Shahbaz

Abstract:

In recent decades, the role of supply chains in sustaining businesses and establishing their superiority in the market has come under focus. The realization of the goals and strategies of a business enterprise is largely dependent on the cooperation of the chain, including suppliers, distributors, retailers, etc. Supply chains can potentially be disrupted by both internal and external factors. In this paper, resilience strategies are identified and analyzed at three levels, sourcing, production, and distribution, considering economic depression as a current risk factor for the machine-made rug industry. Semi-structured interviews were used for data gathering and thematic analysis for data analysis. Supply chain data were gathered from seven rug factories, before and after the economic depression, through semi-structured interviews. The identified strategies were derived from a literature review and validated by collecting data from a group of eighteen industry and university experts, and the results were analyzed using statistical tests. Finally, outsourcing of new products and of products for new markets, development and completion of the product portfolio, flexibility in the composition and volume of products, expansion of the market to price-sensitive segments, direct sales, and disintermediation were determined to be the strategies affecting the supply chain resilience of the machine-made rug industry during an economic depression.

Keywords: distribution, economic depression, machine-made rug, outsourcing, production, sourcing, supply chain, supply chain resilience

Procedia PDF Downloads 139
6103 The Wear Recognition on Guide Surface Based on the Feature of Radar Graph

Authors: Youhang Zhou, Weimin Zeng, Qi Xie

Abstract:

In order to solve the wear recognition problem of the machine tool guide surface, a new recognition method based on the radar-graph barycentre feature is presented in this paper. Firstly, the gray mean value, skewness, projection variance, flatness and kurtosis of the guide surface image data are defined as the primary characteristics. Secondly, data visualization based on radar graphs is used: the visual barycentre feature is derived from the radar plot of the multi-dimensional data. Thirdly, a classifier based on support vector machine technology is used; the radar-graph barycentre feature and the original wear features are fed into the classifier separately, and the classification and experimental results are compared. The calculation and experimental results show that the method based on the radar-graph barycentre feature can detect wear on the guide surface effectively.
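
The barycentre feature itself is easy to state: plot the normalised feature values on equally spaced radar axes and take the mean of the resulting vertex coordinates. The Python sketch below illustrates that computation under this interpretation; the feature values are hypothetical.

```python
# Sketch: barycentre of a radar plot built from five image features.
# Feature values are placeholders; axes are spaced at equal angles.
import math

def radar_barycentre(values):
    """Return the (x, y) mean of the radar-plot vertices for the given values."""
    n = len(values)
    pts = [(v * math.cos(2 * math.pi * k / n), v * math.sin(2 * math.pi * k / n))
           for k, v in enumerate(values)]
    x = sum(p[0] for p in pts) / n
    y = sum(p[1] for p in pts) / n
    return x, y

# gray mean, skewness, projection variance, flatness, kurtosis (normalised to [0, 1])
features_worn = [0.62, 0.80, 0.55, 0.30, 0.71]
features_new  = [0.40, 0.35, 0.45, 0.50, 0.38]
print(radar_barycentre(features_worn), radar_barycentre(features_new))
```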

Keywords: guide surface, wear defects, feature extraction, data visualization

Procedia PDF Downloads 498
6102 Improving the Flow Capacity (Cv) of the Valves

Authors: Pradeep A. G, Gorantla Giridhar, Vijay Turaga, Vinod Srinivasa

Abstract:

A major problem in flow control valves is a low flow coefficient (Cv), which reduces the overall efficiency of the flow circuit. Designers are continuously working to improve the Cv of the valve, but they need to validate their design ideas for improving it. The traditional method of prototyping and testing takes a lot of time; this is where CFD comes into the picture, offering quick and accurate validation along with flow visualization, which is not possible with the traditional testing method. We have developed a method to predict the Cv value using CFD analysis by iterating on various boundary conditions and solver settings and by carrying out grid convergence studies to establish the correlation between the CFD model and test data. The present study investigates three different ideas put forward by the designers for improving the flow capacity of the valves: reducing the cage thickness, changing the port position, and using a parabolic plug to guide the flow. Using CFD, we analyzed all design changes with the established methodology and evaluated their effect on the valve Cv. We further optimized the wetted surface of the valve by suggesting a design modification to the lower part of the valve to make the flow more streamlined. We found that changing the cage thickness and the port position has little impact on the valve Cv, whereas the combination of the optimized wetted surface and the introduction of the parabolic plug improved the flow capacity (Cv) of the valve significantly.
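
For context, the flow coefficient in US customary units relates volumetric flow to pressure drop as Cv = Q·sqrt(SG/ΔP), with Q in US gal/min, ΔP in psi and SG the specific gravity, so a CFD-predicted flow rate and pressure drop can be converted to a Cv in the same way. The sketch below shows that conversion; the numbers are hypothetical.

```python
# Sketch: computing Cv from a CFD-predicted flow rate and pressure drop,
# using the US-customary definition Cv = Q * sqrt(SG / dP).
import math

def cv_from_cfd(q_gpm: float, dp_psi: float, sg: float = 1.0) -> float:
    """q_gpm: flow in US gal/min, dp_psi: pressure drop in psi, sg: specific gravity."""
    return q_gpm * math.sqrt(sg / dp_psi)

# hypothetical CFD result: 180 gal/min of water at a 4 psi drop across the valve
print(round(cv_from_cfd(180.0, 4.0), 1))   # -> Cv = 90.0
```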

Keywords: flow control valves, flow capacity (Cv), CFD simulations, design validation

Procedia PDF Downloads 145
6101 Automated Machine Learning Algorithm Using Recurrent Neural Network to Perform Long-Term Time Series Forecasting

Authors: Ying Su, Morgan C. Wang

Abstract:

Long-term time series forecasting is an important research area for automated machine learning (AutoML). Currently, forecasting systems based on either machine learning or statistical learning are usually built by experts and require significant manual effort in model construction, feature engineering, and hyper-parameter tuning. Automation is difficult because too many human interventions are involved. To overcome these limitations, this article proposes to use recurrent neural networks (RNN), exploiting the memory state of the RNN to perform long-term time series prediction. We show that this proposed approach is better than the traditional Autoregressive Integrated Moving Average (ARIMA) model. In addition, we also found it to be better than other network architectures, including Fully Connected Neural Networks (FNN), Convolutional Neural Networks (CNN), and Non-pooling Convolutional Neural Networks (NPCNN).
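
As an illustration of the general idea (not the authors' architecture), a recurrent network whose hidden state carries information across time steps can be set up for multi-step-ahead forecasting as in the Keras sketch below; the synthetic series, windowing, horizon and all hyper-parameters are assumptions.

```python
# Sketch: LSTM-based long-term forecasting on a synthetic series.
# The data, window length, horizon and hyper-parameters are illustrative only.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
series = np.sin(np.arange(2000) * 0.02) + 0.1 * rng.standard_normal(2000)

LOOKBACK, HORIZON = 48, 12        # use 48 past points to predict the next 12
X, y = [], []
for i in range(len(series) - LOOKBACK - HORIZON):
    X.append(series[i:i + LOOKBACK])
    y.append(series[i + LOOKBACK:i + LOOKBACK + HORIZON])
X = np.array(X)[..., None]        # shape: (samples, LOOKBACK, 1)
y = np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(LOOKBACK, 1)),  # memory state carries context
    tf.keras.layers.Dense(HORIZON),                       # direct multi-step output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

print(model.predict(X[-1:], verbose=0).shape)             # -> (1, 12)
```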

Keywords: automated machine learning, autoregressive integrated moving average, neural networks, time series analysis

Procedia PDF Downloads 75
6100 Analysis of Fuel Efficiency in Heavy Construction Compaction Machine and Factors Affecting Fuel Efficiency

Authors: Amey Kulkarni, Paavan Shetty, Amol Patil, B. Rajiv

Abstract:

Fuel efficiency plays a very important role in the overall performance of an automobile. In this paper, the fuel efficiency of a heavy construction compaction machine is studied. Fuel consumption trials were performed in order to obtain the fuel consumed while the compactor performs a defined set of actions. Heavy construction machines are usually put to work in locations where refilling the fuel tank is not an easy task, and they consume fuel at a greater rate than a passenger vehicle, so a fuel-efficient machine is important for long working hours. Fuel efficiency is also the most important point in determining the future scope of the product. A heavy construction compaction machine operates in five major roles: travelling, static working, high-frequency low-amplitude compaction, low-frequency high-amplitude compaction, and low idle. Fuel consumption readings at engine speeds of 1950 rpm, 2000 rpm and 2350 rpm were taken using a differential fuel flow meter and analyzed, and the optimum rpm setting that fulfils both the fuel efficiency and the engine performance criteria was selected. Other factors, such as rear-end gears, intake and exhaust restriction of the engine, vehicle operating techniques, air drag, tribological aspects, and tires, were also considered for increasing the fuel efficiency of the compactor. The fuel efficiency of the compactor can be measured precisely using a differential fuel flow meter. By testing the compactor at different combinations of engine rpm and also considering the other factors above, an optimum solution was obtained, which led to a significant improvement in the fuel efficiency of the compactor.

Keywords: differential fuel flow meter, engine RPM, fuel efficiency, heavy construction compaction machine

Procedia PDF Downloads 271
6099 Integration of Virtual Learning of Induction Machines for Undergraduates

Authors: Rajesh Kumar, Puneet Aggarwal

Abstract:

In the context of understanding the problems faced by undergraduate students while carrying out laboratory experiments dealing with high voltages, it was found that most students are hesitant to work directly on the machine, because an error in the circuitry might lead to damage to the machine and the laboratory instruments. It has therefore become necessary to include modern pedagogic techniques for undergraduate students, which help them first carry out the experiment in a virtual system and then work on the live circuit. A further advantage is that students can try out their intuitive ideas in the virtual environment, which can lead to new research and innovations. In this paper, the virtual environment used is MATLAB/Simulink for three-phase induction machines. The performance analysis of the three-phase induction machine is carried out in the virtual environment and includes the direct current (DC) test, the no-load test, and the block rotor test, along with speed-torque characteristics for different rotor resistances and input voltages. The paper also covers computer-aided teaching of basic voltage source inverter (VSI) drive circuitry. This gives undergraduates a clearer view of the experiments performed on the virtual machine (no-load test, block rotor test and DC test). After successful implementation of the basic tests, the VSI circuitry is implemented, and the associated total harmonic distortion (THD) and the Fast Fourier Transform (FFT) of the current and voltage waveforms are studied.
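
To make the test-based analysis concrete, the standard per-phase equivalent-circuit parameters can be estimated from DC, no-load and block rotor test data as in the Python sketch below; the measured values are hypothetical, and the usual simplifying assumptions (such as an even split of leakage reactance between stator and rotor) are made. This mirrors the hand calculation students would perform after running the virtual tests.

```python
# Sketch: per-phase induction-machine parameters from DC, no-load and
# block rotor test data (hypothetical readings, star-connected stator).
import math

# DC test: stator resistance per phase
V_dc, I_dc = 24.0, 10.0
R1 = (V_dc / I_dc) / 2.0                      # measured across two phases of a star winding

# Block rotor test (per-phase quantities)
V_br, I_br, P_br = 45.0, 10.0, 350.0          # line-to-neutral V, A, W per phase
R_br = P_br / I_br**2                         # total resistance R1 + R2'
R2 = R_br - R1                                # referred rotor resistance
Z_br = V_br / I_br
X_br = math.sqrt(Z_br**2 - R_br**2)           # total leakage reactance X1 + X2'
X1 = X2 = X_br / 2.0                          # assume an even split

# No-load test (per-phase quantities)
V_nl, I_nl, P_nl = 230.0, 4.0, 200.0
Z_nl = V_nl / I_nl
R_nl = P_nl / I_nl**2
X_nl = math.sqrt(Z_nl**2 - R_nl**2)           # approximately X1 + Xm
Xm = X_nl - X1                                # magnetising reactance

print(round(R1, 3), round(R2, 3), round(X1, 2), round(Xm, 2))
```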

Keywords: block rotor test, DC test, no load test, virtual environment, voltage source inverter

Procedia PDF Downloads 335
6098 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose an implementation of an optimization-based Extreme Learning Machine (ELM) for watermarking the B channel of a color image in the discrete wavelet transform (DWT) domain. ELM is a regularization algorithm based on generalized single-hidden-layer feed-forward neural networks (SLFNs); its hidden-layer parameters, generally called the feature mapping in the context of ELM, do not need to be tuned every time. This paper shows the watermark embedding and extraction processes with the help of ELM, and the results are compared with those of machine learning models already used for watermarking. A cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. ELM provides a unified learning platform with a feature mapping, that is, a mapping between the hidden layer and the output layer of the SLFN, which is used here for watermark embedding and extraction in the cover image. ELM has widespread applications ranging from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very low complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on the watermarked image, even when the image is subjected to different types of geometric and conventional attacks.
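
The core ELM computation is simple enough to sketch: random, untuned hidden-layer weights produce a feature mapping H, and the output weights are obtained in closed form by a regularized least-squares solve. The Python sketch below illustrates this for a generic regression task; it is not the paper's watermarking pipeline, and the data are placeholders.

```python
# Sketch: a basic regularized ELM regressor (random hidden layer + ridge solve).
# This illustrates the learning scheme only, not the DWT watermarking pipeline.
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50, C=1.0):
    """Return (W, b, beta): random input weights/biases and solved output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))   # untuned feature mapping
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden-layer output matrix
    # regularized least squares: beta = (H^T H + I/C)^-1 H^T y
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.uniform(-1, 1, size=(200, 4))                 # placeholder inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2              # placeholder targets
W, b, beta = elm_train(X, y)
print(float(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```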

Keywords: BER, DWT, extreme learning machine (ELM), PSNR

Procedia PDF Downloads 292