Search results for: post-editing machine translation output
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5194

514 News Reading Practices: Traditional Media versus New Media

Authors: Nuran Öze

Abstract:

People always want to be aware of what is happening around them. Human nature constantly triggers the need to gather information out of curiosity, and the media emerged to meet this need. The media has changed and diversified with technological developments over time, and people's information needs are now met in different ways. Today, the Internet has become an integral part of everyday life; its penetration into everyday practices affects every aspect of life and causes people to change their life practices. Technological developments have always influenced the way people reach information. Looking at the history of the media, the breaking point in the dissemination of information is the invention of the printing press. The adventure that started with the written media has now become a multi-dimensional structure: written, audio, and visual media have changed shape with new technologies. The emergence of the Internet in everyday life has naturally affected the media field, and 'new media' has appeared, containing most of the features of traditional media. While this transformation creates a harmony between traditional and new media on the one hand, on the other hand new media and traditional media rival each other. The purpose of this study is to examine the problematic relationship between traditional media and new media through the news reading practices of individuals; the study can be evaluated as a kind of media sociology. To reach this aim, two field studies will be conducted in addition to a literature review. The research will be carried out in Northern Cyprus, located in the Mediterranean Sea. Northern Cyprus is not recognized by any country except Turkey; despite this, it takes its share of the technological developments taking place in the world.
One of the field studies will consist of questionnaires on media consumers' news reading practices, conducted in a social media environment. The second field study will consist of interviews with editors-in-chief or news directors in traditional media, using the in-depth interview method. As a result of these investigations, the respects in which new media and traditional media support each other, and those in which they conflict, will be revealed. In addition, the study will attempt to understand readers' attitudes toward and perceptions of traditional and new media.

Keywords: new media, news, North Cyprus, traditional media

Procedia PDF Downloads 223
513 Psychophysiological Adaptive Automation Based on Fuzzy Controller

Authors: Liliana Villavicencio, Yohn Garcia, Pallavi Singh, Luis Fernando Cruz, Wilfrido Moreno

Abstract:

Psychophysiological adaptive automation is a concept that combines human physiological data and computer algorithms to create personalized interfaces and experiences for users. This approach aims to enhance human learning by adapting to individual needs and preferences and optimizing the interaction between humans and machines. According to neuroscience, working memory demand during the learning process changes when the student is learning a new subject or topic or managing and/or fulfilling a specific task goal. A sudden increase in working memory demand modifies the student's level of attention, engagement, and cognitive load. The proposed psychophysiological adaptive automation system adapts the task requirements to optimize cognitive load, the process output variable, by monitoring the student's brain activity. Cognitive load changes according to the student's previous knowledge, the type of task, the difficulty level of the task, and the overall psychophysiological state of the student. Scaling the measured cognitive load as low, medium, or high, the system assigns a difficulty level to the next task according to the ratio between the previous task's difficulty level and the student's stress. For instance, if a student becomes stressed or overwhelmed during a particular task, the system detects this through signal measurements such as brain waves, heart rate variability, or other psychophysiological variables and adjusts the task difficulty level. The control of engagement and stress is treated as internal to the hypermedia system, which selects among three different types of instructional material. This work assesses the feasibility of a fuzzy controller that tracks a student's physiological responses and adjusts the learning content and pace accordingly.
Using an industrial automation approach, the proposed fuzzy logic controller is based on linguistic rules that complement the instrumentation of the system to monitor and control the delivery of instructional material to the students. The test results show that the implemented fuzzy controller can satisfactorily regulate the delivery of academic content based on working memory demand without compromising students' health. This work has potential application in the instructional design of virtual reality environments for training and education.
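As an illustration, the linguistic rule base described above can be sketched as a minimal Mamdani-style fuzzy controller. The membership breakpoints, rule outputs, and the weighted-average defuzzification below are illustrative assumptions, not the authors' calibrated design:

```python
# Minimal fuzzy controller sketch: maps a measured cognitive-load score
# (normalized to 0..1) to a task-difficulty adjustment. All breakpoints
# and rule outputs are placeholder assumptions for illustration.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_difficulty_adjustment(cognitive_load):
    # Fuzzify: degrees of membership in LOW / MEDIUM / HIGH load.
    low = tri(cognitive_load, -0.5, 0.0, 0.5)
    med = tri(cognitive_load, 0.0, 0.5, 1.0)
    high = tri(cognitive_load, 0.5, 1.0, 1.5)
    # Rule base: low load -> raise difficulty (+1), medium -> hold (0),
    # high load -> lower difficulty (-1).
    rules = [(low, +1.0), (med, 0.0), (high, -1.0)]
    # Defuzzify by weighted average (a common simplification of centroid).
    total = sum(weight for weight, _ in rules)
    return sum(weight * out for weight, out in rules) / total if total else 0.0
```

A hypermedia system would then round the adjustment and pick the corresponding instructional-material type.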

Keywords: fuzzy logic controller, hypermedia control system, personalized education, psychophysiological adaptive automation

Procedia PDF Downloads 77
512 Investigating the Impacts on Cyclist Casualty Severity at Roundabouts: A UK Case Study

Authors: Nurten Akgun, Dilum Dissanayake, Neil Thorpe, Margaret C. Bell

Abstract:

Cycling has gained great attention owing to its comparable speeds, low cost, health benefits, and reduced impact on the environment. The main challenge associated with cycling is providing safety for the people who choose to cycle as their main means of transport. From a road safety point of view, cyclists are considered vulnerable road users because they are at higher risk of serious casualty in the urban network, and more specifically at roundabouts. This research develops an enhanced mathematical model that includes a broad spectrum of casualty-related variables: geometric design measures (number of approach lanes and entry path radius), speed limit, meteorological condition variables (light, weather, road surface), and socio-demographic characteristics (age and gender), as well as contributory factors. Contributory factors included driver-behaviour variables such as failing to look properly, sudden braking, a vehicle passing too close to a cyclist, junction overshoot, failing to judge another person's path, restarting or moving off at the junction, a poor turn or manoeuvre, and disobeying give-way rules. Tyne and Wear in the UK was selected as the case study area. The cyclist casualty data were obtained from the UK STATS19 national dataset. The outcome of the regression model was binary, slight versus serious cyclist casualty; therefore, binary logistic regression was applied. The analysis showed that the number of approach lanes was statistically significant at the 95% confidence level: a higher number of approach lanes increased the probability of a severe cyclist casualty. In addition, sudden braking significantly increased cyclist casualty severity at the 95% confidence level. It is concluded that cyclist casualty severity is strongly related to the number of approach lanes and to sudden braking.
Further research should carry out an in-depth analysis exploring the connection between sudden braking and the number of approach lanes in order to investigate driver behaviour at approach locations. The output of this research will inform investment in measures to improve the safety of cyclists at roundabouts.
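The modeling step described above can be illustrated with a toy binary logistic fit (serious = 1, slight = 0) by stochastic gradient descent. The data, predictors, and learning settings below are invented for illustration; they are not the STATS19 data or the paper's fitted model:

```python
# Toy binary logistic regression via stochastic gradient descent.
# Predictors mimic the paper's significant variables: number of approach
# lanes and a sudden-braking indicator. Data are invented.
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights w and intercept b minimizing log loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(serious)
            err = p - yi                     # gradient of log loss w.r.t. z
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

# Rows: [approach_lanes, sudden_braking]; labels: 1 = serious casualty.
X = [[1, 0], [1, 0], [2, 0], [2, 1], [3, 1], [3, 1], [4, 1], [4, 0]]
y = [0, 0, 0, 1, 1, 1, 1, 1]
w, b = fit_logistic(X, y)
# A positive coefficient means the predictor raises the odds of a
# serious casualty; exp(w[i]) is the corresponding odds ratio.
```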

Keywords: binary logistic regression, casualty severity, cyclist safety, roundabout

Procedia PDF Downloads 171
511 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Human motion recognition has received extensive attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, and content-based video compression and retrieval. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem that requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification avoids the misclassification that can happen when recognizing similar motions. Two experiments are conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods that used the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
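The DHMM scoring that underlies this classification can be sketched with the standard forward algorithm; the two-direction idea is mimicked here by scoring the reversed sequence with the same model. The parameters below are toy values, not trained gesture models:

```python
# Forward algorithm for a discrete HMM: log P(observations | model).
# A classifier would keep one (or, as in the paper's idea, two) model(s)
# per gesture class and pick the class with the highest score.
import math

def forward_log_likelihood(obs, pi, A, B):
    """obs: symbol indices; pi: initial probs; A: transitions; B: emissions."""
    n_states = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n_states)) * B[t][o]
                 for t in range(n_states)]
    return math.log(sum(alpha))

# Toy 2-state model over a 3-symbol alphabet (placeholder parameters).
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]

seq = [0, 1, 2, 2]
score_fwd = forward_log_likelihood(seq, pi, A, B)
score_bwd = forward_log_likelihood(seq[::-1], pi, A, B)
# Combining forward- and backward-direction scores per class is one way
# to separate similar motions, as the modified DHMM scheme suggests.
combined = score_fwd + score_bwd
```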

Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model

Procedia PDF Downloads 204
510 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of artificial intelligence (AI). AI concepts are applicable in human-computer interaction (HCI), expert systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool, used mostly by deaf communities and people with speech disorders. Communication barriers exist when these communities interact with others. This research aims to build a hand gesture recognition system for interpretation between Lesotho's Sesotho and English, to help bridge the communication problems encountered by these communities. The system has various processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Canny-pruned Haar and Haar cascade detection algorithms. Canny pruning implements Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and centroid to assist in the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered: the higher the light intensity, the faster the detection rate.
Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system that can be used for sign language interpretation.
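The skin-detection and centroid steps described above can be sketched with a rule-based RGB threshold. The threshold values are a commonly cited heuristic, not necessarily the rule used in the authors' EmguCV pipeline:

```python
# Rule-based skin detection plus centroid computation, as a minimal sketch
# of the detection-assist steps named in the abstract. Threshold values
# are a widely used heuristic and are assumptions here.

def is_skin(r, g, b):
    """Heuristic RGB rule for likely skin pixels (0..255 channels)."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_centroid(image):
    """image: rows of (r, g, b) tuples. Returns (row, col) centroid of
    detected skin pixels, or None if no skin was found."""
    pts = [(y, x) for y, row in enumerate(image)
           for x, px in enumerate(row) if is_skin(*px)]
    if not pts:
        return None
    ys, xs = zip(*pts)
    return sum(ys) / len(pts), sum(xs) / len(pts)
```

In a full pipeline the centroid would seed hand tracking before the convex hull and template matching stages.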

Keywords: canny pruning, hand recognition, machine learning, skin tracking

Procedia PDF Downloads 179
509 Deep Cryogenic Treatment With Subsequent Aging Applied to Martensitic Stainless Steel: Evaluation of Hardness, Toughness and Microstructure

Authors: Victor Manuel Alcántara Alza

Abstract:

The effect of deep cryogenic treatment (DCT, -196°C) applied with subsequent aging on the hardness, toughness, and microstructure of martensitic stainless steels was investigated, with the aim of establishing a methodology different from the traditional DCT treatment with subsequent tempering. For this experimental study, a muffle furnace was used. The specimens were first austenitized at 1020, 1030, 1040, and 1050 (°C) for 1 hour, quenched in oil, and then subjected to deep cryogenization in a liquid nitrogen bath for 4 hours. A first group of cryogenized samples was subjected to subsequent aging at 150°C with immersion times of 2.5, 5, 10, 20, 50, and 100 (h). The next group was subjected to subsequent tempering at 480, 500, 510, 520, 530, and 540 (°C) for 2 h. The hardness tests were carried out under standards using a universal durometer, with readings taken on the HRC scale. The impact resistance tests were carried out in a Charpy machine following the ASTM E 23-93a standard, with measurements taken in joules. Microscopy was performed at the optical level using a 1000X microscope. It was found that, for the entire aging interval, the samples austenitized at 1050°C present greater hardness than those austenitized at 1040°C, with the maximum peak at an aging time of 30 h. In all cases, the aged samples exceed the hardness of the tempered samples, even at their minimum values. In the post-tempered samples, the tempering temperature hardly has any effect on the impact strength of the material. In the cryogenic treatment DCT + subsequent aging, the maximum hardness value (58.7 HRC) is linked to an impact toughness value (54 J) obtained with an aging time of 39 h, which is considered an optimal condition. The higher hardness of the steel after the DCT treatment is attributed to the transformation of retained austenite into martensite.
The microstructure is composed mainly of lath martensite, and the original austenite grain size can be appreciated. The choice of the hardness-toughness combination is subject to the required service conditions of the steel.

Keywords: deep cryogenic treatment, aging precipitation, martensitic steels, mechanical properties, hardness, carbide precipitation

Procedia PDF Downloads 71
508 Performance Evaluation of Production Schedules Based on Process Mining

Authors: Kwan Hee Han

Abstract:

The external environment of enterprises is changing rapidly, driven mainly by global competition, cost-reduction pressures, and new technology. In this situation, the production scheduling function plays a critical role in meeting customer requirements and attaining operational efficiency. It deals with short-term decision making in the production process of the whole supply chain. The major task of production scheduling is to seek a balance between customer orders and limited resources. In manufacturing companies this task is difficult because it must efficiently utilize resource capacity under careful consideration of many interacting constraints. At present, many computerized software solutions are used in enterprises to generate realistic production schedules and overcome the complexity of schedule generation. However, most production scheduling systems do not provide sufficient information about the validity of the generated schedule beyond limited statistics. Process mining has only recently emerged as a sub-discipline of both data mining and business process management. Process mining techniques enable the useful analysis of a wide variety of processes, including process discovery, conformance checking, and bottleneck analysis. In this study, the performance of a generated production schedule is evaluated by mining the event log data of the production scheduling software system using process mining techniques, since every software system generates event logs for later use such as security investigation, auditing, and debugging. An application of the process mining approach is proposed for validating the goodness of production schedules generated by scheduling software systems.
By using process mining techniques, major evaluation criteria such as workstation utilization, the existence of bottleneck workstations, critical process route patterns, and the workload balance of each machine over time are measured, and finally the goodness of the production schedule is evaluated. By using the proposed process mining approach, the quality of production schedules in manufacturing enterprises can be improved.
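One of the evaluation criteria named above, workstation utilization, can be computed directly from an event log. The tuple layout and the toy log below are assumptions for illustration; real process-mining logs (e.g. the XES format) carry richer event attributes:

```python
# Computing per-workstation utilization from a toy schedule event log.
# events: (workstation, start_time, end_time) tuples; horizon: the length
# of the scheduling window being evaluated.
from collections import defaultdict

def workstation_utilization(events, horizon):
    """Return the busy-time fraction of the horizon for each workstation."""
    busy = defaultdict(float)
    for station, start, end in events:
        busy[station] += end - start
    return {station: t / horizon for station, t in busy.items()}

# Invented log over a 10-unit horizon.
log = [
    ("drill", 0, 4), ("drill", 5, 8),   # 7 of 10 time units busy
    ("lathe", 2, 4),                    # 2 of 10 time units busy
]
util = workstation_utilization(log, horizon=10)
# A workstation whose utilization approaches 1.0 is a bottleneck candidate.
```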

Keywords: data mining, event log, process mining, production scheduling

Procedia PDF Downloads 278
507 Interfacial Adhesion and Properties Improvement of Polyethylene/Thermoplastic Starch Blend Compatibilized by Stearic Acid-Grafted-Starch

Authors: Nattaporn Khanoonkon, Rangrong Yoksan, Amod A. Ogale

Abstract:

Polyethylene (PE) is one of the most used petroleum-based thermoplastic materials in many applications, including packaging, because it is cheap, lightweight, chemically inert, and capable of being converted into products of various shapes and sizes. Although PE is a commercially important material, its non-biodegradability causes environmental problems. At present, bio-based polymers are becoming more interesting owing to their biodegradability, non-toxicity, and renewability, as well as being eco-friendly. Thermoplastic starch (TPS) is a bio-based and biodegradable plastic produced by the plasticization of starch under heat and shear force. In many studies, TPS has been blended with petroleum-based polymers, including PE, in order to reduce the cost and the use of those polymers. However, the phase separation between hydrophobic PE and hydrophilic TPS limits the amount of TPS that can be incorporated. The immiscibility of two polymers of different polarity can be diminished by adding a compatibilizer. PE-based compatibilizers, e.g. polyethylene-grafted-maleic anhydride and polyethylene-co-vinyl alcohol, have been applied to the PE/TPS blend system to improve miscibility. Until now, there has been no report on the utilization of a starch-based compatibilizer for the PE/TPS blend system. The aims of the present research were therefore to synthesize a new starch-based compatibilizer, stearic acid-grafted starch (SA-g-starch), and to study its effect on the chemical interaction, morphological properties, tensile properties, and water vapor and oxygen barrier properties of PE/TPS blend films. PE/TPS blends without and with SA-g-starch at contents of 1, 3, and 5 part(s) per hundred parts of starch (phr) were prepared using a twin-screw extruder and then blown into films using a film blowing machine.
Incorporating 1 phr and 3 phr of SA-g-starch improved the miscibility of the two polymers, as confirmed by the reduction of the TPS phase size and the good dispersion of the TPS phase in the PE matrix. In addition, the blends containing 1 phr and 3 phr of SA-g-starch exhibited higher tensile strength and extensibility, as well as lower water vapor and oxygen permeabilities, than the uncompatibilized blend. These results suggest that SA-g-starch could be applied as a compatibilizer for the PE/TPS blend system.

Keywords: blend, compatibilizer, polyethylene, thermoplastic starch

Procedia PDF Downloads 436
506 Austempered Compacted Graphite Irons: Influence of Austempering Temperature on Microstructure and Microscratch Behavior

Authors: Rohollah Ghasemi, Arvin Ghorbani

Abstract:

This study investigates the effect of austempering temperature on the microstructure and scratch behavior of austempered compacted graphite irons (CGI). The as-cast material was used as the base for the heat treatment practices. The samples were extracted from as-cast ferritic CGI pieces and heat treated at an austenitizing temperature of 900°C for 60 minutes, followed by quenching in a salt bath at austempering temperatures of 275°C, 325°C, and 375°C. An austempering holding time of 30 minutes was selected for all heat treatments. Light optical microscopy (LOM), scanning electron microscopy (SEM), and electron backscatter diffraction (EBSD) analysis confirmed that an ausferritic matrix formed in all heat-treated samples. Microscratches were performed under loads of 200, 600, and 1000 mN using a sphero-conical diamond indenter with a tip radius of 50 μm and an included cone angle of 90°, at a speed of 10 μm/s and room temperature (~25°C). An instrumented nanoindentation machine was used for the nanoindentation hardness measurements and the microscratch testing. The hardness measurements and scratch testing showed a significant increase in Brinell, Vickers, and nanoindentation hardness values, as well as in microscratch resistance, for the heat-treated samples compared to the as-cast ferritic sample. The increase in hardness and the improvement in microscratch resistance are associated with the formation of an ausferrite matrix consisting of carbon-saturated retained austenite and acicular ferrite. The maximum hardness was observed for the samples austempered at 275°C, which resulted in the formation of very fine acicular ferrite. In addition, the nanohardness values showed quite significant variation within the matrix due to the presence of acicular ferrite and carbon-saturated retained austenite.
It was also observed that increasing the austempering temperature increased the volume of carbon-saturated retained austenite and decreased the hardness values.

Keywords: austempered CGI, austempering, scratch testing, scratch plastic deformation, scratch hardness

Procedia PDF Downloads 131
505 On Cloud Computing: A Review of the Features

Authors: Assem Abdel Hamed Mousa

Abstract:

The Internet of Things probably already influences your life, and if it doesn't, it soon will, say computer scientists. Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by many people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives. Alan Kay of Apple calls this "Third Paradigm" computing. Ubiquitous computing is essentially the term for human interaction with computers in virtually everything, and it is roughly the opposite of virtual reality. Where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out in the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences. The approach: activate the world. Provide hundreds of wireless computing devices per person per office, at all scales (from 1-inch displays to wall-sized ones). This has required new work in operating systems, user interfaces, networks, wireless, displays, and many other areas. This "ubiquitous computing" is different from PDAs, dynabooks, or information at your fingertips: it is invisible, everywhere computing that does not live on a personal device of any sort but is in the woodwork everywhere. The initial incarnation of ubiquitous computing was in the form of "tabs", "pads", and "boards" built at Xerox PARC from 1988 to 1994, and several papers describe this work. Ubiquitous computing will drastically reduce the cost of digital devices and tasks for the average consumer.
With labor-intensive components such as processors and hard drives stored in the remote data centers powering the cloud, and with pooled resources giving individual consumers the benefits of economies of scale, consumers may pay monthly fees, similar to a cable bill, for services delivered to their phones.

Keywords: internet, cloud computing, ubiquitous computing, big data

Procedia PDF Downloads 378
504 Adaption to Climate Change as a Challenge for the Manufacturing Industry: Finding Business Strategies by Game-Based Learning

Authors: Jan Schmitt, Sophie Fischer

Abstract:

After the Corona pandemic, climate change is a further, long-lasting challenge society must deal with. Ongoing climate change needs to be prevented; nevertheless, adaptation to the already changed climate conditions has to be addressed in many sectors. The Corona crisis has recently shown the decisive role of economic sectors with high value added. Hence, the manufacturing industry, as such a sector, needs to be prepared for climate change and adaptation. Several examples from the manufacturing industry show the importance of a strategic effort in this field: outsourcing major parts of the value chain to suppliers in other countries and optimizing procurement logistics in a time-, storage-, and cost-efficient manner within a network of global value creation can make companies vulnerable to climate-related disruptions. For example, the total damage costs after the 2011 flood disaster in Thailand, including costs for delivery failures, were estimated at 45 billion US dollars worldwide. German car manufacturers were also affected by supply bottlenecks and had to close their plants in Thailand for a short time; another OEM had to reduce its production output. In this contribution, a game-based learning approach is presented that should enable manufacturing companies to derive their own strategies for climate adaptation out of a mix of different actions. The approach is designed based on data from a regional study of small, medium, and large manufacturing companies in Mainfranken, a strongly industrialized region of northern Bavaria (Germany). From this, the actual state of climate adaptation efforts is evaluated. First, the results are used to collect single actions for manufacturing companies, and second, further actions are identified. Then, a variety of climate adaptation activities are clustered according to the company's scope of activity. The combination of different actions, e.g.
renewing the building envelope with regard to thermal insulation, together with their benefits and drawbacks, leads to a specific climate adaptation strategy for each company. Within the game-based approach, the players take on different roles in a fictional company and discuss the order and characteristics of each action included in their climate adaptation strategy. Indicators such as economic and ecological performance and stakeholder satisfaction compare the success of the respective measures in a competitive format with other virtual companies deriving their own strategies. Playing through climate change scenarios with targeted adaptation actions illustrates the impact of different actions and their combinations on the fictional company.

Keywords: business strategy, climate change, climate adaption, game-based learning

Procedia PDF Downloads 203
503 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions

Authors: M. Tarik Boyraz, M. Bilge Imer

Abstract:

Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in the manufacturing of industrial gas turbine blades. With a carefully designed microstructure and the existence of alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate these mechanical properties of the IN 738 LC alloy solely based on simulations for projected heat treatment or service conditions. The microstructure of IN 738 LC (size, fraction, and frequency of the gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and grain size) needs to be optimized to improve the high-temperature mechanical properties by heat treatment. This process can be performed at different soaking temperatures, times, and cooling rates. In this work, microstructural evolution studies were performed experimentally at various heat treatment conditions, and these findings were used as input for the simulation studies. The operation time, soaking temperature, and cooling rate provided by the experimental heat treatment procedures were used as microstructural simulation input. The simulation results were compared with the size, fraction, and frequency of the γ′ and carbide phases and the grain size provided by SEM (EDS module and mapping), EPMA (WDS module), and optical microscopy before and after heat treatment. After iterative comparison of the experimental findings and simulations, an offset was determined to fit the experimental and theoretical findings. Thereby, it was possible to estimate the final microstructure without the necessity of carrying out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was then used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of the precipitation, solid solution, and grain boundary strengthening contributions in the microstructure.
Creep rate was calculated as a function of stress, temperature, and microstructural factors such as dislocation density, precipitate size, and the inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the heat treatment conditions that achieve the desired microstructural and mechanical properties was thus developed for IN 738 LC based entirely on simulations.
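The linear superposition of strengthening contributions described above can be sketched with a Hall-Petch term for the grain boundary contribution. This is a generic strengthening model, not the authors' fitted formulation, and all numeric inputs are placeholders rather than IN 738 LC parameters:

```python
# Generic yield-stress superposition sketch:
#   sigma_y = sigma_0 + sigma_ss + sigma_ppt + k_hp / sqrt(d)
# where sigma_0 is lattice friction, sigma_ss solid-solution strengthening,
# sigma_ppt precipitate (gamma-prime) strengthening, and the last term is
# Hall-Petch grain-boundary strengthening. Units: MPa, grain size d in um,
# k_hp in MPa*um**0.5. All values below are illustrative placeholders.
import math

def yield_stress(sigma0, sigma_ss, sigma_ppt, k_hp, d_um):
    """Estimated yield stress in MPa for grain size d_um (micrometres)."""
    return sigma0 + sigma_ss + sigma_ppt + k_hp / math.sqrt(d_um)

coarse = yield_stress(50, 120, 400, 600, 100)  # d = 100 um
fine = yield_stress(50, 120, 400, 600, 25)     # d = 25 um
# Refining the grain size raises the Hall-Petch term and hence the
# computed yield stress, all other contributions held fixed.
```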

Keywords: heat treatment, IN738LC, simulations, super-alloys

Procedia PDF Downloads 245
502 Discerning Divergent Nodes in Social Networks

Authors: Mehran Asadi, Afrand Agah

Abstract:

In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules, which can be applied to classification trees. In this research, we used online social network dataset and all of its attributes (e.g., Node features, labels, etc.) to determine what constitutes an above average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were found on the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we utilize overfitting and describe different approaches for evaluation and performance comparison of different classification methods. In classification, the main objective is to categorize different items and assign them into different groups based on their properties and similarities. In data mining, recursive partitioning is being utilized to probe the structure of a data set, which allow us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions, with limited data. Of course, we do not know the densities, but we could estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the result classification methods and utilized decision trees (with k-fold cross validation to prune the tree). The method performed on this dataset is decision trees. 
A decision tree is a non-parametric classification method that uses a set of rules to assign each observation to the most commonly occurring class label of the corresponding partition of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
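The core of recursive partitioning is choosing, at each node, the split that minimizes class impurity. A minimal sketch (the feature, labels, and threshold search are invented for illustration; this is not the UCI social-network dataset or the authors' R code):

```python
# Minimal illustration of recursive partitioning: choose the split that
# minimizes weighted Gini impurity, as a decision tree does at each node.

def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum of squared class shares."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Return (threshold, weighted impurity) of the best split on one feature."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if w < best[1]:
            best = (t, w)
    return best

# Toy feature (e.g. node "density") and divergent/normal labels.
density = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
labels = ["normal", "normal", "normal", "divergent", "divergent", "divergent"]
t, impurity = best_split(density, labels)
print(t, impurity)  # the clean split at 0.3 gives zero impurity
```

A full tree applies this search recursively to each resulting partition; cross-validated pruning then removes splits that do not improve held-out accuracy.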

Keywords: online social networks, data mining, social cloud computing, interaction and collaboration

Procedia PDF Downloads 153
501 System DietAdhoc® - A Fusion of Human-Centred Design and Agile Development for the Explainability of AI Techniques Based on Nutritional and Clinical Data

Authors: Michelangelo Sofo, Giuseppe Labianca

Abstract:

In recent years, the scientific community's interest in the exploratory analysis of biomedical data has increased exponentially. In the field of nutritional biology, the curative process, based on the analysis of clinical data, is a very delicate operation, because there are multiple solutions for the management of food-related pathologies (for example, intolerances and allergies, management of cholesterol metabolism, diabetic pathologies, arterial hypertension, and even obesity and breathing and sleep problems). In this regard, this research work created a system capable of evaluating various dietary regimes for specific patient pathologies. The system is founded on a mathematical-numerical model and has been tailored to the real working needs of an expert in human nutrition using human-centred design (ISO 9241-210); it is therefore in step with continuous scientific progress in the field and evolves through the experience of managed clinical cases (a machine learning process). DietAdhoc® is a decision support system for nutrition specialists treating patients of both sexes (from 18 years of age), developed with an agile methodology. Its task is to draw up the biomedical and clinical profile of the specific patient by applying two algorithmic optimization approaches to nutritional data, together with a symbolic solution obtained by transforming the relational database underlying the system into a deductive database. For all three solution approaches, particular emphasis has been given to the explainability of the suggested clinical decisions through flexible and customizable user interfaces. Furthermore, the system has multiple software modules, based on time series and visual analytics techniques, that allow the complete picture of the situation and the evolution of the diet assigned for specific pathologies to be evaluated.
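The deductive-database idea can be sketched as forward chaining over clinical facts: stored relations plus a rule that derives diet suggestions. The predicate names, facts, and rule below are invented for illustration and are not DietAdhoc®'s actual schema:

```python
# Toy Datalog-style deduction: derive suggest(P, D) from the rule
#   suggest(P, D) :- has_condition(P, C), diet_for(C, D).
# All facts and predicate names are hypothetical.

facts = {
    ("has_condition", "p1", "hypertension"),
    ("has_condition", "p2", "diabetes"),
    ("diet_for", "hypertension", "low_sodium"),
    ("diet_for", "diabetes", "low_glycemic"),
}

def forward_chain(facts):
    """Apply the single rule to a fixpoint, returning all derivable facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for f1 in list(derived):
            for f2 in list(derived):
                if f1[0] == "has_condition" and f2[0] == "diet_for" and f1[2] == f2[1]:
                    new = ("suggest", f1[1], f2[2])
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

suggestions = sorted(t for t in forward_chain(facts) if t[0] == "suggest")
print(suggestions)
```

A real deductive database evaluates many such rules efficiently; the point here is only that suggestions become derived facts whose rule chain can be shown to the user, which is what makes the symbolic approach explainable.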

Keywords: medical decision support, physiological data extraction, data driven diagnosis, human centered AI, symbiotic AI paradigm

Procedia PDF Downloads 14
500 The Metabolism of Built Environment: Energy Flow and Greenhouse Gas Emissions in Nigeria

Authors: Yusuf U. Datti

Abstract:

It is becoming increasingly clear that the level of resource consumption now enjoyed in developed nations will be impossible to sustain worldwide. While developing countries still have the advantage of low consumption and a smaller ecological footprint per person, they cannot simply develop in the same way as western cities have developed in the past. The severe reality of population and consumption inequalities makes it contentious whether studies done in developed countries can be translated and applied to developing countries. In addition to these disparities, there are few or no studies of energy metabolism in Nigeria; the majority of energy metabolism studies have been done only in developed countries. While research in Nigeria concentrates on other aspects and principles of sustainability, such as water supply, sewage disposal, energy supply, energy efficiency, and waste disposal, which do not accurately capture the environmental impact of energy flow in Nigeria, this research sets itself apart by examining the flow of energy in Nigeria and the impact that this flow has on the environment. The aim of the study is to examine and quantify the metabolic flows of energy in Nigeria and their corresponding environmental impact. The study will quantify the level and pattern of energy inflow and the outflow of greenhouse gas emissions in Nigeria. It will describe measures to address the impact of existing energy sources and suggest alternative renewable energy sources in Nigeria that will lower greenhouse gas emissions. This study will investigate the metabolism of energy in Nigeria through a three-part methodology. The first step involves selecting and defining the study area and some variables that affect the output of energy (time of year, stability of the country, income level, literacy rate and population).
The second step involves analyzing, categorizing and quantifying the amount of energy generated by the various energy sources in the country. The third step involves analyzing what effect the variables have on the environment. To ensure a representative study area, Africa's most populous country, which has the continent's second-biggest economy and is among the largest oil-producing countries in the world, was selected. This reflects the understanding that countries with large economies and dense populations are ideal places to examine sustainability strategies; hence the choice of Nigeria for the study. National data will be utilized; where such data cannot be found, local data will be employed and aggregated to reflect the national situation. The outcome of the study will help policy-makers better target energy conservation and efficiency programs and enable early identification and mitigation of any negative environmental effects.

Keywords: built environment, energy metabolism, environmental impact, greenhouse gas emissions and sustainability

Procedia PDF Downloads 179
499 “CheckPrivate”: Artificial Intelligence Powered Mobile Application to Enhance the Well-Being of Sexually Transmitted Disease Patients in Sri Lanka under Cultural Barriers

Authors: Warnakulasuriya Arachichige Malisha Ann Rosary Fernando, Udalamatta Gamage Omila Chalanka Jinadasa, Bihini Pabasara Amandi Amarasinghe, Manul Thisuraka Mandalawatta, Uthpala Samarakoon, Manori Gamage

Abstract:

The surge in sexually transmitted diseases (STDs) has become a critical public health crisis demanding urgent attention and action. Like many other nations, Sri Lanka is grappling with a significant increase in STDs due to a lack of education and awareness regarding their dangers. Presently, the available applications for tracking and managing STDs cover only a limited number of easily detectable infections, resulting in a significant gap in effectively controlling their spread. To address this gap and combat the rising STD rates, it is essential to leverage technology and data. Employing technology to enhance the tracking and management of STDs is vital to prevent their further propagation and to enable early intervention and treatment. This requires adopting a comprehensive approach that involves raising public awareness about the perils of STDs, improving access to affordable healthcare services for early detection and treatment, and utilizing advanced technology and data analysis. The proposed mobile application aims to cater to a broad range of users, including STD patients, recovered individuals, and those unaware of their STD status. By harnessing cutting-edge technologies like image detection, symptom-based identification, prevention methods, doctor and clinic recommendations, and virtual counselor chat, the application offers a holistic approach to STD management. In conclusion, the escalating STD rates in Sri Lanka and across the globe require immediate action. The integration of technology-driven solutions, along with comprehensive education and healthcare accessibility, is the key to curbing the spread of STDs and promoting better overall public health.

Keywords: STD, machine learning, NLP, artificial intelligence

Procedia PDF Downloads 78
498 Determination of Community Based Reference Interval of Aspartate Aminotransferase to Platelet Ratio Index (APRI) among Healthy Populations in Mekelle City Tigray, Northern Ethiopia

Authors: Getachew Belay Kassahun

Abstract:

Background: The aspartate aminotransferase to platelet ratio index (APRI) has become a biomarker for screening for liver fibrosis, since the liver biopsy procedure is invasive and subject to variation in pathological interpretation. The Clinical and Laboratory Standards Institute recommends establishing age-, sex- and environment-specific reference intervals for biomarkers in a homogeneous population. The current study aimed to derive a community-based reference interval of APRI for people aged between 12 and 60 years in Mekelle city, Tigray, Northern Ethiopia. Method: Study participants were recruited from three districts in Mekelle city. The three districts were selected through a random sampling technique, and the sample size was distributed to kebelles (small administrative units) in proportion to the number of households in each district. A lottery method was used at the household level when more than two study participants for an age partition were found. Of the six hundred eighty-eight recruited participants, a community-based cross-sectional sample of 534 study participants, 264 males and 270 females, was included in the final laboratory and data analysis; around 154 study participants were excluded through the exclusion criteria. Aspartate aminotransferase was analyzed on a Biosystems chemistry analyzer, and a Sysmex machine was used to analyze platelets. The non-parametric Mann-Whitney U test was used to assess statistical differences between genders after excluding outliers using box-and-whisker plots. Result: The study found a statistically significant difference between genders for the APRI reference interval. The combined, male and female reference intervals in the current study were 0.098-0.390, 0.133-0.428 and 0.090-0.319, respectively. The upper and lower reference limits for males were higher than those for females in all age partitions, and there was no statistically significant difference between age partitions. Conclusion: The current study showed that using sex-specific reference intervals is important for the APRI biomarker in clinical practice for result interpretation.
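The index itself is a simple ratio. A minimal sketch of the standard APRI formula (the example values are illustrative, not taken from the Mekelle study population):

```python
def apri(ast_iu_l, ast_uln_iu_l, platelets_10e9_l):
    """APRI = (AST / upper limit of normal for AST) * 100 / platelet count.

    AST in IU/L; platelets in 10^9 cells per litre.
    """
    return (ast_iu_l / ast_uln_iu_l) * 100.0 / platelets_10e9_l

# An AST of 30 IU/L against a 40 IU/L upper limit, with platelets of 250 x 10^9/L:
print(round(apri(30, 40, 250), 3))  # 0.3
```

The value 0.3 falls inside the combined reference interval (0.098-0.390) reported above, which is the kind of comparison the interval is meant to support.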

Keywords: reference interval, aspartate aminotransferase to platelet ratio index (APRI), Ethiopia, Tigray

Procedia PDF Downloads 109
497 Machine That Applies Mineral Fertilizer Equally under the Soil on Slopes

Authors: Huseyn Nuraddin Qurbanov

Abstract:

The reliable food supply of the population of the republic is one of the main directions of the state's economic policy. Grain growing, the basis of agriculture, is important in this area. In the cultivation of cereals on slopes, the application of equal amounts of mineral fertilizer under the soil before sowing is a very important technological process. The low level of technical equipment in this area prevents producers from providing the country with the necessary quality cereals. Experience in the operation of modern technical means has shown that, at present, there is a need to apply an equal amount of fertilizer under the soil on slopes while fully meeting agro-technical requirements. No fundamental changes have been made to the industrial machines that place fertilizer under the soil, and fertilizer has been applied unevenly under the soil on slopes. This technological shortcoming leads to the destruction of new seedlings and reduced productivity due to frost intolerance during the winter for plants planted in the fall. In particular climatic conditions, there is an optimal fertilization rate for each agricultural product, and the proper application of fertilizer to the soil is one of the conditions that increases its efficiency in the field. As can be seen, developing a new technical proposal for fertilizing and ploughing slopes with equal fertilizer amounts, improving the technological and design parameters, and taking into account the physical and mechanical properties of fertilizers is very important. Taking the above-mentioned issues into account, a combined plough was developed in our laboratory. The combined plough carries out the pre-sowing technological operation in the cultivation of cereals, providing a smooth, equal distribution of mineral fertilizer under the soil on slopes. Mathematical models of a smooth spreader that evenly distributes fertilizer in the field have been developed.
Thus, diagrams and graphs of the distribution over the 8 partitions of the smooth spreader were constructed for the inclined angles of the slopes. The percentage and productivity of equal distribution in the field were established by practical and theoretical analysis.

Keywords: combined plough, mineral fertilizer, equal sowing, fertilizer norm, grain-crops, sowing fertilizer

Procedia PDF Downloads 133
496 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”

Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen

Abstract:

Exoplanet atmospheric parameter retrieval is a complex, computationally intensive inverse modeling problem in which an exoplanet’s atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods therefore compromises model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on previous models' speed and accuracy. We demonstrate the efficacy of artificial intelligence in quickly and reliably predicting atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to function on low-power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.

Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval

Procedia PDF Downloads 164
495 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies

Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong

Abstract:

To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit (MNL) model is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes and travel characteristics on the choice of travel mode, and identify the significant factors; suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. Futian Street in Futian District, Shenzhen City was selected for investigation and research. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance and number of trips all have a significant influence on residents' choice of travel mode. Based on these results, two policy improvement suggestions are put forward, centred on reducing public transportation and non-motorized travel times, and the policy effect is evaluated. Before this evaluation, the prediction performance of the MNL, SVM and MLP models was assessed; after parameter optimization, the prediction accuracies of the three models were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, with the highest prediction accuracy, was selected to evaluate the effect of policy improvement. The results showed that after implementation of the policy, the proportion of public transportation in plans 1 and 2 increased by 14.04% and 9.86%, respectively, while the proportion of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased markedly, while the proportion of public transport trips increased.
It can be considered that the measures have a positive effect on promoting green trips and improving the satisfaction of urban residents, and can provide a reference for relevant departments to formulate transportation policies.
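The mechanism behind such a policy evaluation can be illustrated with the multinomial-logit choice probabilities: each mode's share is a softmax over its utility, so lowering transit travel time (raising transit utility) shifts share toward public transport. The utilities below are invented for illustration and are not the estimated Futian Street coefficients:

```python
import math

def mnl_shares(utilities):
    """Multinomial-logit choice probabilities: P_i = exp(V_i) / sum_j exp(V_j)."""
    expu = {mode: math.exp(v) for mode, v in utilities.items()}
    total = sum(expu.values())
    return {mode: e / total for mode, e in expu.items()}

# Hypothetical mode utilities before and after a transit travel-time cut;
# utility rises as travel time falls, so transit's share should increase.
before = mnl_shares({"car": -0.5, "transit": -1.0, "walk": -1.5})
after = mnl_shares({"car": -0.5, "transit": -0.7, "walk": -1.5})
print(round(after["transit"] - before["transit"], 3))
```

In the study the same logic runs through the fitted MLP instead of a hand-specified utility function, but the policy lever (changed travel times in the input features) and the output (shifted mode shares) are analogous.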

Keywords: neural network, travel characteristics analysis, transportation choice, travel sharing rate, traffic resource allocation

Procedia PDF Downloads 135
494 Study on the Prediction of Serviceability of Garments Based on the Seam Efficiency and Selection of the Right Seam to Ensure Better Serviceability of Garments

Authors: Md Azizul Islam

Abstract:

A seam is the line joining two separate fabric layers for functional or aesthetic purposes. Different kinds of seams are used for assembling the different areas or parts of a garment to increase serviceability. To empirically support the importance of seam efficiency for the serviceability of garments, this study focuses on choosing the right type of seam for particular sewn parts of garments, based on seam efficiency, to ensure better serviceability. Seam efficiency is the ratio of seam strength to fabric strength. Single jersey knitted finished fabrics of four different GSMs (grams per square meter) were used to make T-shirts as test garments. Three distinct types of seam, superimposed, lapped and flat, were applied to the side seams of the T-shirts and sewn with a lockstitch (stitch class 301) on a flat-bed plain sewing machine (maximum sewing speed: 5000 rpm) to make (3x4) 12 T-shirts. For experimental purposes, the needle thread count (50/3 Ne), bobbin thread count (50/2 Ne), stitch density (stitches per inch: 8-9), needle size (16 in the Singer system), stitch length (31 cm), and seam allowance (2.5 cm) were kept the same for all specimens. The grab test (ASTM D5034-08) was performed on a universal tensile tester to measure seam strength and fabric strength. The produced T-shirts were given to 12 soccer players, who wore the shirts for 20 soccer matches (each of 90 minutes duration). Serviceability of the shirts was measured by visual inspection on a 5-point scale based on the seam conditions. The study found that T-shirts produced with the lapped seam show better serviceability, while T-shirts made with flat seams scored lowest in serviceability. From the calculated seam efficiency (seam strength / fabric strength), it was evident that the performance (in terms of strength) of the lapped and bound seams is higher than that of the superimposed seam, and the performance of the superimposed seam is far better than that of the flat seam.
It can therefore be predicted that, to obtain a garment of high serviceability, lapped seams should be used instead of superimposed or other types of seam. In addition, less-stressed garments can be assembled with other seam types, such as superimposed or flat seams.

Keywords: seam, seam efficiency, serviceability, T-shirt

Procedia PDF Downloads 197
493 Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time

Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Iker Elorza, Ana Maria Macarulla

Abstract:

Society demands more reliable manufacturing processes capable of producing high-quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, in which Fault-Tolerant Control (FTC) plays a significant role. It is suitable for detecting, isolating and adapting a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is given, highlighting the properties a system must ensure to be considered faultless. In addition, research identifying the main FTC techniques is presented, together with a classification, based on their characteristics, into two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprises the algorithms robust enough to bypass the fault without further modification. The mentioned re-configuration requires two stages: one focused on detection, isolation and identification of the fault source, and the other in charge of re-designing the control algorithm via two approaches, fault accommodation and control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows testing the FTC algorithms and analysing how the system responds when a fault arises under conditions similar to those a machine experiences on the factory floor. One AFTC approach has been chosen as the methodology the system follows in the fault recovery process. In the first instance, the fault is detected, isolated and identified by means of a neural network; in the second, the control algorithm is re-configured to overcome the fault and continue working without human interaction.

Keywords: fault-tolerant control, electro-hydraulic actuator, fault detection and isolation, control re-design, real-time

Procedia PDF Downloads 169
492 Subjective Temporal Resources: On the Relationship Between Time Perspective and Chronic Time Pressure to Burnout

Authors: Diamant Irene, Dar Tamar

Abstract:

Burnout, conceptualized within the framework of stress research, is to a large extent the result of a threat to time resources or a feeling of time shortage. In reaction to numerous tasks, deadlines, high output, and the management of different duties encompassing work-home conflicts, many individuals experience ‘time pressure’. Time pressure is characterized as the perception of a lack of available time relative to the amount of workload. It can be a result of local objective constraints, but it can also be a chronic attribute of coping with life. As such, time pressure is associated in the literature with the general experience of stress and can therefore be a direct contributory factor in burnout. The present study examines the relation of chronic time pressure, the feeling of time shortage and of being rushed, to another central aspect of subjective temporal experience: time perspective. Time perspective is a stable personal disposition capturing the extent to which people subjectively remember the past, live in the present, and/or anticipate the future. Based on Hobfoll’s Conservation of Resources theory, it was hypothesized that individuals with chronic time pressure would experience a permanent threat to their time resources, resulting in relatively increased burnout. In addition, it was hypothesized that different time perspective profiles, based on Zimbardo’s typology of five dimensions (Past Positive, Past Negative, Present Hedonistic, Present Fatalistic, and Future), would be related to different magnitudes of chronic time pressure and of burnout. We expected that individuals with ‘Past Negative’ or ‘Present Fatalistic’ time perspectives would experience more burnout, with chronic time pressure as a moderator variable. Conversely, individuals with a ‘Present Hedonistic’ perspective, showing little concern for the future consequences of actions, were expected to experience less chronic time pressure and less burnout.
Another angle of temporal experience examined in this study is the gap between the actual distribution of time (as in a typical day) and the desired distribution of time (how time would optimally be distributed during a day). It was hypothesized that there would be a positive correlation between this gap and both chronic time pressure and burnout. Data were collected through an online self-report survey distributed on social networks, with 240 participants (aged 21-65) recruited through convenience and snowball sampling from various organizational sectors. The results of the present study support the hypotheses and constitute a basis for future debate regarding the elements of burnout in the modern work environment, with an emphasis on subjective temporal experience. Our findings point to the importance of chronic and stable temporal experiences, such as time pressure and time perspective, in occupational experience. The findings are also discussed with a view to the development of practical methods of burnout prevention.

Keywords: conservation of resources, burnout, time pressure, time perspective

Procedia PDF Downloads 171
491 Determinants of Budget Performance in an Oil-Based Economy

Authors: Adeola Adenikinju, Olusanya E. Olubusoye, Lateef O. Akinpelu, Dilinna L. Nwobi

Abstract:

Since the enactment of the Fiscal Responsibility Act (2007), the Federal Government of Nigeria (FGN) has made public its fiscal budget and the subsequent implementation reports. A critical review of these documents shows significant variations in the five macroeconomic variables that are inputs to each presidential budget: oil production target (mbpd), oil price ($), foreign exchange rate (N/$), Gross Domestic Product growth rate (%) and inflation rate (%). This results in underperformance of the Federal budget's expected output in terms of non-oil and oil revenue aggregates. This paper evaluates, first, the existing variance between budgeted and actual figures; then the relationship and causality between the determinants of the Federal fiscal budget assumptions; and finally the determinants of the FGN's gross oil revenue. The paper employs descriptive statistics, an autoregressive distributed lag (ARDL) model, and a profit-oil probabilistic model to achieve these objectives. The ARDL model allows for both the static and dynamic effects of the independent variables on the dependent variable, unlike a static model, which accounts for static or fixed effects only. It offers a technique for checking the existence of a long-run relationship between variables, unlike other tests of cointegration, such as the Engle-Granger and Johansen tests, which consider only non-stationary series integrated of the same order. Finally, even with a small sample size, the ARDL model is known to generate valid results. The results showed that there is a long-run relationship between oil revenue, as a proxy for budget performance, and its determinants: oil price, oil production quantity, and foreign exchange rate. There is also a short-run relationship between oil revenue and these determinants.
There is a long-run relationship between non-oil revenue and its determinants: inflation rate, GDP growth rate, and foreign exchange rate. The Granger causality test results show unidirectional causality between oil revenue and its determinants, and likewise between non-oil revenue and its determinants. The Federal budget assumptions explain only 68% of oil revenue and 62% of non-oil revenue. The profit-oil model identifies production sharing contracts, joint ventures, and modified carry arrangements as the greatest contributors to the FGN's gross oil revenue. This provides empirical justification for the selected macroeconomic variables used in Federal budget design and performance evaluation. The research recommends that other variables, debt and money supply, be included in the Federal budget design to further explain Federal budget revenue performance.
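The first objective, the variance between budgeted and actual figures, is simple percent-deviation arithmetic. A minimal sketch (the assumption-versus-outturn pairs below are hypothetical illustrations, not actual FGN budget figures):

```python
def variance_pct(budgeted, actual):
    """Percent deviation of the actual outturn from the budget assumption."""
    return (actual - budgeted) / budgeted * 100.0

# Hypothetical budget assumptions vs. outturns (illustrative values only).
assumptions = {
    "oil price ($/bbl)":     (57.0, 43.0),
    "oil production (mbpd)": (2.18, 1.80),
    "exchange rate (N/$)":   (305.0, 360.0),
}
for name, (budgeted, actual) in assumptions.items():
    print(name, round(variance_pct(budgeted, actual), 1))
```

Tabulating these deviations year by year is what reveals the systematic gap between budget assumptions and outturns that the ARDL and profit-oil models then try to explain.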

Keywords: ARDL, budget performance, oil price, oil quantity, oil revenue

Procedia PDF Downloads 168
490 Application of Multilinear Regression Analysis for Prediction of Synthetic Shear Wave Velocity Logs in Upper Assam Basin

Authors: Triveni Gogoi, Rima Chatterjee

Abstract:

Shear wave velocity (Vs) estimation is an important approach in seismic exploration and the characterization of a hydrocarbon reservoir. There are various methods for the prediction of S-wave velocity when a recorded S-wave log is not available, but all of them are empirical mathematical models. Shear wave velocity can be estimated from P-wave velocity by applying Castagna's equation, which is the most common approach; however, the constants used in Castagna's equation vary for different lithologies and geological settings. In this study, multiple regression analysis has been used for the estimation of S-wave velocity. The EMERGE module of the Hampson-Russell software has been used for generation of the S-wave log. Both single-attribute and multi-attribute analyses have been carried out for generation of synthetic S-wave logs in the Upper Assam basin. The Upper Assam basin, situated in North East India, is one of the most important petroleum provinces of India. The present study was carried out using four wells of the study area; S-wave velocity was available for three of these wells. The main objective of the present study is the prediction of shear wave velocities for wells where S-wave velocity information is not available. The three wells having S-wave velocity were first used to test the reliability of the method, and the generated S-wave log was compared with the actual S-wave log. Single-attribute analysis was carried out for these three wells within the depth range 1700-2100 m, which corresponds to the Barail Group of Oligocene age. The Barail Group, the primary producing reservoir of the basin, is the main target zone in this study. A system-generated list of attributes with varying degrees of correlation was produced, and the attribute with the highest correlation was chosen for the single-attribute analysis. Crossplots between the attributes show the deviation of points from the line of best fit.
The final result of the analysis was compared with the available S-wave log, showing a good visual fit with a correlation of 72%. Next, multi-attribute analysis was carried out on the same data using all the wells within the same analysis window. A high correlation of 85% was observed between the output log from the analysis and the recorded S-wave log. The near-perfect fit between the synthetic and recorded S-wave logs validates the reliability of the method. For further authentication, the generated S-wave data from the wells were tied to the seismic data and correlated. A synthetic shear wave log was generated for well M2, where the S-wave log is not available, and it shows a good correlation with the seismic data. Neutron porosity, density, acoustic impedance (AI) and P-wave velocity proved to be the most significant variables in this statistical method for S-wave generation. The multilinear regression method can thus be considered a reliable technique for the generation of shear wave velocity logs in this study area.
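For comparison, the conventional baseline the study improves upon can be sketched directly. The coefficients below are the published Castagna mudrock-line constants; as the abstract notes, basin-specific lithologies generally require recalibrated constants, which motivates the multi-attribute regression:

```python
def castagna_vs(vp_km_s):
    """Castagna mudrock line: Vs = 0.8621 * Vp - 1.1724 (velocities in km/s).

    These are the generic mudrock-line constants, not values fitted to the
    Upper Assam Barail Group.
    """
    return 0.8621 * vp_km_s - 1.1724

# A P-wave velocity of 3.0 km/s maps to roughly 1.41 km/s shear velocity.
print(round(castagna_vs(3.0), 4))
```

A multilinear regression generalizes this single-predictor line to several well-log attributes (neutron porosity, density, AI, Vp), with coefficients fitted to the wells where Vs was actually recorded.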

Keywords: Castagna's equation, multilinear regression, multi-attribute analysis, shear wave logs

Procedia PDF Downloads 222
489 Development of Bioplastic Disposable Food Packaging from Starch and Cellulose

Authors: Lidya Hailu, Ramesh Duraisamy, Masood Akhtar Khan, Belete Yilma

Abstract:

Disposable food packaging comprises single-use plastic items designed to be used only once. In this context, this study aimed to prepare and evaluate a bioplastic food packaging material from avocado seed starch and sugarcane bagasse cellulose and to characterise the avocado seed starch. The physicomechanical, structural, and thermal properties and the biodegradability of the raw materials and the prepared bioplastic were determined using a universal tensile testing machine, FTIR, UV-Vis spectroscopy, TGA, XRD, and SEM. Results showed that increasing the amount of glycerol (3-5 mL) increased the water absorption, density, water vapor permeability, and elongation at break of the prepared bioplastic, while decreasing its % transmittance, thermal degradation temperature, and tensile strength. Likewise, the addition of cellulose fiber (0-15 %) increased the density (0.93±0.04-1.27±0.02 g/cm3), thermal degradation temperature (310.01-321.61°C), and tensile strength (2.91±6.18-4.21±6.713 MPa) of the prepared bioplastic, while decreasing its % transmittance (91.34±0.12-63.03±0.05 %), water absorption (14.4±0.25-9.40±0.007 %), water vapor permeability (9.306x10-12±0.3-3.57x10-12±0.15 g•s−1•m−1•Pa−1), and elongation at break (34.46±3.37-27.63±5.67 %). All the prepared bioplastic films degraded rapidly in soil within the first 6 days, decomposed within 12 days with a diminutive leftover, and degraded completely within 15 days under an open soil atmosphere. The results showed that starch-derived bioplastic reinforced with 15 % cellulose fiber and plasticized with 3 mL of glycerol performed better than the other combinations of glycerol and bagasse cellulose with avocado seed starch. Thus, a biodegradable disposable food packaging cup was successfully produced at the lab scale using the studied approach.
In summary, biodegradable disposable food packaging materials were successfully produced from avocado seed starch and sugarcane bagasse cellulose. Future work should address nanoscale production, since this study was carried out at the micro level.

Keywords: avocado seed, food packaging, glycerol, sugarcane bagasse

Procedia PDF Downloads 333
488 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that, over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with non-binary gender pronouns and how machines can process them correctly, given their semantic and syntactic context. It also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies.
A society progresses hand in hand not only with its language but with how machines process that language. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
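As a toy illustration of the kind of corpus analysis described above, the sketch below counts which adjectives co-occur with gendered terms. The three-sentence corpus, the word lists, and the window size are all hypothetical and far smaller than the millions of books and richer models the study draws on.

```python
import re
from collections import Counter

# Toy corpus and lexicons (all hypothetical stand-ins).
corpus = (
    "the hysterical woman wept while the brilliant man spoke . "
    "a shrill woman argued ; the decisive man led . "
    "the brilliant woman taught and the stubborn man refused ."
)
female_terms = {"woman", "she", "her"}
male_terms = {"man", "he", "his"}
adjectives = {"hysterical", "shrill", "brilliant", "decisive", "stubborn"}

tokens = re.findall(r"[a-z]+", corpus.lower())

def adjacent_adjectives(tokens, targets, window=2):
    """Count adjectives appearing within `window` tokens of a target word."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok in targets:
            lo, hi = max(0, i - window), i + window + 1
            for t in tokens[lo:hi]:
                if t in adjectives:
                    counts[t] += 1
    return counts

print("near female terms:", adjacent_adjectives(tokens, female_terms))
print("near male terms:", adjacent_adjectives(tokens, male_terms))
```

A real analysis would replace the co-occurrence counts with embedding-based association scores and a curated derogatory-adjective lexicon, but the skew being measured is the same.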

Keywords: computational analysis, gendered grammar, misogynistic language, neural networks

Procedia PDF Downloads 116
487 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage, and heat transfer between the various phases in the blast furnace hearth, for a stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed the critical limit at which the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in blast furnace operation; it should be kept at an optimal level to obtain the desired product quality and a stable performance. Direct measurement of these quantities is not possible due to the hostile conditions in the hearth, with its chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal and slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, and slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the hearth is considered a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed to be thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake in the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids through it is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace, and the erosion behavior of the tap hole itself.
Heat transfer equations describe the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out that provides real-time information on liquid accumulation in the hearth before and during tapping, gives the drainage rate and its variation, predicts critical event timings during tapping, and reports the expected tapping temperature of metal and slag at preset time intervals. The model is in use at BF-II of JSPL, India, and its output is regularly cross-checked against actual tapping data, with which it is in good agreement.
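A minimal sketch of such a dynamic simulation, reduced to the liquid-level mass balance for a single phase: all parameter values below (hearth area, bed porosity, production rate, tap-hole coefficient, gas head) are illustrative placeholders, not plant data, and the real model additionally tracks slag and all the heat-transfer terms.

```python
import math

# All parameter values are illustrative placeholders, not plant data.
AREA = 80.0        # hearth cross-section available to liquid (m^2)
POROSITY = 0.35    # void fraction of the coke bed
Q_IN = 7.0         # liquid metal production rate (m^3/h)
C_TAP = 25.0       # lumped tap-hole discharge coefficient (m^2.5/h)
GAS_HEAD = 0.15    # extra driving head from furnace gas pressure (m)
DT = 0.01          # Euler time step (h)

def simulate(hours, tap_open_at, h0=0.5):
    """Euler integration of dh/dt = (Q_in - Q_out) / (AREA * POROSITY).

    Before the tap is opened the hearth only accumulates liquid; once
    open, the outflow is modelled Torricelli-style as
    Q_out = C_TAP * sqrt(h + GAS_HEAD), i.e. driven by the liquid level
    plus the gas-pressure head.
    """
    h, t, history = h0, 0.0, []
    while t < hours:
        q_out = C_TAP * math.sqrt(max(h, 0.0) + GAS_HEAD) if t >= tap_open_at else 0.0
        h = max(h + DT * (Q_IN - q_out) / (AREA * POROSITY), 0.0)
        history.append((t, h))
        t += DT
    return history

hist = simulate(hours=6.0, tap_open_at=2.0)
peak = max(h for _, h in hist)
final = hist[-1][1]
print(f"peak liquid level {peak:.2f} m, level at end of run {final:.2f} m")
```

Even this toy version reproduces the qualitative behavior the abstract describes: the level rises until the tap is opened, then drains at a rate that itself depends on the accumulated level and the gas pressure.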

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 183
486 The Capacity of Bolted and Screw Connections in Cold-Formed Steel Truss Structure through Analytical and Experimental Method

Authors: Slamet Setioboro, Rahutami Kusumaningsih, Prabowo Setiyawan, Danna Darmayadi

Abstract:

The design of cold-formed steel connection capacity is often based on formulas developed for hot-rolled steel, so the computed capacity no longer reflects the actual capacity of the connection: cold-formed steel behaves differently from hot-rolled steel under axial tensile load. As a result, truss structures made of cold-formed steel but designed with hot-rolled steel formulas are prone to failure. This research aims to determine the actual capacity of cold-formed steel connections loaded by an axial tensile force, by tensile-testing connections made with bolts and with screws. The test variations cover the type of connection (single and double lap), the number of fasteners, and the connection configuration. The failure modes of bolted and screw connections observed in this research differ from each other: bolted connections fail by slip of the plate, tearing of the plate, and shearing of the bolt head, while screw connections fail by tilting, hole bearing, pull-over, and shearing of the screw shank. The research was conducted with laboratory tests on an HW2-600S Universal Testing Machine following ASTM E8, in the materials testing laboratory of the Mechanical Engineering Department, Faculty of Engineering, UNNES. The laboratory results were compared with theoretical calculations using the provisions of the SNI 7971:2013 standard for cold-formed steel structures. Based on the research, it can be concluded that the most effective connection in transferring force is the bolted connection, whether single- or double-plate, using 4 bolts in a configuration of 2 parallel lines. This connection sustains the highest maximum load (Pmax), carries the lowest risk of failure, and exhibits few failure modes.

Keywords: axial load, cold-formed steel, capacity connections, bolted connections, screw connections

Procedia PDF Downloads 274
485 Design Optimisation of a Novel Cross Vane Expander-Compressor Unit for Refrigeration System

Authors: Y. D. Lim, K. S. Yap, K. T. Ooi

Abstract:

In recent years, environmental issues have been a hot topic worldwide, especially the global warming effect caused by conventional, non-environmentally friendly refrigerants. Several studies of more energy-efficient and environmentally friendly refrigeration systems have been conducted in order to tackle the issue. In the search for a better refrigeration system, the CO2 refrigeration system has been proposed as a better option. However, the high throttling loss involved in the expansion process of the refrigeration cycle leads to a relatively low efficiency, making the system impractical. To improve the efficiency of the refrigeration system, it has been suggested to replace the conventional expansion valve with an expander. Based on this idea, a new type of combined expander-compressor unit, named the Cross Vane Expander-Compressor (CVEC), was introduced to replace the compressor and the expansion valve of a conventional refrigeration system. A mathematical model was developed to calculate the performance of CVEC, and it was found that the machine is capable of reducing the energy consumption of a refrigeration system by as much as 18%. Apart from the energy saving, CVEC is also geometrically simpler and more compact. To further improve its efficiency, an optimization study of the device was carried out. In this report, several design parameters of CVEC were chosen as the variables of the optimization study. The optimization was done in a simulation program using the complex optimization method, a direct-search, multi-variable, constrained optimization method. It was found that the shaft radius, a main design parameter, was reduced by around 8%, while the inner cylinder radius remained unchanged at its lower limit after optimization. Furthermore, the port sizes were increased to their upper limits after optimization.
The changes in these design parameters resulted in a reduction of around 12% in the total frictional loss and of 4% in power consumption. Overall, the optimization study yielded a 4% improvement in the mechanical efficiency of CVEC and a 6% improvement in COP.
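The complex (Box) optimization method named above can be sketched in a few lines: a bound-constrained direct search that repeatedly reflects the worst point of a point set (the "complex") through the centroid of the remaining points. The objective below is a stand-in quadratic in two hypothetical design variables, not the actual CVEC performance model.

```python
import random

def box_complex_min(f, lower, upper, n_pts=None, alpha=1.3, iters=200, seed=0):
    """Minimal Box complex method for bound-constrained minimization.

    At each step the worst point is reflected through the centroid of
    the others by factor alpha and clipped back into the bounds; if it
    is still the worst, it is pulled halfway toward the centroid.
    """
    rng = random.Random(seed)
    dim = len(lower)
    n_pts = n_pts or 2 * dim
    pts = [[rng.uniform(lower[i], upper[i]) for i in range(dim)]
           for _ in range(n_pts)]
    for _ in range(iters):
        vals = [f(p) for p in pts]
        worst = max(range(n_pts), key=vals.__getitem__)
        centroid = [sum(p[i] for j, p in enumerate(pts) if j != worst) / (n_pts - 1)
                    for i in range(dim)]
        new = [min(max(centroid[i] + alpha * (centroid[i] - pts[worst][i]),
                       lower[i]), upper[i]) for i in range(dim)]
        while f(new) >= vals[worst]:
            new = [(new[i] + centroid[i]) / 2 for i in range(dim)]
            # stop contracting once the point collapses onto the centroid
            if max(abs(new[i] - centroid[i]) for i in range(dim)) < 1e-12:
                break
        pts[worst] = new
    return min(pts, key=f)

# Stand-in objective: a frictional-loss-like bowl over two hypothetical
# design variables (e.g. a shaft radius and a port size, in metres),
# with its minimum at (0.03, 0.08) inside the bounds.
loss = lambda x: (x[0] - 0.03) ** 2 + (x[1] - 0.08) ** 2
best = box_complex_min(loss, lower=[0.01, 0.02], upper=[0.05, 0.10])
print(best)
```

Like the study's setup, the bounds act as the design constraints, which is why an optimized variable can legitimately finish at its lower or upper limit.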

Keywords: complex optimization method, COP, cross vane expander-compressor, CVEC, design optimization, direct search, energy saving, improvement, mechanical efficiency, multi variables

Procedia PDF Downloads 370