Search results for: rice processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4238

3788 Characterization of Heterotrimeric G Protein α Subunit in Tomato

Authors: Thi Thao Ninh, Yuri Trusov, José Ramón Botella

Abstract:

Heterotrimeric G proteins, composed of three subunits, α, β and γ, are involved in signal transduction pathways that mediate a vast number of processes across the eukaryotic kingdom. Humans possess 23 Gα subunits, whereas most plant genomes encode only one canonical Gα. The disparity observed between Arabidopsis, rice, and maize Gα-deficient mutant phenotypes suggests that Gα functions have diversified between eudicots and monocots during evolution. Alternatively, since the only Gα mutations available in dicots have been produced in Arabidopsis, the possibility exists that this species might be an exception to the rule. In order to test this hypothesis, we studied the G protein α subunit (TGA1) in tomato. Four tga1 knockout lines were generated in the tomato cultivar Moneymaker using CRISPR/Cas9. The tga1 mutants exhibit a number of auxin-related phenotypes, including changes in leaf shape and reductions in plant height, fruit size and number of seeds per fruit. In addition, tga1 mutants show increased sensitivity to abscisic acid during seed germination and reduced sensitivity to exogenous auxin during adventitious root formation from cotyledons and excised hypocotyl explants. Our results suggest that Gα mutant phenotypes in tomato are very similar to those observed in monocots, i.e. rice and maize, and cast doubt on the validity of using Arabidopsis as a model system for plant G protein studies.

Keywords: auxin-related phenotypes, CRISPR/Cas9, G protein α subunit, heterotrimeric G proteins, tomato

Procedia PDF Downloads 136
3787 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today’s modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. These data come from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices, which use different protocols, including TCP, UDP, and HTTP/s, to communicate with web servers and eventually with users. The data obtained from these devices may provide valuable information, but are mostly in an unreadable format that needs to be processed to provide information and business intelligence. These data are not always current; they are mostly historical, and they are not subject to the consistency and redundancy measures that most other data usually are. Most important to the users is that the data be pre-processed into a readable format when entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since the processing of the data usually takes some time, this keeps the database busy and locked for the duration of the processing, which decreases the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU usage, storage, and processing time.
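
The three-step idea can be illustrated with a minimal sketch, assuming a SQLite table named raw_messages with id, payload, and decoded columns; the table, the column names, and the decode step are hypothetical placeholders, not the authors' implementation. Rows are pulled into an in-memory list, processed outside the database so no locks are held, and pushed back in one short write.

```python
import sqlite3

def decode(payload):
    # Placeholder for the device-specific decoding logic (e.g., GPS sentences)
    return payload.upper()

def pull_process_push(db_path):
    conn = sqlite3.connect(db_path)

    # Step 1 (pull): read the raw rows into an in-memory list and release the table
    raw_rows = list(conn.execute("SELECT id, payload FROM raw_messages"))

    # Step 2 (process): decode outside the database, so no locks are held meanwhile
    processed = [(decode(payload), row_id) for row_id, payload in raw_rows]

    # Step 3 (push): write the readable results back in one short transaction
    with conn:
        conn.executemany("UPDATE raw_messages SET decoded = ? WHERE id = ?", processed)
    conn.close()
```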

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 244
3786 Quantitative Analysis of Multiprocessor Architectures for Radar Signal Processing

Authors: Deepak Kumar, Debasish Deb, Reena Mamgain

Abstract:

Radar signal processing requires substantial number-crunching capability, which is most often achieved using a multiprocessor platform. Though a multiprocessor platform provides the capability to meet real-time computational challenges, its architecture, along with the mapping of the algorithm onto that architecture, plays a vital role in using the platform efficiently. Towards this, along with standard performance metrics, a few additional metrics are defined which help in evaluating the multiprocessor platform together with the algorithm mapping. A generic multiprocessor architecture cannot suit all processing requirements; depending on the system requirements and the type of algorithms used, the most suitable architecture for the given problem is decided. In this paper, we study different architectures and quantify the various performance metrics, which enables a comparison of the architectures on their merits. We also carried out case studies of different architectures and their efficiency depending on whether parallelism is exploited in the algorithm, in the data, or in both.
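
As a simple illustration of two of the metrics named in the keywords, the sketch below computes parallel efficiency and a load-imbalance ratio under commonly used definitions (speedup per processor, and slowest-to-average load); these formulas are assumptions for illustration and not necessarily the exact metrics defined in the paper.

```python
def parallel_efficiency(serial_time, parallel_time, n_processors):
    # Speedup divided by the number of processors (1.0 = ideal scaling)
    return (serial_time / parallel_time) / n_processors

def load_imbalance(per_processor_times):
    # Ratio of the slowest processor's load to the average load; 1.0 is perfectly balanced
    avg = sum(per_processor_times) / len(per_processor_times)
    return max(per_processor_times) / avg

print(parallel_efficiency(8.0, 1.25, 8))      # 0.8
print(load_imbalance([1.0, 1.2, 0.9, 1.3]))   # ~1.18
```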

Keywords: radar signal processing, multiprocessor architecture, efficiency, load imbalance, buffer requirement, pipeline, parallel, hybrid, cluster of processors (COPs)

Procedia PDF Downloads 412
3785 The Influence of Concreteness on English Compound Noun Processing: Modulation of Constituent Transparency

Authors: Turgut Coskun

Abstract:

'Concreteness effect' refers to the faster processing of concrete words, and 'compound facilitation' refers to a faster response to compounds. In this study, our main goal was to investigate the interaction between compound facilitation and the concreteness effect; the latter might modulate compound processing based on the constituents’ transparency patterns. To evaluate these, we created lists of compound and monomorphemic words, sub-categorized them into concrete and abstract words, and further sub-categorized them based on their transparency. The transparency conditions were opaque-opaque (OO), transparent-opaque (TO), and transparent-transparent (TT). We used RT data from the English Lexicon Project (ELP) for our comparisons. The results showed the importance of the concreteness factor (facilitation) in both compound and monomorphemic processing. Important for our present concern, separate analyses of concrete and abstract compounds revealed different patterns for OO, TO, and TT compounds. Concrete TT and TO conditions were processed faster than concrete OO, abstract OO and abstract TT compounds; however, they were not processed faster than abstract TO compounds. These results may reflect different representation patterns for concrete and abstract compounds.

Keywords: abstract word, compound representation, concrete word, constituent transparency, processing speed

Procedia PDF Downloads 198
3784 Arousal, Encoding, and Intrusive Memories

Authors: Hannah Gutmann, Rick Richardson, Richard Bryant

Abstract:

Intrusive memories following a traumatic event are not uncommon. However, in some individuals, these memories become maladaptive and lead to prolonged stress reactions. A seminal model of PTSD holds that aberrant processing during trauma may lead to prolonged stress reactions and intrusive memories: elevated arousal at the time of the trauma promotes data-driven processing, leading to fragmented and intrusive memories. This study investigated the role of elevated arousal in the development of intrusive memories. We measured salivary markers of arousal and investigated the impact this had on data-driven processing, memory fragmentation, and subsequently the development of intrusive memories. We assessed 100 healthy participants to understand their processing style, arousal, and experience of intrusive memories. Participants were randomised to a control or an experimental condition, the latter of which was designed to increase their arousal. Based on current theory, participants in the experimental condition were expected to engage in more data-driven processing and experience more intrusive memories than participants in the control condition. This research aims to shed light on the mechanisms underlying the development of intrusive memories and to illustrate ways in which therapeutic approaches for PTSD may be augmented for greater efficacy.

Keywords: stress, cortisol, SAA, PTSD, intrusive memories

Procedia PDF Downloads 197
3783 Reactive Learning about Food Waste Reduction in a Food Processing Plant in Gauteng Province, South Africa

Authors: Nesengani Elelwani Clinton

Abstract:

This paper presents reflective learning as an opportunity commonly available and used for food waste learning in a food processing company in the transition to sustainable and just food systems. In addressing how employees learn about food waste during food processing, the opportunities available for food waste learning were investigated. Reflective learning appeared to be the most used approach: it typically occurred as a response after employees had wasted a substantial amount of food, when process controllers and team leaders would highlight the issue to the employees concerned and explain how food waste could be reduced. This showed that learning about food waste is not proactive, and there continues to be a lack of structured learning around food waste. Several challenges to reflective learning about food waste were highlighted, including language barriers, lack of interest from employees, set times to reach production targets, and working pressures. These challenges were reported to hinder food waste learning, which remains unstructured. A need was therefore identified for proactive learning through structured methods, particularly because the signage and posters present in the plant where food processing takes place relate only to other sustainability issues, such as food safety and health, indicating low levels of awareness about food waste. This paper therefore argues that food waste learning should be proactive. A proactive approach should include structured learning materials on food waste during food processing; the trainers delivering these materials should be multilingual, so that employees who do not understand English can learn in their own language; and there should be signage and posters about food waste in the food processing plant. This would raise awareness of food waste, influence employees' behaviour, and thus enable the transition to a just and sustainable food system.

Keywords: sustainable and just food systems, food waste, food waste learning, reflective learning approach

Procedia PDF Downloads 130
3782 Detecting and Disabling Digital Cameras Using D3CIP Algorithm Based on Image Processing

Authors: S. Vignesh, K. S. Rangasamy

Abstract:

The paper deals with a device capable of detecting and disabling digital cameras. The system locates the camera and then neutralizes it. Every digital camera has an image sensor known as a CCD, which is retro-reflective and sends light back directly to its original source at the same angle. The device shines infrared LED light, which is invisible to the human eye, at a distance of about 20 feet and collects video of the reflections with a camcorder. The video of the reflections is then transferred to a computer connected to the device, where it is passed through image processing algorithms that pick out the infrared light bouncing back. Once the camera is detected, the device projects an invisible infrared laser into the camera's lens, thereby overexposing the photo and rendering it useless. Low levels of infrared laser neutralize digital cameras but pose neither a health danger to humans nor physical damage to cameras. We also discuss a simplified design of the above device that can be used in theatres to prevent piracy. The domains covered here are optics and image processing.
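
The reflection-detection step can be sketched with standard OpenCV calls as below. This is only an illustrative threshold-and-contour approach with assumed brightness and area cut-offs, not the D3CIP algorithm itself.

```python
import cv2

def find_reflections(frame_gray, min_brightness=220, min_area=4):
    # Bright, compact spots in an IR-illuminated frame are candidate CCD retro-reflections
    _, mask = cv2.threshold(frame_gray, min_brightness, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    spots = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            spots.append((x + w // 2, y + h // 2))  # centre of each candidate reflection
    return spots
```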

Keywords: CCD, optics, image processing, D3CIP

Procedia PDF Downloads 357
3781 Security in Resource Constraints: Network Energy Efficient Encryption

Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy

Abstract:

Wireless nodes in a sensor network gather and process critical information and are designed to process and communicate it. The information flowing through such a network is critical for decision making and data processing, and the integrity of that data is one of the most critical factors in wireless security; it must be ensured without compromising the processing and transmission capability of the network. This paper presents a mechanism to securely transmit data over a chain of sensor nodes without compromising the throughput of the network, utilizing the battery resources available at the sensor node.

Keywords: hybrid protocol, data integrity, lightweight encryption, neighbor based key sharing, sensor node data processing, Z-MAC

Procedia PDF Downloads 145
3780 Image Processing and Calculation of NGRDI Embedded System in Raspberry

Authors: Efren Lopez Jimenez, Maria Isabel Cajero, J. Irving-Vasqueza

Abstract:

The use and processing of digital images have opened up new opportunities for solving problems of various kinds, such as the calculation of different vegetation indexes, which, among other things, differentiate healthy vegetation from humid vegetation. However, obtaining the images from which these indexes are calculated is still the subject of active research. In the present work, we propose to obtain these images using a low-cost embedded system (Raspberry Pi) and to process them using the open-source library OpenCV in order to obtain the Normalized Red-Green Difference Index (NGRDI).
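
A minimal sketch of the index computation with OpenCV is shown below, assuming the usual per-pixel form NGRDI = (G - R) / (G + R) on the green and red channels; the acquisition pipeline on the Raspberry Pi camera is not reproduced here.

```python
import cv2
import numpy as np

def ngrdi(image_path):
    # OpenCV loads images in BGR channel order
    img = cv2.imread(image_path).astype(np.float32)
    b, g, r = cv2.split(img)
    # NGRDI = (G - R) / (G + R); the small epsilon avoids division by zero
    return (g - r) / (g + r + 1e-6)
```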

Keywords: Raspberry Pi, vegetation index, Normalized Red-Green Difference Index (NGRDI), OpenCV

Procedia PDF Downloads 291
3779 Utilization of Rice Husk Ash with Clay to Produce Lightweight Coarse Aggregates for Concrete

Authors: Shegufta Zahan, Muhammad A. Zahin, Muhammad M. Hossain, Raquib Ahsan

Abstract:

Rice Husk Ash (RHA) is an agricultural waste byproduct widely available around the world that contains a large amount of silica. In Bangladesh, stones cannot be used as coarse aggregate in infrastructure works, as they are not locally available and need to be imported from abroad. As a result, bricks are mostly used as coarse aggregate in concrete, as they are cheaper and easily produced there. Clay is the raw material for producing brick. Due to rapid urban growth and the industrial revolution, demand for brick is increasing, which leads to a decrease in topsoil. This study aims to produce lightweight block aggregates with sufficient strength, utilizing RHA at low cost, and to use them as an ingredient of concrete. RHA, because of its pozzolanic behavior, can be utilized to produce better-quality block aggregates at lower cost by replacing part of the clay content of the bricks. The study can be divided into three parts. In the first part, characterization tests on RHA and clay were performed to determine their properties. Six different types of RHA from different mills were characterized by XRD and SEM analysis, and their fineness was determined by a fineness test. The XRD results confirmed the amorphous state of the RHA. The characterization test for clay identified the sample as “silty clay” with a specific gravity of 2.59 and 14% optimum moisture content. In the second part, blocks were produced with the six types of RHA in different combinations by volume with clay. The mixtures were manually compacted in molds before being oven-dried at 120 °C for 7 days. The dried blocks were then placed in a furnace at 1200 °C to produce the final blocks. Loss on ignition, apparent density, crushing strength, efflorescence, and absorption tests were conducted on the blocks to compare their performance with bricks. For 40% RHA, the crushing strength was found to be 60 MPa, whereas the crushing strength of brick was 48.1 MPa. In the third part, the crushed blocks were used as coarse aggregate in concrete cylinders and compared with brick-aggregate concrete cylinders. Specimens were cured for 7 days and 28 days. The highest compressive strength of the block cylinders was 26.1 MPa after 7 days of curing and 34 MPa after 28 days, whereas for the brick cylinders the compressive strength after 7 and 28 days of curing was 20 MPa and 30 MPa, respectively. These findings can help ease the increasing demand for topsoil and also turn a waste product into a valuable one.

Keywords: characterization, furnace, pozzolanic behavior, rice husk ash

Procedia PDF Downloads 107
3778 Standard and Processing of Photodegradable Polyethylene

Authors: Nurul-Akidah M. Yusak, Rahmah Mohamed, Noor Zuhaira Abd Aziz

Abstract:

The introduction of degradable plastic materials into the agricultural sector represents a promising alternative to promote green agriculture and environmentally friendly modern farming practices. The major challenges in developing degradable agricultural films are identifying the most feasible types of degradation mechanisms, the composition of the degradable polymers, and the related processing techniques. An incorrect choice of degradation mechanism will cause premature loss of mechanical performance and strength. In order to achieve a controlled degradation process, the composition of the degradable agricultural film is also important, so as to trigger the degradation reaction at the required interval of time and to achieve the sustainability of modern agricultural practices. A set of photodegradable polyethylene-based agricultural films was developed and produced, following the selective optimization of the processing parameters of the agricultural film manufacturing system. An example of the application of the agricultural films for oil palm seedling cultivation is presented.

Keywords: photodegradable polyethylene, plasticulture, processing schemes

Procedia PDF Downloads 518
3777 Thermomechanical Processing of a CuZnAl Shape-Memory Alloy

Authors: Pedro Henrique Alves Martins, Paulo Guilherme Ferreira De Siqueira, Franco De Castro Bubani, Maria Teresa Paulino Aguilar, Paulo Roberto Cetlin

Abstract:

Cu-based shape-memory alloys (CuZnAl, CuAlNi, CuAlBe, etc.) are promising engineering materials for several unconventional devices, such as sensors, actuators, and mechanical vibration dampers. Brittleness is one of the factors that limit the commercial use of these alloys, as it makes thermomechanical processing difficult. In this work, a method for the hot extrusion of a 75.50% Cu, 16.74% Zn, 7.76% Al (weight %) alloy is presented. The effects of the thermomechanical processing on the microstructure and the pseudoelastic behavior of the alloy are assessed by optical metallography, compression tests and hardness tests. Results show that hot extrusion is a suitable method to obtain severe cross-section reductions in the CuZnAl shape-memory alloy studied. The alloy maintained its pseudoelastic effect after extrusion, and the modifications in mechanical behavior caused by precipitation during hot extrusion can be minimized by a suitable precipitate-dissolution heat treatment.

Keywords: hot extrusion, pseudoelastic, shape-memory alloy, thermomechanical processing

Procedia PDF Downloads 374
3776 Language Processing of Seniors with Alzheimer’s Disease: From the Perspective of Temporal Parameters

Authors: Lai Yi-Hsiu

Abstract:

The present paper examines the language processing of Chinese-speaking seniors with Alzheimer’s disease (AD) from the perspective of temporal cues. Twenty healthy adults, 17 healthy seniors, and 13 seniors with AD in Taiwan participated in this study, telling stories based on two sets of pictures. Nine temporal cues were extracted and analyzed. Oral productions in Mandarin Chinese were compared and discussed to examine to what extent and in what ways the three groups of participants differed significantly. Results indicated that age effects were significant for filled pauses. Dementia effects were significant for the mean duration of pauses, empty pauses, filled pauses, lexical pauses, and the normalized mean duration of filled pauses and lexical pauses. The findings help characterize the nature of language processing in seniors with or without AD and contribute to understanding the interactions between AD neural mechanisms and temporal parameters.

Keywords: language processing, Alzheimer’s disease, Mandarin Chinese, temporal cues

Procedia PDF Downloads 446
3775 High Speed Image Rotation Algorithm

Authors: Hee-Choul Kwon, Hyungjin Cho, Heeyong Kwon

Abstract:

Image rotation is one of the main pre-processing steps in image processing and image pattern recognition. It is conventionally implemented with rotation matrix multiplication; however, this requires many floating-point arithmetic operations and trigonometric function calculations, so it has a long execution time. We propose a new high-speed image rotation algorithm that avoids these two major time-consuming operations. We compare the proposed algorithm with the conventional rotation algorithm on images of various sizes. Experimental results show that the proposed algorithm is superior to the conventional one.
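
For reference, the conventional matrix-based baseline the paper improves on can be sketched in a few OpenCV calls: the 2x3 rotation matrix contains the sine/cosine terms, and the affine warp performs the per-pixel floating-point multiplication. The proposed algorithm itself is not shown here.

```python
import cv2

def rotate_conventional(img, angle_deg):
    # Conventional approach: build a 2x3 rotation matrix (uses sin/cos internally)
    # and apply it to every pixel via an affine warp (floating-point arithmetic).
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, M, (w, h))
```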

Keywords: high speed rotation operation, image processing, image rotation, pattern recognition, transformation matrix

Procedia PDF Downloads 506
3774 Pre-Processing of Ultrasonography Image Quality Improvement in Cases of Cervical Cancer Using Image Enhancement

Authors: Retno Supriyanti, Teguh Budiono, Yogi Ramadhani, Haris B. Widodo, Arwita Mulyawati

Abstract:

Cervical cancer is a leading cause of cancer-related mortality. In diagnosis, doctors usually perform several tests to determine the presence of cervical cancer in a patient; however, these checks require supporting equipment to obtain more detailed results, one example being ultrasonography. In developing countries, most of the existing ultrasonography equipment has low resolution. The goal of this research is to detect abnormalities in low-resolution ultrasound images, especially for cervical cancer cases. In this paper, we emphasize the use of image enhancement as a pre-processing step for image quality improvement. The results show that the pre-processing stage is promising for supporting further analysis.
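
As one example of an enhancement step of this kind, the sketch below applies contrast-limited adaptive histogram equalization (CLAHE) with OpenCV; the specific enhancement technique and parameters used by the authors are not stated in the abstract, so this is an assumed, illustrative choice.

```python
import cv2

def enhance_ultrasound(path):
    # Load the scan as grayscale and stretch local contrast with CLAHE,
    # one common histogram-based enhancement technique for low-contrast images.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)
```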

Keywords: cervical cancer, mortality, low-resolution, image enhancement

Procedia PDF Downloads 636
3773 Synergistic Effect of Zr-Modified Cu-ZnO-Al₂O₃ and Bio-Templated HZSM-5 Catalysts in CO₂ Hydrogenation to Methanol and DME

Authors: Abrar Hussain, Kuen-Song Lin, Sayed Maeen Badshah, Jamshid Hussain

Abstract:

The conversion of CO₂ into versatile, useful compounds such as fuels and other chemicals remains a challenging frontier in research, demanding the innovation of increasingly effective catalysts. In the present work, a catalyst incorporating a zirconium (Zr) modification within CuO–ZnO–Al₂O₃ (CZA) was synthesized via a co-precipitation method to convert CO₂ into methanol. Furthermore, bio-HZSM-5 was used to promote methanol dehydration to produce dimethyl ether (DME). We prepared the porous, hierarchical bio-HZSM-5 with remarkable pore connectivity by utilizing an economical loofah sponge and rice husks as biotemplates. The synthesized catalysts were characterized using field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD), N₂ adsorption (BET), temperature-programmed desorption of ammonia (NH₃-TPD) and thermogravimetric analysis (TGA). The Zr addition improved the performance of the CZZA catalyst as a structural promoter, increasing DME selectivity and total carbon conversion by enhancing the active sites, the surface area, and the synergistic interfaces between CuO and ZnO. The silicon present in the biomass, notably in the loofah sponge (0.016 wt%) and rice husks (8.3 wt%), also played a pivotal role in the preparation of the bio-HZSM-5. Furthermore, in contrast to the CZZA/com-ZSM-5 catalyst, the bifunctional CZZA/bio-HZSM-5-L catalyst achieved the highest DME yield (12.1%), DME selectivity (58.6%) and CO₂ conversion (22.5%) at 280 °C and 30 bar. The payback time for 5- and 10-tons-per-day (5- and 10-TPD) DME production using the catalytic conversion of CO₂ from petrochemical refinery waste gas emissions was calculated as 2.98 and 2.44 years, respectively.

Keywords: cost assessment, dimethyl ether, low-cost bio-HZSM-5, CZZA catalyst, CO₂ hydrogenation

Procedia PDF Downloads 10
3772 Speech Motor Processing and Animal Sound Communication

Authors: Ana Cleide Vieira Gomes Guimbal de Aquino

Abstract:

Sound communication is present in most vertebrates, from fish (mainly in species that live in murky waters) to some species of reptiles, anuran amphibians, birds, and mammals, including primates. There are, in fact, relevant similarities between human language and animal sound communication, and among these similarities are the vocalizations called calls. The first specific call in human babies is crying, which has a characteristic prosodic contour and is motivated most of the time by the need for food; it shapes the infant-caregiver interaction, communicating needs and food requests and helping to guarantee the survival of the species. The present work aims to articulate speech processing in the motor context with aspects of the project entitled 'Emotional States and Vocalization: A Comparative Study of the Prosodic Contours of Crying in Human and Non-Human Animals'. First, concepts of speech motor processing and general aspects of speech evolution are presented in order to relate these two approaches to animal sound communication.

Keywords: speech motor processing, animal communication, animal behaviour, language acquisition

Procedia PDF Downloads 89
3771 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs

Authors: H. M. Soroush

Abstract:

The problem of scheduling products and services for on-time delivery is of paramount importance in today’s competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers must frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery in order to improve system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard; however, we explore three scenarios wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time and introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well, yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single-machine models.
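
To make the objective concrete, the sketch below evaluates the expected weighted number of tardy jobs for a given sequence under scenario (ii), assuming independent normally distributed processing times and an earliest-due-date ordering; the distributional assumption and the EDD rule are illustrative only and are not the heuristics proposed in the paper.

```python
import math

def expected_weighted_tardy(sequence):
    """E[weighted number of tardy jobs] for a fixed job sequence, assuming
    independent normal processing times and deterministic due dates.
    Each job is a tuple (mean, variance, due_date, weight)."""
    mu_sum, var_sum, expected = 0.0, 0.0, 0.0
    for mean, var, due, weight in sequence:
        mu_sum += mean
        var_sum += var
        # P(completion time > due date) under the normal assumption
        z = (due - mu_sum) / math.sqrt(var_sum)
        p_tardy = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        expected += weight * p_tardy
    return expected

# Illustrative data: sequence the jobs by earliest due date (EDD) and evaluate
jobs = [(4.0, 1.0, 10.0, 2.0), (6.0, 4.0, 9.0, 1.0), (3.0, 0.5, 15.0, 3.0)]
edd = sorted(jobs, key=lambda j: j[2])
print(expected_weighted_tardy(edd))
```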

Keywords: number of late jobs, scheduling, single server, stochastic

Procedia PDF Downloads 497
3770 Climate Change Scenario Phenomenon in Malaysia: A Case Study in MADA Area

Authors: Shaidatul Azdawiyah Abdul Talib, Wan Mohd Razi Idris, Liew Ju Neng, Tukimat Lihan, Muhammad Zamir Abdul Rasid

Abstract:

Climate change has received great attention worldwide due to the impact of weather-driven extreme events. Rainfall and temperature are crucial weather components associated with climate change. In Malaysia, increasing temperatures and changes in rainfall distribution patterns lead to drought and flood events involving agricultural areas, especially rice fields. The Muda Agricultural Development Authority (MADA) is the largest rice-growing area among the 10 granary areas in Malaysia and has faced floods and droughts in the past due to the changing climate. Changes in rainfall and temperature patterns affect rice yield; therefore, trend analysis is important to identify changes in temperature and rainfall patterns, as it gives an initial overview for further analysis. Six locations across the MADA area were selected based on the availability of meteorological station (MetMalaysia) data. Historical data (1991 to 2020) collected from MetMalaysia and future climate projections from a multi-model ensemble of CMIP5 climate models (CNRM-CM5, GFDL-CM3, MRI-CGCM3, NorESM1-M and IPSL-CM5A-LR) were analyzed using the Mann-Kendall test to detect time-series trends, together with the standardized precipitation anomaly, rainfall anomaly index, precipitation concentration index and temperature anomaly. Future projection data were analyzed for three periods: early century (2020-2046), middle century (2047-2073) and late century (2074-2099). Results indicate that the MADA area has encountered extremely wet and dry conditions, leading to drought and flood events in the past. The Mann-Kendall (MK) trend analysis detected a significant increasing trend (p < 0.05) in annual rainfall (z = 0.40; s = 15.12) and temperature (z = 0.61; s = 0.04) during the historical period. Similarly, for both the RCP 4.5 and RCP 8.5 scenarios, a significant increasing trend (p < 0.05) was found for rainfall (RCP 4.5: z = 0.15, s = 2.55; RCP 8.5: z = 0.41, s = 8.05) and temperature (RCP 4.5: z = 0.84, s = 0.02; RCP 8.5: z = 0.94, s = 0.05). Under the RCP 4.5 scenario, the average temperature is projected to increase by up to 1.6 °C in the early century, 2.0 °C in the middle century and 2.4 °C in the late century. In contrast, under the RCP 8.5 scenario, the average temperature is projected to increase by up to 1.8 °C in the early century, 3.1 °C in the middle century and 4.3 °C in the late century. Drought is projected to occur in 2038 and 2043 (early century); 2052 and 2069 (middle century); and 2095 and 2097 to 2099 (late century) under the RCP 4.5 scenario. Under the RCP 8.5 scenario, drought is projected to occur in 2021, 2031 and 2034 (early century) and 2069 (middle century); no drought is projected in the late century. This information can be used to analyze the impact of climate change scenarios on rice growth and yield, as well as on other crops found in the MADA area. Additionally, this study should be helpful for researchers and decision-makers in developing applicable adaptation and mitigation strategies to reduce the impact of climate change.
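
For reference, the Mann-Kendall statistic used above can be computed as in the minimal sketch below (without the correction for ties that real analyses of rainfall series usually include); this is a generic textbook formulation, not the authors' exact implementation.

```python
import math

def mann_kendall(series):
    """Minimal Mann-Kendall trend test: returns the S statistic and the
    standardized Z score (no tie correction)."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)   # sign of each pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```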

Keywords: climate projection, drought, flood, rainfall, RCP 4.5, RCP 8.5, temperature

Procedia PDF Downloads 77
3769 A Review on Aluminium Metal Matric Composites

Authors: V. Singh, S. Singh, S. S. Garewal

Abstract:

Metal matrix composites with aluminum as the matrix material have been heralded as the next great development in advanced engineering materials. Aluminum metal matrix composites (AMMCs) refer to a class of lightweight, high-performance material systems. The properties of AMMCs can be tailored to the demands of different industrial applications through suitable combinations of matrix, reinforcement and processing route. AMMCs find application in automotive, aerospace, defense, sports and structural areas. This paper presents an overview of AMMC material systems covering aspects of processing, types and applications, with case studies.

Keywords: aluminum metal matrix composites, applications of aluminum metal matrix composites, lightweight materials, processing of aluminum metal matrix composites

Procedia PDF Downloads 465
3768 Optimization of Processing Parameters of Acrylonitrile–Butadiene–Styrene Sheets Integrated by Taguchi Method

Authors: Fatemeh Sadat Miri, Morteza Ehsani, Seyed Farshid Hosseini

Abstract:

The present research is concerned with the optimization of the extrusion parameters of ABS sheets by the Taguchi experimental design method. Using this design, the effects of three parameters (recycled ABS content, processing temperature and degassing time) on the mechanical properties, hardness, HDT, and color matching of ABS sheets were investigated. According to the experimental data, the highest tensile strength and HDT belong to the sample with 5% recycled ABS, a processing temperature of 230 °C, and a degassing time of 3 hours. Additionally, the lowest MFI and color-matching values belong to this sample as well. The present results are in good agreement with the Taguchi method. Based on the outcomes of the Taguchi design method, degassing time has the greatest effect on the mechanical properties of ABS sheets.
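
A common way to rank factor settings in a Taguchi analysis is the signal-to-noise (S/N) ratio; the sketch below shows the "larger is better" form that would apply to a response such as tensile strength. The formula is the standard Taguchi one, but the replicate readings are hypothetical, not the paper's data.

```python
import math

def sn_larger_is_better(values):
    # Taguchi signal-to-noise ratio (dB) for a "larger is better" response
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / len(values))

# Hypothetical replicate tensile-strength readings (MPa) for one trial run
print(sn_larger_is_better([42.1, 41.8, 42.5]))
```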

Keywords: ABS, process optimization, Taguchi, mechanical properties

Procedia PDF Downloads 73
3767 Semiautomatic Calculation of Ejection Fraction Using Echocardiographic Image Processing

Authors: Diana Pombo, Maria Loaiza, Mauricio Quijano, Alberto Cadena, Juan Pablo Tello

Abstract:

In this paper, we present a semi-automatic tool for calculating the ejection fraction from echocardiographic video signals drawn from a database in DICOM format at Clinica de la Costa, Barranquilla. Each of the steps and methods used to reach the calculation is described, including acquisition and formation of the test samples, processing, and finally the calculation of the parameters needed to obtain the ejection fraction. Two image segmentation methods were compared within a methodological framework that is similar only in the initial stages of processing (filtering and image enhancement) and differs at the end, when the segmentation algorithms (active contour and region growing) are applied. The results were compared with the measurements obtained by two different medical specialists in cardiology, who calculated the ejection fraction of the study samples using the traditional method, which consists of drawing the region of interest directly on the computer using the echocardiography equipment and a simple equation to calculate the desired value. The results showed that if the quality of the video samples is good (i.e., the pre-processing shows evidence of an improvement in contrast), the values provided by the tool are substantially close to those reported by the physicians; the correlation between the physicians also does not vary significantly.
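
The "simple equation" referred to is, in its usual volumetric form, EF = (EDV - ESV) / EDV x 100, using the end-diastolic and end-systolic volumes listed in the keywords; the sketch below shows this final calculation with illustrative volumes (the volume estimation from the segmented contours is not reproduced here).

```python
def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction (%) from end-diastolic (EDV) and end-systolic (ESV)
    ventricular volumes, e.g. estimated from the segmented contours."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Example: EDV = 120 mL, ESV = 50 mL -> EF ≈ 58.3 %
print(ejection_fraction(120.0, 50.0))
```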

Keywords: echocardiography, DICOM, processing, segmentation, EDV, ESV, ejection fraction

Procedia PDF Downloads 426
3766 Accelerating Side Channel Analysis with Distributed and Parallelized Processing

Authors: Kyunghee Oh, Dooho Choi

Abstract:

Even when there is no theoretical weakness in a cryptographic algorithm, side channel analysis can recover secret data from the physical implementation of a cryptosystem. The analysis is based on extra information such as timing, power consumption, electromagnetic leaks or even sound, which can be exploited to break the system. Differential Power Analysis (DPA) is one of the most popular of these analyses; it computes statistical correlations between secret-key hypotheses and power consumption. It usually requires processing a huge amount of data and takes a long time; for some devices with countermeasures, it may take several weeks. We suggest and evaluate methods to shorten the time needed to analyze cryptosystems. Our methods include distributed computing and parallelized processing.
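
The correlation step that dominates the run time can be parallelized over key-byte guesses, as in the sketch below, which uses a generic Pearson-correlation attack with a placeholder leakage model. The model function and the multiprocessing layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from multiprocessing import Pool

def example_model(plaintexts, guess):
    # Stand-in leakage model: Hamming weight of plaintext byte XOR key guess
    # (a real attack would typically target an S-box output instead).
    return np.array([bin(p ^ guess).count("1") for p in plaintexts], dtype=float)

def correlate_guess(args):
    traces, hypo = args
    # Pearson correlation of the hypothesis vector with every time sample
    h = (hypo - hypo.mean()) / hypo.std()
    t = (traces - traces.mean(axis=0)) / traces.std(axis=0)
    return np.abs(h @ t) / len(hypo)

def dpa_parallel(traces, plaintexts, model=example_model, n_workers=4):
    # traces: (n_traces, n_samples) array of measured power traces
    jobs = [(traces, model(plaintexts, guess)) for guess in range(256)]
    with Pool(n_workers) as pool:
        corr = pool.map(correlate_guess, jobs)          # guesses handled in parallel
    return int(np.argmax([c.max() for c in corr]))      # best-scoring key-byte guess
```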

Keywords: DPA, distributed computing, parallelized processing, side channel analysis

Procedia PDF Downloads 428
3765 Influence of Processing Parameters on the Reliability of Sieving as a Particle Size Distribution Measurements

Authors: Eseldin Keleb

Abstract:

In the pharmaceutical industry, particle size distribution is an important parameter for the characterization of pharmaceutical powders. Powder flowability, reactivity and compatibility, which have a decisive impact on the final product, are determined by particle size and size distribution. Therefore, the aim of this study was to evaluate the influence of processing parameters on particle size distribution measurements. Different size fractions of α-lactose monohydrate with 5% polyvinylpyrrolidone were prepared by wet granulation and used for the preparation of samples. The influence of sieve load (50, 100, 150, 200, 250, 300, and 350 g), processing time (5, 10, and 15 min), sample size ratios (high percentages of small and large particles), type of disturbance (vibration and shaking) and process reproducibility were investigated. The results showed that a sieve load of 50 g produced the best separation; a further increase in sample weight resulted in incomplete separation even after extending the processing time to 15 min. Sieving using vibration was faster and more efficient than shaking. Between-day reproducibility showed that particle size distribution measurements are reproducible. However, for samples containing 70% fines or 70% large particles, processed at the optimized parameters, incomplete separation was always observed. These results indicate that sieving reliability is highly influenced by the particle size distribution of the sample, and care must be taken with samples whose particle size distribution is skewed.

Keywords: sieving, reliability, particle size distribution, processing parameters

Procedia PDF Downloads 613
3764 In vitro Method to Evaluate the Effect of Steam-Flaking on the Quality of Common Cereal Grains

Authors: Wanbao Chen, Qianqian Yao, Zhenming Zhou

Abstract:

Whole grains with an intact pericarp are largely resistant to digestion by ruminants because entire kernels are not conducive to bacterial attachment, but processing makes the starch more accessible to microbes and increases the rate and extent of starch degradation in the rumen. To assess the feasibility of applying steam-flaking as a processing technique for grains fed to ruminants, cereal grains (maize, wheat, barley and sorghum) were processed by steam-flaking (steam temperature 105 °C, heating time 45 min), and chemical analysis, in vitro gas production, volatile fatty acid concentrations, and energetic values were used to evaluate its effects. In vitro cultivation was conducted for 48 h with rumen fluid collected from steers fed a total mixed ration consisting of 40% hay and 60% concentrates. The results showed that steam-flaking had a significant effect on the contents of neutral detergent fiber and acid detergent fiber (P < 0.01). The degree of starch gelatinization in all grains was also greatly improved by steam-flaking, as the process disintegrates the crystal structure of cereal starch, which may subsequently facilitate the absorption of moisture and swelling. Theoretical maximum gas production after steam-flaking showed no great difference. However, compared with intact grains, total gas production at 48 h and the rate of gas production were significantly (P < 0.01) increased in all types of grain. Furthermore, steam-flaking had no effect on total volatile fatty acid concentration, but a decrease in the ratio between acetate and propionate was observed in the in vitro fermentation. The present study also found that steam-flaking increased (P < 0.05) the organic matter digestibility and energy concentration of the grains. The collective findings suggest that steam-flaking of grains could improve their rumen fermentation and energy utilization by ruminants. In conclusion, the use of steam-flaking would be a practical way to improve the quality of common cereal grains.

Keywords: cereal grains, gas production, in vitro rumen fermentation, steam-flaking processing

Procedia PDF Downloads 270
3763 Agile Real-Time Field Programmable Gate Array-Based Image Processing System for Drone Imagery in Digital Agriculture

Authors: Sabiha Shahid Antora, Young Ki Chang

Abstract:

Along with various farm management technologies, imagery is an important tool that facilitates crop assessment, monitoring, and management. As a consequence, drone imaging technology is playing a vital role in capturing the state of the entire field for yield mapping, crop scouting, weed detection, and so on. Although it is essential to inspect cultivable lands in real time to make rapid decisions regarding variable field inputs to combat stresses and diseases, drone imagery is still evolving in this area. Cost margins and the post-processing complexity of the image stream are the main challenges of this imaging technology. Therefore, the proposed project involves a cost-effective field programmable gate array (FPGA) based image processing device that would process the image stream in real time and provide the processed output to support on-the-spot decisions in the crop field. As a result, the real-time FPGA-based image processing system would reduce operating costs while minimizing intermediate steps in delivering scalable field decisions.

Keywords: real-time, FPGA, drone imagery, image processing, crop monitoring

Procedia PDF Downloads 113
3762 Effects of Different Thermal Processing Routes and Their Parameters on the Formation of Voids in PA6 Bonded Aluminum Joints

Authors: Muhammad Irfan, Guillermo Requena, Jan Haubrich

Abstract:

Adhesively bonded aluminum joints are common in the automotive and aircraft industries and are one of the enablers of lightweight construction to minimize carbon emissions during transportation for a sustainable future. This study is focused on the effects of two thermal processing routes, i.e., direct and induction heating, and their parameters on void formation in PA6 bonded aluminum EN-AW6082 joints. The joints were characterized microanalytically as well as by lap shear experiments. The aging resistance of the joints was studied by accelerated aging tests in 80 °C hot water. It was found that processing single lap joints by direct heating in a convection oven causes the formation of a large number of voids in the bond line. The formation of voids in the convection oven was due to the longer processing times and was independent of any surface pretreatment of the metal as well as of the processing temperature. However, when processing at low temperatures, a large number of small voids were observed under the optical microscope, whereas at higher temperatures the voids were larger in size but fewer in number. An induction heating process was developed, which not only successfully reduced or eliminated the voids in PA6 bonded joints but also significantly reduced the processing times for joining. Consistent with the trend in direct heating, longer processing times and higher temperatures in induction heating also led to increased void formation in the bond line. Subsequent single lap shear tests revealed that the increasing void content led to a 21% reduction in lap shear strength (i.e., from ~47 MPa for induction heating to ~37 MPa for direct heating). There was also a 17% reduction in lap shear strength when the consolidation temperature was raised from 220 °C to 300 °C during induction heating. However, below a certain threshold of void content, there was no observable effect on the lap shear strength or on the hydrothermal aging resistance of the joints consolidated by the induction heating process.

Keywords: adhesive, aluminium, convection oven, induction heating, mechanical properties, nylon6 (PA6), pretreatment, void

Procedia PDF Downloads 122
3761 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing

Authors: Abdullah Bal, Sevdenur Bal

Abstract:

This paper proposes a new type of hardware application for the training of cellular neural networks (CNNs) using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage than during testing. Since optoelectronic hardware offers the possibility of parallel, high-speed processing for 2D data, the CNN training algorithm can be realized using Fourier optics techniques. JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each iteration of training, the JTC inherently carries most of the computational burden, and the rest of the mathematical computation is realized digitally. The bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. The overlapping properties of the JTC are then utilized for the summation of two cross-correlations, which reduces the computation required in the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques in the same optical system.
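
The JTC principle itself can be simulated digitally in a few lines, as sketched below: the two inputs are placed side by side, a first Fourier transform followed by square-law (CCD) detection gives the joint power spectrum, and a second transform yields the correlation plane. This is a generic textbook simulation of JTC, not the authors' optoelectronic training architecture.

```python
import numpy as np

def joint_transform_correlation(ref, target):
    # Place the reference and target images side by side in one input plane
    joint = np.concatenate([ref, target], axis=1)
    # First "optical" Fourier transform and square-law detection on the CCD
    jps = np.abs(np.fft.fft2(joint)) ** 2       # joint power spectrum
    # Second Fourier transform yields the correlation plane; the
    # cross-correlation terms appear away from the centre peak.
    return np.abs(np.fft.fft2(jps))
```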

Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware

Procedia PDF Downloads 506
3760 Design and Evaluation of a Fully-Automated Fluidized Bed Dryer for Complete Drying of Paddy

Authors: R. J. Pontawe, R. C. Martinez, N. T. Asuncion, R. V. Villacorte

Abstract:

Drying of high-moisture paddy remains a major problem in the Philippines, especially during inclement weather conditions. To alleviate the problem, mechanical dryers such as flat-bed and recirculating batch-type dryers are used. However, drying to a 14% (wet basis) final moisture content with these dryers takes 10-12 hours and is tedious, which is not ideal for handling high-moisture paddy. A fully-automated pilot-scale fluidized bed drying system with a capacity of 500 kilograms per hour was evaluated using high-moisture paddy. The developed fluidized bed dryer was evaluated using four drying temperatures and two variations in fluidization time at a constant airflow, static pressure and tempering period. Complete drying of paddy with ≥28% (w.b.) initial MC was attained after two passes of fluidized-bed drying with 2 minutes of exposure to a 70 °C drying temperature and a 4.9 m/s superficial air velocity, followed by a 60 min ambient-air tempering period (30 min without ventilation and 30 min with air ventilation), for a total drying time of 2.07 h. Around 82% of the normal mechanical drying time was saved at the 70 °C drying temperature. The drying cost was calculated to be P0.63 per kilogram of wet paddy. Specific heat energy consumption was only 2.84 MJ/kg of water removed. The head rice yield recovery of the dried paddy passed the Philippine Agricultural Engineering Standards. Sensory evaluation showed that the color and taste of the samples dried in the fluidized bed dryer were comparable to air-dried paddy. The optimum drying parameters for the fluidized bed dryer are a 70 °C drying temperature, 2 min fluidization time, 4.9 m/s superficial air velocity, 10.16 cm grain depth and 60 min ambient-air tempering period.

Keywords: drying, fluidized bed dryer, head rice yield, paddy

Procedia PDF Downloads 325
3759 Development of Standard Thai Appetizer in Rattanakosin Era‘s Standard: Case Study of Thai Steamed Dumpling

Authors: Nunyong Fuengkajornfung, Pattama Hirunyophat, Tidarat Sanphom

Abstract:

The objectives of this research were to establish a standard recipe for Thai steamed dumpling, to study the ratio of modified starch in Thai steamed dumpling, and to analyze the chemical composition and Escherichia coli content of Thai steamed dumpling. The experiments were designed in two stages: first, studying the standard recipe for Thai steamed dumpling, and second, studying the ratio of rice flour to modified starch at three levels, 90:10, 73:30, and 50:50. Sensory evaluation used the 9-point hedonic scale method for attributes such as color, smell, taste, texture and overall liking. The experiment followed a Randomized Complete Block Design (RCBD), and the statistics used in the data analyses were means, standard deviations, one-way ANOVA, Duncan's New Multiple Range Test and regression equations, at a statistical significance level of .05. For the standard recipe, three recipes were compared by sensory evaluation of color, odor, taste, spiciness, texture and overall acceptance, and the second recipe was found to be the most suitable for development. Of the three rice flour to modified starch ratios (90:10, 73:30, and 50:50), the 50:50 condition received the best scores (like moderately to like very much on the 9-point hedonic scale). Chemical analysis showed moisture 58.63%, fat 5.45%, protein 4.35%, carbohydrate 30.45%, and ash 1.12%. Escherichia coli was not detected in laboratory testing.

Keywords: Thai snack in Rattanakosin era, Thai steamed dumpling, modified starch, recipe standard

Procedia PDF Downloads 324