Search results for: squid processing by-products
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3650

740 Anaerobic Co-Digestion of Sewage Sludge and Bagasse for Biogas Recovery

Authors: Raouf Ahmed Mohamed Hassan

Abstract:

In Egypt, the excess sewage sludge from wastewater treatment plants (WWTPs) is rapidly increasing due to the continuous growth of population, urban planning and industrial development. In addition, cane bagasse constitutes an important component of urban solid waste (USW), especially in the south of Egypt, and is difficult to degrade under normal composting conditions. These wastes need to be environmentally managed to reduce the negative impacts of their application or disposal. In terms of biogas recovery, the anaerobic digestion of sewage sludge or bagasse separately is inefficient, owing to an imbalance of nutrients and minerals. The carbon-to-nitrogen ratio (C/N) also plays an important role: sewage sludge has a ratio that varies from 6 to 16 and cane bagasse has a ratio of around 150, whereas the suggested optimum C/N ratio for anaerobic digestion is in the range of 20 to 30. Anaerobic co-digestion is presented as a successful methodology that combines several biodegradable organic substrates, decreasing the amount of output waste through biodegradation, sharing processing facilities and reducing operating costs, while enabling the recovery of biogas. This paper presents a study of the co-digestion of sewage sludge from wastewater treatment plants, as a type of organic waste, and bagasse, as an agricultural waste. Laboratory-scale mesophilic and thermophilic digesters were operated with varied hydraulic retention times. Different percentages of sludge and bagasse were investigated based on the total solids (TS). Before digestion, the bagasse was subjected to grinding pretreatment and soaked in distilled water (water pretreatment). The effect of operating parameters (mixing, temperature) was investigated in order to optimize biogas production. The yield and the composition of biogas from the different experiments were evaluated and the cumulative curves were estimated. The tests showed that there is good potential for using the co-digestion of wastewater sludge and bagasse for biogas production.
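
To make the C/N reasoning above concrete, the following minimal Python sketch estimates what share of bagasse (on a total-solids basis) would bring a sludge/bagasse blend into the suggested 20-30 C/N window, using a simple TS-weighted average of the two C/N ratios as an approximation; the input values are the illustrative figures quoted in the abstract, not measured data.

```python
# Rough illustration only: assumes the blend C/N can be approximated by a
# TS-weighted average of the substrate C/N ratios (a simplification).

def bagasse_fraction(cn_sludge, cn_bagasse, cn_target):
    """Fraction of bagasse (total-solids basis) so that the TS-weighted mean
    C/N of the blend equals cn_target."""
    return (cn_target - cn_sludge) / (cn_bagasse - cn_sludge)

if __name__ == "__main__":
    # C/N figures quoted in the abstract: sludge ~6-16 (use 11), bagasse ~150.
    for target in (20, 25, 30):
        x = bagasse_fraction(cn_sludge=11.0, cn_bagasse=150.0, cn_target=target)
        print(f"target C/N {target}: ~{x:.1%} bagasse, {1 - x:.1%} sludge (TS basis)")
```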

Keywords: co-digestion, sewage sludge, bagasse, mixing, mesophilic, thermophilic

Procedia PDF Downloads 474
739 A Fast Optimizer for Large-scale Fulfillment Planning based on Genetic Algorithm

Authors: Choonoh Lee, Seyeon Park, Dongyun Kang, Jaehyeong Choi, Soojee Kim, Younggeun Kim

Abstract:

Market Kurly is the first South Korean online grocery retailer that guarantees same-day, overnight shipping. More than 1.6 million customers place an average of 4.7 million orders and add 3 to 14 products to a cart per month. The company has sold almost 30,000 different products in the past 6 months, including food items, cosmetics, kitchenware, toys for kids/pets, and even flowers. The company is operating and expanding multiple dry, cold, and frozen fulfillment centers in order to store and ship these products. Due to the scale and complexity of the fulfillment, pick-pack-ship processes are planned and operated in batches, and thus the planning that decides how customers’ orders are batched is a critical factor in overall productivity. This paper introduces a metaheuristic optimization method that reduces the complexity of batch processing in a fulfillment center. The method is an iterative genetic algorithm with heuristic creation and evolution strategies; it aims to group similar orders into pick-pack-ship batches to minimize the total number of distinct products. With a well-designed approach to creating initial genes, the method produces streamlined plans, up to 13.5% less complex than the actual plans carried out in the company’s fulfillment centers in the previous months. Furthermore, our digital-twin simulations show that the optimized plans can reduce operation time for packing, the most complex and time-consuming task in the process, by 3%. The optimization method implements a multithreading design on the Spring framework to support the company’s warehouse management systems in near real time, finding a solution for 4,000 orders within 5 to 7 seconds on an AWS c5.2xlarge instance.
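
As a rough illustration of the batching idea described above, the sketch below implements a permutation-encoded genetic algorithm that groups toy orders into fixed-size pick-pack-ship batches and minimizes the total number of distinct SKUs summed over batches; the data, batch size, and GA operators are assumptions for the example, not the company's actual heuristics or Spring-based implementation.

```python
import random

random.seed(0)
N_ORDERS, BATCH_SIZE, N_SKUS = 60, 10, 40
# Toy orders: each order is a set of 3-14 SKUs, mirroring the cart sizes above.
ORDERS = [set(random.sample(range(N_SKUS), random.randint(3, 14))) for _ in range(N_ORDERS)]

def fitness(perm):
    # Total distinct SKUs summed over batches (lower = simpler pick-pack-ship plan).
    total = 0
    for i in range(0, N_ORDERS, BATCH_SIZE):
        batch = perm[i:i + BATCH_SIZE]
        total += len(set().union(*(ORDERS[o] for o in batch)))
    return total

def crossover(p1, p2):
    # Order crossover (OX): keep a slice from p1, fill the rest in p2's order.
    a, b = sorted(random.sample(range(N_ORDERS), 2))
    hole = set(p1[a:b])
    rest = [o for o in p2 if o not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def mutate(perm, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(N_ORDERS), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

population = [random.sample(range(N_ORDERS), N_ORDERS) for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness)
    elite = population[:10]
    children = [mutate(crossover(*random.sample(elite, 2))) for _ in range(40)]
    population = elite + children
print("best total distinct SKUs over batches:", fitness(min(population, key=fitness)))
```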

Keywords: fulfillment planning, genetic algorithm, online grocery retail, optimization

Procedia PDF Downloads 55
738 Electroencephalogram during Natural Reading: Theta and Alpha Rhythms as Analytical Tools for Assessing a Reader’s Cognitive State

Authors: D. Zhigulskaya, V. Anisimov, A. Pikunov, K. Babanova, S. Zuev, A. Latyshkova, K. Chernozatonskiy, A. Revazov

Abstract:

Electrophysiology of information processing in reading is a popular research topic. Natural reading, however, has been relatively poorly studied, despite having broad potential applications for learning and education. In the current study, we explore the relationship between text categories and spontaneous electroencephalogram (EEG) while reading. Thirty healthy volunteers (mean age 26.68 ± 1.84) participated in this study. 15 Russian-language texts were used as stimuli. The first text was used for practice and was excluded from the final analysis. The remaining 14 were opposite pairs of texts in one of 7 categories, the most important of which were: interesting/boring, fiction/non-fiction, free reading/reading with an instruction, reading a text/reading a pseudo text (consisting of strings of letters that formed meaningless words). Participants had to read the texts sequentially on an Apple iPad Pro. EEG was recorded from 12 electrodes simultaneously with eye movement data via ARKit technology by Apple. EEG spectral amplitude was analyzed in Fz for the theta band (4-8 Hz) and in C3, C4, P3, and P4 for the alpha band (8-14 Hz) using the Friedman test. We found that reading an interesting text was accompanied by an increase in theta spectral amplitude in Fz compared to reading a boring text (3.87 µV ± 0.12 and 3.67 µV ± 0.11, respectively). When instructions are given for reading, we see less alpha activity than during free reading of the same text (3.34 µV ± 0.20 and 3.73 µV ± 0.28, respectively, for C4 as the most representative channel). The non-fiction text elicited less activity in the alpha band (C4: 3.60 µV ± 0.25) than the fiction text (C4: 3.66 µV ± 0.26). A significant difference in alpha spectral amplitude was also observed between the regular text (C4: 3.64 µV ± 0.29) and the pseudo text (C4: 3.38 µV ± 0.22). These results suggest that some brain activity we see on EEG is sensitive to particular features of the text. We propose that changes in theta and alpha bands during reading may serve as electrophysiological tools for assessing the reader’s cognitive state as well as his or her attitude to the text and the perceived information. These physiological markers have prospective practical value for developing technological solutions and biofeedback systems for reading in particular and for education in general.
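
The sketch below illustrates, on simulated data, the kind of analysis described above: computing a band-limited spectral amplitude for a frontal channel in the theta range and comparing conditions with the Friedman test. The sampling rate, record length, band edges, and the way "spectral amplitude" is derived from the Welch PSD are assumptions for the example, not the study's processing pipeline.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import friedmanchisquare

FS = 250  # Hz (assumed sampling rate)

def band_amplitude(signal, fs, fmin, fmax):
    # Amplitude-like summary of power within a frequency band.
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return np.sqrt(psd[mask].mean())

rng = np.random.default_rng(0)
n_subjects = 30
# One simulated Fz trace (60 s) per subject per text condition.
conditions = {c: [rng.standard_normal(FS * 60) for _ in range(n_subjects)]
              for c in ("interesting", "boring", "pseudo_text")}
theta = {c: [band_amplitude(x, FS, 4, 8) for x in traces]
         for c, traces in conditions.items()}
stat, p = friedmanchisquare(theta["interesting"], theta["boring"], theta["pseudo_text"])
print(f"Friedman chi2={stat:.2f}, p={p:.3f}")
```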

Keywords: EEG, natural reading, reader's cognitive state, theta-rhythm, alpha-rhythm

Procedia PDF Downloads 58
737 Intrusion Detection in SCADA Systems

Authors: Leandros A. Maglaras, Jianmin Jiang

Abstract:

The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The European Framework 7 (FP7) funded research project CockpitCI introduces intelligent intrusion detection, analysis and protection techniques for Critical Infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected, and therefore vulnerable, Information and Communication Technology (ICT), whilst the control equipment, legacy software/hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis and reaction tools that provide intelligence to field equipment. This will allow the field equipment to make local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. The One-Class Support Vector Machine (OCSVM) is an intrusion detection mechanism that does not need any labeled data for training or any prior information about the kind of anomaly to expect during detection. This feature makes it ideal for processing SCADA environment data and automating SCADA performance monitoring. The OCSVM module developed is trained offline on network traces and detects anomalies in the system in real time. The module is part of an IDS (intrusion detection system) developed under the CockpitCI project and communicates with the other parts of the system through the exchange of IDMEF messages that carry information about the source of the incident, the time and a classification of the alarm.
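
A minimal sketch of the detection idea, not the CockpitCI module itself: train a one-class SVM offline on features of normal traffic only, then flag windows that deviate from the learned boundary. The feature set and the synthetic data below are assumptions for the example.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Toy features per traffic window: [packets/s, mean packet size, distinct dest ports]
normal = rng.normal(loc=[100, 300, 3], scale=[10, 20, 1], size=(500, 3))
attack = rng.normal(loc=[400, 90, 40], scale=[50, 15, 5], size=(20, 3))

# Offline training on normal traffic only (no labels, no attack examples needed).
scaler = StandardScaler().fit(normal)
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(normal))

# Online-style scoring of new windows: +1 = normal, -1 = flagged as anomalous.
pred = model.predict(scaler.transform(np.vstack([normal[:5], attack[:5]])))
print(pred)
```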

Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection

Procedia PDF Downloads 513
736 Phelipanche ramosa (L.) Pomel Control in Field Tomato Crop

Authors: G. Disciglio, F. Lops, A. Carlucci, G. Gatta, A. Tarantino, L. Frabboni, F. Carriero, F. Cibelli, M. L. Raimondo, E. Tarantino

Abstract:

Tomato is an important crop whose cultivation in the Mediterranean basin is severely constrained by the phytoparasitic weed Phelipanche ramosa. The semiarid regions of the world are considered the main centre of this parasitic weed, where heavy infestation is due to its ability to produce high numbers of seeds (up to 500,000 per plant) that remain viable for extended periods (more than 19 years). In this paper, 12 parasitic weed control treatments, including chemical, agronomic, biological and biotechnological methods, were carried out. In 2014, a trial was performed at Foggia (southern Italy) on processing tomato (cv Docet) grown in a field infested by Phelipanche ramosa. Tomato seedlings were transplanted on May 5, 2014, on a clay-loam soil (USDA) fertilized with 100 kg ha-1 of N, 60 kg ha-1 of P2O5 and 20 kg ha-1 of S. Afterwards, top dressing was performed with 70 kg ha-1 of N. A randomized block design with 3 replicates was adopted. During the growing cycle of the tomato, at 56, 78 and 92 days after transplantation, the number of parasitic shoots emerged in each plot was recorded. At harvest, on August 18, the major quantity-quality yield parameters were determined (marketable yield, mean weight, dry matter, pH, soluble solids and colour of fruits). All data were subjected to analysis of variance (ANOVA) using the JMP software (SAS Institute Inc., Cary, NC, USA), and Tukey's test was used for the comparison of means. None of the treatments studied provided complete control of Phelipanche ramosa. However, among the 12 tested methods, Fusarium, glyphosate, the Radicon biostimulant and the Red Setter tomato cv. (an improved genotype obtained by TILLING technology) proved to mitigate the virulence of the Phelipanche ramosa attacks. It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the parasite's "seed bank" in the soil.
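
As a sketch of the statistical workflow named above (one-way ANOVA followed by Tukey's test), the snippet below reproduces the analysis pattern with Python libraries instead of JMP; the treatment names and yield figures are simulated placeholders, not the trial data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
yields = {  # marketable yield (t/ha), simulated, 3 replicates per treatment
    "untreated": rng.normal(55, 3, 3),
    "glyphosate": rng.normal(70, 3, 3),
    "fusarium": rng.normal(66, 3, 3),
}

# One-way ANOVA across treatments
F, p = f_oneway(*yields.values())
print(f"ANOVA: F={F:.2f}, p={p:.4f}")

# Tukey's HSD pairwise comparison of treatment means
values = np.concatenate(list(yields.values()))
groups = np.repeat(list(yields.keys()), [len(v) for v in yields.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```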

Keywords: control methods, Phelipanche ramosa, tomato crop, Mediterranean basin

Procedia PDF Downloads 541
735 Sustainable Membranes Based on 2D Materials for H₂ Separation and Purification

Authors: Juan A. G. Carrio, Prasad Talluri, Sergio G. Echeverrigaray, Antonio H. Castro Neto

Abstract:

Hydrogen, as a fuel and environmentally friendly energy carrier, is part of the transition towards low-carbon systems. The extensive deployment of hydrogen production, purification and transport infrastructures still represents a significant challenge. Independent of the production process, hydrogen is generally mixed with light hydrocarbons and other undesirable gases that need to be removed to obtain H₂ with the required purity for end applications. In this context, membranes are one of the simplest, most attractive, sustainable, and best-performing technologies enabling hydrogen separation and purification. They demonstrate high separation efficiencies and low energy consumption levels in operation, which is a significant leap compared to current energy-intensive alternatives. The unique characteristics of 2D laminates have given rise to a diversity of research on their potential applications in separation systems. Specifically, it is already known in the scientific literature that graphene oxide-based membranes present the highest reported selectivity of H₂ over other gases. This work explores the potential of a new type of 2D materials-based membrane in separating H₂ from CO₂ and CH₄. We have developed nanostructured composites based on 2D materials that have been applied in the fabrication of membranes to maximise H₂ selectivity and permeability for different gas mixtures by adjusting the membranes' characteristics. Our proprietary technology does not depend on specific porous substrates, which allows its integration in diverse separation modules with different geometries and configurations, addressing the technical performance required for industrial applications and economic viability. The tuning and precise control of the processing parameters allowed us to keep the thicknesses of the membranes below 100 nanometres to provide high permeabilities. Our results for the selectivity of the new nanostructured 2D materials-based membranes are in the range of the performance reported in the available literature on 2D materials (such as graphene oxide) applied to hydrogen purification, which validates their use as one of the most promising next-generation hydrogen separation and purification solutions.

Keywords: membranes, 2D materials, hydrogen purification, nanocomposites

Procedia PDF Downloads 91
734 Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using the Regression Analysis

Authors: H. Hajimolahoseini, J. Hashemi, D. Redfearn

Abstract:

Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains precious information that has not been explored to its full capacity. Novel processing techniques allow looking at these recordings from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude measured through Shannon entropy could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, the before- and after-Ibutilide IEGMs that were recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the probability distribution function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain, the PDF of amplitudes was fitted to a Gaussian distribution, while in the frequency domain, it was fitted to a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs had significantly narrower, short-tailed PDFs both in the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide exhibit significantly different properties, both in the time and frequency domains. Hence, by fitting the PDF of IEGMs in the time domain to a Gaussian distribution or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of the PDF (e.g., standard deviation), while this is difficult through the waveform of the IEGMs itself.
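
The distribution-fitting step described above can be illustrated in a few lines of Python: fit a Gaussian to time-domain amplitudes and a Rayleigh distribution to frequency-domain magnitudes, then track the fitted parameters. The signal below is simulated rather than an actual IEGM, and the sampling rate is an assumption.

```python
import numpy as np
from scipy.stats import norm, rayleigh

rng = np.random.default_rng(0)
fs = 1000  # Hz (assumed sampling rate)
iegm = rng.normal(0, 1, fs * 10)  # 10 s surrogate electrogram

# Time domain: Gaussian fit to the sample amplitudes.
mu, sigma = norm.fit(iegm)

# Frequency domain: Rayleigh fit to the FFT magnitudes.
mags = np.abs(np.fft.rfft(iegm))
loc, scale = rayleigh.fit(mags, floc=0)

print(f"Gaussian sigma (time domain): {sigma:.3f}")
print(f"Rayleigh scale (frequency domain): {scale:.3f}")
```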

Keywords: atrial fibrillation, catheter ablation, probability distribution function, time-frequency characteristics

Procedia PDF Downloads 141
733 Management Methods of Food Losses in Polish Processing Plants

Authors: Beata Bilska, Marzena Tomaszewska, Danuta Kolozyn-Krajewska

Abstract:

Food loss and food waste are a global problem of the modern economy. The research undertaken aimed to analyze how food is handled in catering establishments with respect to food waste and to identify the main ways of managing foods/dishes not served to consumers. A survey study was conducted from January to June 2019. The selection of catering establishments participating in the study was deliberate. The study included establishments located only in the Mazowieckie Voivodeship (Poland). Forty-two completed questionnaires were collected. In some questions, answers were based on a 5-point scale of 1 to 5 (from "always"/"every day" to "never"). The survey also included closed questions with a suggested set of answer options. The respondents stated that in their workplaces, dishes served cold and hot ready meals are discarded every day or almost every day (23.7% and 20.5% of answers, respectively). The procedure most frequently used for dealing with dishes not served to consumers on a given day is their storage at a cool temperature until the following day. In the research, 1/5 of respondents admitted that consumers "always" or "usually" leave uneaten meals on their plates, and over 41% "sometimes" do so. It was additionally found that food not used in the foodservice sector is most often thrown into a public container for rubbish. Most often thrown into the public container (with communal trash) were expired products (80.0%), plate waste (80.0%) and inedible products (fruit and vegetable peels, eggshells) (77.5%). Used deep-frying oil was the item most frequently thrown into the container dedicated only to food waste (62.5%). 10% of respondents indicated that inedible products in their workplaces are allocated to animal feed. Food waste in the foodservice sector remains an insufficiently studied issue, as owners of these establishments are often unwilling to disclose data on the subject. Incorrect ways of managing food not served to consumers were observed. There is a need to develop educational activities for employees and management in the context of food waste management in the foodservice sector.

Keywords: food waste, inedible products, plate waste, used deep-frying oil

Procedia PDF Downloads 100
732 A Location-Based Search Approach According to Users’ Application Scenario

Authors: Shih-Ting Yang, Chih-Yun Lin, Ming-Yu Li, Jhong-Ting Syue, Wei-Ming Huang

Abstract:

The global positioning system (GPS) has become increasingly precise in recent years, and location-based services (LBS) have developed rapidly. Take the example of finding a parking lot (as in parking apps). A location-based service can offer immediate information about a nearby parking lot, including information about remaining parking spaces. However, it cannot provide search results that match the situational requirements of users. For that reason, this paper develops a "Location-based Search Approach according to Users' Application Scenario" that combines location-based search with demand determination to help users obtain information consistent with their requirements. The approach consists of one mechanism and three kernel modules. First, in the Information Pre-processing Mechanism (IPM), the cosine theorem is used to categorize the locations of users. Then, in the Information Category Evaluation Module (ICEM), kNN (k-Nearest Neighbor) is employed to classify the browsing records of users. After that, in the Information Volume Level Determination Module (IVLDM), a comparison is made between the number of user clicks on information at different locations and the average number of clicks on information at a specific location, so as to evaluate the urgency of demand; then, a two-dimensional space is used to estimate the application scenarios of users. Finally, in the Location-based Search Module (LBSM), all search results are compared against the average number of characters of the search results, the results are categorized with the Manhattan distance, and results are selected according to the application scenario of the user. Additionally, a Web-based system is developed according to the methodology to demonstrate its practical application. The application scenario-based estimation and the location-based search are used to evaluate the type and abundance of the information expected by the public at a specific location, so that information demanders can obtain information consistent with their application situations at that location.
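
Two of the building blocks named above, kNN classification of browsing records and Manhattan-distance comparison of search results, can be sketched in a few lines; the feature encodings, labels, and numbers below are invented purely for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from scipy.spatial.distance import cityblock

# Browsing records encoded as [clicks on parking info, clicks on food info]
X = np.array([[9, 1], [8, 2], [1, 9], [0, 8], [5, 5]])
y = np.array(["parking", "parking", "food", "food", "mixed"])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("predicted category:", knn.predict([[7, 3]])[0])

# Manhattan (L1) distance between two search results described by simple
# features: [result length in characters, click count]
print("L1 distance:", cityblock([120, 40], [90, 55]))
```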

Keywords: data mining, knowledge management, location-based service, user application scenario

Procedia PDF Downloads 87
731 Portable System for the Acquisition and Processing of Electrocardiographic Signals to Obtain Different Metrics of Heart Rate Variability

Authors: Daniel F. Bohorquez, Luis M. Agudelo, Henry H. León

Abstract:

Heart rate variability (HRV) is defined as the temporal variation between heartbeats or RR intervals (the distance between R waves in an electrocardiographic signal). This distance is currently a recognized biomarker. With the analysis of this distance, it is possible to assess the sympathetic and parasympathetic nervous systems, which are responsible for the regulation of the cardiac muscle. The analysis allows health specialists and researchers to diagnose various pathologies based on this variation. For the acquisition and analysis of HRV taken from a cardiac electrical signal, electronic equipment and analysis software that work independently are currently used. This complicates and delays the process of interpretation and diagnosis. With this delay, the health condition of patients can be put at greater risk, which can lead to untimely treatment. This document presents a single portable device capable of acquiring electrocardiographic signals and calculating a total of 19 HRV metrics. This reduces the time required, resulting in timelier intervention. The device has an electrocardiographic signal acquisition card attached to a microcontroller capable of transmitting the cardiac signal wirelessly to a mobile device. In addition, a mobile application was designed to analyze the cardiac waveform. The device calculates the RR intervals and the different metrics. The application allows a user to visualize the cardiac signal and the 19 metrics in real time. The information is exported to a cloud database for remote analysis. The study was performed under controlled conditions in the simulated hospital of the Universidad de la Sabana, Colombia. A total of 60 signals were acquired and analyzed. The device was compared against two reference systems. The results show a strong level of correlation (r > 0.95, p < 0.05) between the 19 metrics compared. Therefore, the use of the portable system, evaluated in clinical scenarios controlled by medical specialists and researchers, is recommended for the evaluation of the condition of the cardiac system.
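
The abstract does not enumerate the 19 metrics, but the general computation can be illustrated with a few of the standard time-domain HRV measures (SDNN, RMSSD, pNN50) derived from an RR-interval series; the RR data below are simulated.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """A few common time-domain HRV metrics from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_RR_ms": rr.mean(),
        "SDNN_ms": rr.std(ddof=1),                       # overall variability
        "RMSSD_ms": np.sqrt(np.mean(diffs ** 2)),        # short-term variability
        "pNN50_%": 100.0 * np.mean(np.abs(diffs) > 50),  # large beat-to-beat changes
    }

# Simulated RR series (ms) around a 75 bpm rhythm
rng = np.random.default_rng(3)
rr_series = 800 + rng.normal(0, 40, 300)
print(hrv_time_domain(rr_series))
```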

Keywords: biological signal analysis, heart rate variability (HRV), HRV metrics, mobile app, portable device

Procedia PDF Downloads 150
730 The Impact of Gender Difference on Crop Productivity: The Case of Decha Woreda, Ethiopia

Authors: Getinet Gezahegn Gebre

Abstract:

The study examined the impact of gender differences on crop productivity in Decha woreda of the southwest Kafa zone, located 140 km from Jimma town and 460 km southwest of Addis Ababa, between Bonga town and the Omo River. The specific objectives were to assess the extent to which the agricultural production system is gender oriented, to examine access to and control over productive resources, and to estimate men's and women's productivity in agriculture. Cross-sectional data collected from a total of 140 respondents were used in this study, of whom 65 were female-headed and 75 were male-headed households. The data were analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics such as frequency, mean, percentage, t-test, and chi-square were used to summarize and compare the information between the two groups. Moreover, the Cobb-Douglas (CD) production function was used to estimate the difference in agricultural productivity between male- and female-headed households. Results of the study showed that male-headed households (MHH) own more productive resources such as land, livestock, labor, and other agricultural inputs as compared to female-headed households (FHH). Moreover, the estimate of the CD production function shows that livestock, herbicide use, land size, and male labor were statistically significant for MHH, while livestock, land size, herbicide use and female labor were significant variables for FHH. The crop productivity difference between MHH and FHH was about 68.83% in the study area. However, if FHH had equal access to inputs as MHH, the gross value of output would be higher by 23.58% for FHH. This might suggest that FHH would be more productive than MHH if they had equal access to inputs. Based on the results obtained, the following policy implications can be drawn: giving FHH access to inputs that increase agricultural productivity, such as herbicides, livestock, and male labor; increasing the productivity of land; and introducing technologies that reduce the time and energy of women, especially for inset processing.
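
A Cobb-Douglas function of the kind estimated above is linear in logarithms, so it can be fitted by ordinary least squares on log-transformed inputs; the short sketch below shows this on simulated household data (the variable names, sample size, and coefficients are assumptions, not the survey results).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 75  # e.g., number of male-headed households (assumed)

# Simulated inputs and output: y = A * land^b1 * labor^b2 * livestock^b3 * error
land, labor, livestock = rng.lognormal(0, 0.4, (3, n))
output = 2.0 * land**0.5 * labor**0.3 * livestock**0.2 * rng.lognormal(0, 0.1, n)

# OLS on logs recovers ln(A) and the input elasticities b1, b2, b3
X = sm.add_constant(np.column_stack([np.log(land), np.log(labor), np.log(livestock)]))
model = sm.OLS(np.log(output), X).fit()
print(model.params)  # [ln A, elasticity of land, labor, livestock]
```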

Keywords: gender difference, crop, productivity, efficiency

Procedia PDF Downloads 59
729 Exploring Bidirectional Encoder Representations from the Transformers’ Capabilities to Detect English Preposition Errors

Authors: Dylan Elliott, Katya Pertsova

Abstract:

Preposition errors are some of the most common errors made by L2 speakers. In addition, improving error correction and detection methods remains an open issue in the field of Natural Language Processing (NLP). This research investigates whether the bidirectional encoder representations from transformers model (BERT) has the potential to correct preposition errors accurately enough to be useful in error correction software. This research finds that BERT performs strongly when the scope of its error correction is limited to preposition choice. The researchers used an open-source BERT model and over three hundred thousand edited sentences from Wikipedia, tagged for part of speech, where only a preposition edit had occurred. To test BERT's ability to detect errors, a technique known as multi-level masking was used to generate suggestions based on sentence context for every prepositional environment in the test data. These suggestions were compared with the original errors in the data and their known corrections to evaluate BERT's performance. The suggestions were further analyzed to determine whether BERT more often agreed with the judgements of the Wikipedia editors. Both the untrained and fine-tuned models were compared. Fine-tuning led to a greater rate of error detection, which significantly improved recall but lowered precision due to an increase in false positives, or falsely flagged errors. However, in most cases, these false positives were not errors in preposition usage but merely cases where more than one preposition was possible. Furthermore, when BERT correctly identified an error, the model largely agreed with the Wikipedia editors, suggesting that BERT's ability to detect misused prepositions is better than previously believed. To evaluate to what extent BERT's false positives were grammatical suggestions, we plan to carry out a further crowd-sourcing study to test the grammaticality of BERT's suggested sentence corrections against native speakers' judgments.
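
A minimal sketch of masked prediction over a prepositional slot with an open-source BERT checkpoint is shown below; it uses the transformers fill-mask pipeline rather than the paper's multi-level masking procedure, and the model name, candidate list, and example sentence are assumptions for illustration.

```python
from transformers import pipeline

# Fill-mask pipeline with a generic open-source checkpoint (assumed choice).
fill = pipeline("fill-mask", model="bert-base-uncased")

sentence = "She has lived [MASK] London since 2010."
prepositions = {"in", "at", "on", "to", "for", "by", "with", "from", "of"}

# Keep only candidates that are prepositions, mimicking a restricted-scope check.
for cand in fill(sentence, top_k=20):
    token = cand["token_str"].strip()
    if token in prepositions:
        print(f"{token:>5}  score={cand['score']:.3f}")
```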

Keywords: BERT, grammatical error correction, preposition error detection, prepositions

Procedia PDF Downloads 116
728 Processing and Evaluation of Jute Fiber Reinforced Hybrid Composites

Authors: Mohammad W. Dewan, Jahangir Alam, Khurshida Sharmin

Abstract:

Synthetic fibers (carbon, glass, aramid, etc.) are generally utilized to make composite materials with better mechanical and thermal properties. However, they are expensive and non-biodegradable. In the context of Bangladesh, jute fibers are readily available, inexpensive, and possess good mechanical properties. The favorable properties of natural fibers (i.e., low cost, low density, eco-friendliness) have made them a promising reinforcement in hybrid composites without sacrificing mechanical properties. In this study, jute and E-glass fiber reinforced hybrid composite materials are fabricated utilizing hand lay-up followed by a compression molding technique. A room-temperature-cured two-part epoxy resin is used as the matrix. Approximately 6-7 mm thick composite panels are fabricated utilizing 17 layers of woven glass and jute fibers with different fiber layering sequences: only jute, only glass, glass and jute alternating (g/j/g/j---), and 4 glass - 9 jute - 4 glass (4g-9j-4g). The fabricated composite panels are analyzed through fiber volume calculation, tensile testing, bending testing, and water absorption testing. The hybridization of jute and glass fiber results in better tensile, bending, and water absorption properties than only jute fiber-reinforced composites, but inferior properties compared to only glass fiber reinforced composites. Among the different fiber layering sequences, the 4g-9j-4g layering sequence resulted in the best tensile, bending, and water absorption properties. The effects of chemical treatment of the woven jute fiber and of chopped glass microfiber infusion are also investigated in this study. The chemically treated jute fiber and 2 wt.% chopped glass microfiber infused hybrid composite shows about a 12% improvement in flexural strength compared to the untreated, non-infused hybrid composite panel. However, fiber chemical treatment and micro-filler do not have a significant effect on tensile strength.

Keywords: compression molding, chemical treatment, hybrid composites, mechanical properties

Procedia PDF Downloads 124
727 The Impact of Sign Language on Generating and Maintaining a Mental Image

Authors: Yi-Shiuan Chiu

Abstract:

Deaf signers have been found to have better mental image performance than hearing non-signers. The goal of this study was to investigate the ability to generate, maintain, and manipulate mental images in deaf signers of Taiwanese Sign Language (TSL). In the visual image task, participants first memorized digits formed within a 4 × 5 grid of cells. After a cue consisting of a Chinese digit character was shown above a blank grid, participants had to form an image of the corresponding digit. When shown a probe, a grid containing a red circle, participants had to decide as quickly as possible whether the probe would have been covered by the mental image of the digit. The ISI (interstimulus interval) between cue and probe was manipulated. In Experiment 1, 24 deaf signers and 24 hearing non-signers were asked to perform image generation tasks (ISI: 200, 400 ms) and image maintenance tasks (ISI: 800, 2000 ms). The results showed that deaf signers had an enhanced ability to generate and maintain a mental image. To explore the process of mental imagery, in Experiment 2, 30 deaf signers and 30 hearing non-signers were asked to perform a visual search while maintaining a mental image. Between a digit image cue and a red circle probe, participants carried out a visual search task in which they judged whether a target triangle's apex pointed to the right or left. When there was only one triangle in the search display, the results showed that both deaf signers and hearing non-signers had similar visual search performance, in which search targets at mental image locations were facilitated. However, deaf signers maintained the mental image better and faster than non-signers. In Experiment 3, we increased the number of triangles to 4 to raise the difficulty of the visual search task. The results showed that deaf participants performed more accurately in the visual search and image maintenance tasks. The results suggested that people may use eye movements as a mnemonic strategy to maintain a mental image, and that deaf signers have an enhanced ability to resist the interference of eye movements when there are fewer distractors. In sum, these findings suggest that deaf signers have enhanced mental image processing.

Keywords: deaf signers, image maintenance, mental image, visual search

Procedia PDF Downloads 129
726 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data

Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple

Abstract:

In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, and then the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied for modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
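
The core raster steps described above (a binary persistent-water mask followed by thinning to one-pixel centre lines) can be sketched as follows on a synthetic mask; the scene and array size are illustrative assumptions, and the vectorisation/network-building stage is only indicated in a comment.

```python
import numpy as np
from skimage.morphology import skeletonize

# Synthetic "persistent water" mask: a meandering band across a 200x200 scene
yy, xx = np.mgrid[0:200, 0:200]
water_mask = np.abs(yy - (100 + 30 * np.sin(xx / 25.0))) < 6

# Thinning step: reduce the water body to a 1-pixel-wide centre line
centreline = skeletonize(water_mask)
print("water pixels:", int(water_mask.sum()), "centreline pixels:", int(centreline.sum()))

# The centreline raster could then be vectorised and built into a topologically
# structured geometric network for flow routing (e.g., with GDAL/GeoPandas),
# as described in the abstract above.
```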

Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network

Procedia PDF Downloads 110
725 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems

Authors: Samuel Kaspi, Sitalakshmi Venkatraman

Abstract:

In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and the associated security concerns, many organisations consider the trade-offs and continue to require fast and reliable transaction processing of disk-based database systems as an available choice. For these organisations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes the novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. The promising results of this work show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.

Keywords: in-memory database, disk-based system, hybrid database, concurrency control

Procedia PDF Downloads 387
724 Level Set Based Extraction and Update of Lake Contours Using Multi-Temporal Satellite Images

Authors: Yindi Zhao, Yun Zhang, Silu Xia, Lixin Wu

Abstract:

The contours and areas of water surfaces, especially lakes, often change due to natural disasters and construction activities. Extracting and updating water contours from satellite images using image processing algorithms is an effective way to track such changes. However, producing optimal water surface contours that are close to the true boundaries is still a challenging task. This paper compares the performance of three different level set models, the Chan-Vese (CV) model, the signed pressure force (SPF) model, and the region-scalable fitting (RSF) energy model, for extracting lake contours. Experimental testing indicated that the RSF model, in which a region-scalable fitting energy functional is defined and incorporated into a variational level set formulation, is superior to CV and SPF, and that it can produce desirable contour lines when there are "holes" in the water regions, such as islands in a lake. Therefore, the RSF model is applied to extract lake contours from Landsat satellite images. Four temporal Landsat satellite images, from the years 2000, 2005, 2010, and 2014, are used in our study. All of them were acquired in May, with the same path/row (121/036) covering Xuzhou City, Jiangsu Province, China. Firstly, the near-infrared (NIR) band is selected for water extraction. Image registration is conducted on the NIR bands of the different temporal images for information updating, and linear stretching is also applied in order to distinguish water from other land cover types. Then, for the first temporal image, acquired in 2000, lake contours are extracted via the RSF model initialized with user-defined rectangles. Afterwards, using the lake contours extracted from the previous temporal image as the initial values, lake contours are updated for the current temporal image by means of the RSF model. Meanwhile, the changed and unchanged lakes are also detected. The results show that great changes have taken place in two lakes, i.e., Dalong Lake and Panan Lake, and that RSF can effectively extract and update lake contours using multi-temporal satellite images.
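
A level set workflow of this kind can be sketched with scikit-image, which ships a morphological Chan-Vese implementation; the RSF model favoured above is not available there, so the CV variant is used below purely to illustrate the pipeline (rectangle initialization, iteration, final contour) on a synthetic NIR-like scene with an "island" inside the lake.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic NIR-like scene: dark "lake" (low reflectance) with a bright island inside
img = np.full((120, 120), 0.8)
yy, xx = np.mgrid[0:120, 0:120]
img[(yy - 60) ** 2 + (xx - 60) ** 2 < 40 ** 2] = 0.2   # lake
img[(yy - 60) ** 2 + (xx - 60) ** 2 < 8 ** 2] = 0.8    # island ("hole" in the water)

# User-defined rectangle as the initial level set, then iterate the level set model
init = np.zeros(img.shape, dtype=np.int8)
init[20:100, 20:100] = 1
seg = morphological_chan_vese(img, 150, init_level_set=init, smoothing=1)

# Take the darker of the two segments as water
water = seg if img[seg == 1].mean() < img[seg == 0].mean() else 1 - seg
print("water pixels:", int(water.sum()), "of", water.size)
```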

Keywords: level set model, multi-temporal image, lake contour extraction, contour update

Procedia PDF Downloads 336
723 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging results. Regardless of the chosen survey methods, processing the massive amount of survey data is often a challenge. The currently available software applications are generally based on one-dimensional assumptions and designed for a desktop personal computer. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales; the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. Presently, high-performance, integrated software that enables real-time integration of multi-process geophysical methods is needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 125
722 Revalidation and Harmonization of Existing IFCC Standardized Hepatic, Cardiac, and Thyroid Function Tests by Precision Optimization and External Quality Assurance Programs

Authors: Junaid Mahmood Alam

Abstract:

Revalidating and harmonizing clinical chemistry analytical principles and optimizing methods through quality control programs and assessments is the preeminent means of attaining optimal outcomes within clinical laboratory services. The present study reports the revalidation of our existing IFCC-standardized analytical methods, particularly hepatic and thyroid function tests, by optimization of precision analyses and processing through external and internal quality assessments and regression determination. Parametric components of hepatic (bilirubin, ALT, γGT, ALP), cardiac (LDH, AST, Trop I) and thyroid/pituitary (T3, T4, TSH, FT3, FT4) function tests were used to validate analytical techniques on automated chemistry and immunology analyzers, namely Hitachi 912, Cobas 6000 e601, Cobas c501 and Cobas e411, with UV kinetic and colorimetric dry chemistry principles and electro-chemiluminescence immunoassay (ECLi) techniques. The process of validation and revalidation was completed by evaluating and assessing the precision-analyzed Preci-control data of the various instruments, plotted against each other with regression analysis (R²). Results showed that revalidation and optimization of the respective parameters, accredited through CAP, CLSI and NEQAPP assessments, depicted 99.0% to 99.8% optimization of the methodology and instruments used for the analyses. The regression R² for BilT was 0.996, whereas ALT, ALP, γGT, LDH, AST, Trop I, T3, T4, TSH, FT3, and FT4 exhibited R² values of 0.998, 0.997, 0.993, 0.967, 0.970, 0.980, 0.976, 0.996, 0.997, 0.997, and 0.990, respectively. This confirmed a marked harmonization of analytical methods and instrumentation, thus revalidating the optimized precision standardization as per IFCC-recommended guidelines. It is concluded that the practice of revalidating and harmonizing existing or new services should be followed by all clinical laboratories, especially those associated with tertiary care hospitals. This will ensure the delivery of standardized, proficiency-tested, optimized services for prompt and better patient care, which will guarantee maximum patient confidence.

Keywords: revalidation, standardized, IFCC, CAP, harmonized

Procedia PDF Downloads 232
721 Molecular Topology and TLC Retention Behaviour of s-Triazines: QSRR Study

Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević

Abstract:

Quantitative structure-retention relationship (QSRR) analysis was used to predict the chromatographic behavior of s-triazine derivatives using theoretical descriptors computed from the chemical structure. The fundamental basis of the reported investigation is to relate molecular topological descriptors to the chromatographic behavior of s-triazine derivatives obtained by reversed-phase (RP) thin layer chromatography (TLC) on silica gel impregnated with paraffin oil, with ethanol-water mobile phases (φ = 0.5-0.8; v/v). The retention parameter (RM0) of the 14 investigated s-triazine derivatives was used as the dependent variable, while simple connectivity indices of different orders were used as independent variables. The best QSRR model for predicting the RM0 value was obtained with the simple third-order connectivity index (3χ) in a second-degree polynomial equation. Numerical values of the correlation coefficient (r=0.915), Fisher's value (F=28.34) and root mean square error (RMSE = 0.36) indicate that the model is statistically significant. In order to test the predictive power of the QSRR model, the leave-one-out cross-validation technique was applied. The parameters of the internal cross-validation analysis (r2CV=0.79, r2adj=0.81, PRESS=1.89) reflect the high predictive ability of the generated model and confirm that it can be used to predict the RM0 value. A multivariate classification technique, hierarchical cluster analysis (HCA), was applied in order to group molecules according to their molecular connectivity indices. HCA is a descriptive statistical method and is one of the most frequently used methods for classification, an important area of data processing. The HCA performed on the simple molecular connectivity indices obtained from the 2D structures of the investigated s-triazine compounds resulted in two main clusters in which the compounds were grouped according to the number of atoms in the molecule. This is in agreement with the fact that these descriptors were calculated on the basis of the number of atoms in the molecule of the investigated s-triazine derivatives.
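
The modelling steps named above (a second-degree polynomial of a connectivity index fitted to RM0, checked by leave-one-out cross-validation) can be sketched as follows; the 14 descriptor/retention values are random placeholders rather than the published s-triazine data, and PRESS is used as the cross-validation summary.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(5)
chi3 = np.sort(rng.uniform(2.0, 6.0, 14))                   # 3rd-order connectivity index
rm0 = 0.3 * chi3**2 - 1.5 * chi3 + rng.normal(0, 0.2, 14)   # synthetic RM0 values

# Second-degree polynomial fit and its correlation with the observed values
coeffs = np.polyfit(chi3, rm0, deg=2)
r = np.corrcoef(np.polyval(coeffs, chi3), rm0)[0, 1]
print(f"fitted coefficients: {coeffs}, r = {r:.3f}")

# Leave-one-out cross-validation: refit without each point and accumulate PRESS
press = 0.0
for train, test in LeaveOneOut().split(chi3):
    c = np.polyfit(chi3[train], rm0[train], deg=2)
    press += (np.polyval(c, chi3[test[0]]) - rm0[test[0]]) ** 2
print(f"PRESS (leave-one-out) = {press:.3f}")
```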

Keywords: s-triazines, QSRR, chemometrics, chromatography, molecular descriptors

Procedia PDF Downloads 367
720 An Evaluation of the Influence of Corn Cob Ash on the Strength Parameters of Lateritic Soils

Authors: O. A. Apampa, Y. A. Jimoh

Abstract:

The paper reports an investigation of corn cob ash (CCA) as a chemical stabilizing agent for laterite soils. Corn cob feedstock was obtained from Maya, a rural community in the derived savannah agro-ecological zone of South-Western Nigeria, and burnt to ashes of pozzolanic quality. Reddish brown silty clayey sand, characterized as AASHTO A-2-6(3) lateritic material, was obtained from a borrow pit in Abeokuta and subjected to strength characterization tests according to BS 1377:2000. The soil was subsequently mixed with CCA in varying percentages of 0-7.5% at 1.5% intervals. The influence of CCA on the stabilized soil was determined for the Atterberg limits, compaction characteristics, CBR and unconfined compressive strength. The tests were repeated on a laterite soil-cement mixture in order to establish a basis for comparison. The results show a similarity in the compaction characteristics of soil-cement and soil-CCA. With increasing addition of binder from 1.5% to 7.5%, the maximum dry density progressively declined while the OMC steadily increased. For the CBR, the maximum positive impact was observed at 1.5% CCA addition, with a value of 85% compared to the control value of 65% for cement stabilization, but the CBR declined steadily thereafter with increasing addition of CCA, while that of soil-cement continued to increase with increasing addition of cement beyond 1.5%, though at a relatively slow rate. Similar behavior was observed in the UCS values for the soil-CCA mix, increasing from a control value of 0.4 MN/m2 to 1.0 MN/m2 at 1.5% CCA and declining thereafter, while that for soil-cement continued to increase with increasing cement addition, but at a slower rate. This paper demonstrates that CCA is effective for the chemical stabilization of a typical Nigerian AASHTO A-2-6 lateritic soil at a maximum stabilizer content of 1.5%, and therefore recommends its use as a way of finding further applications for agricultural waste products and achieving environmental sustainability, in line with the ideals of the Millennium Development Goals, given the economic and technical feasibility of processing the cobs from corn.

Keywords: corn cob ash, pozzolan, cement, laterite, stabilizing agent, cation exchange capacity

Procedia PDF Downloads 268
719 Corn Flakes Produced from Different Cultivars of Zea Mays as a Functional Product

Authors: Milenko Košutić, Jelena Filipović, Zvonko Nježić

Abstract:

Extrusion technology is a thermal processing method applied to improve the nutritional, hygienic, and physical-chemical characteristics of raw material. Overall, the extrusion process is an efficient method for the production of a wide range of food products; it combines heat, pressure, and shear to transform raw materials into finished goods with the desired textures, shapes, and nutritional profiles. The quality of extruded products depends markedly on the feed material composition, barrel temperature profile, feed moisture content, screw speed, and other extrusion system parameters. Given consumer expectations for snack foods, a high expansion index and low bulk density, in addition to a crunchy texture and uniform microstructure, are desired. This paper investigates the simultaneous effects of different types of corn (white corn, yellow corn, red corn, and black corn) and different screw speeds (350, 500, 650 rpm) on the physical, technological, and functional properties of flake products. Black corn flour and a screw speed of 350 rpm positively influenced the physical and technological characteristics, mineral composition, and antioxidant properties of the flake products, with the best total score analysis of 0.59. Overall, the combination of Tukey's HSD test and PCA enables a comprehensive analysis of the observed corn products, allowing researchers to identify differences among them. This research aims to analyze the influence of the different types of corn flour (white corn, yellow corn, red corn, and black corn) on the nutritive and sensory properties of the product (quality, texture, and color), as well as the acceptance of the new product by consumers in the territory of Novi Sad. The presented data indicate that the investigated corn flakes made from black corn flour at 350 rpm are a product with good physical-technological and functional properties due to a higher level of antioxidant activity.

Keywords: corn types, flakes product, nutritive quality, acceptability

Procedia PDF Downloads 21
718 Enhancing Health Information Management with Smart Rings

Authors: Bhavishya Ramchandani

Abstract:

A smart ring is a small electronic device worn on the finger. It incorporates mobile technology and has features that make the device simple to use. These gadgets, which resemble conventional rings and are usually made to fit on the finger, are outfitted with features including access management, gesture control, mobile payment processing, and activity tracking. Poor sleep patterns, irregular schedules, and bad eating habits are among the health problems that a lot of people today are facing. Diets lacking fruits, vegetables, legumes, nuts, and whole grains are common. Individuals in India also experience metabolic issues. In the medical field, smart rings could help patients with problems relating to stomach illnesses and the inability to consume meals that are tailored to their bodies' needs. The smart ring tracks bodily functions, including blood sugar and glucose levels, and presents the information instantly. Based on this data, the ring generates insights and a workable plan suited to the body. In addition, we conducted focus groups and individual interviews as part of our core approach and discussed the difficulties participants are having in maintaining the right diet, as well as whether or not the smart ring would be beneficial to them. Everyone was very enthusiastic about and supportive of the concept of using smart rings in healthcare, and they believed that these rings may assist them in maintaining their health and having a well-balanced diet plan. This response came from the primary data, and working on the Emerging Technology Canvas Analysis of smart rings in healthcare has also led to a significant improvement in our understanding of the technology's application in the medical field. It is believed that there will be a growing demand for smart healthcare as people become more conscious of their health. The majority of individuals are expected to adopt this ring within three to four years, when demand for it will have increased, and their daily lives will be significantly impacted by it.

Keywords: smart ring, healthcare, electronic wearable, emerging technology

Procedia PDF Downloads 32
717 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction

Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter

Abstract:

Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows real-time applications, such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) and electromyographic (EMG) artifacts (i.e., jaw clenching, teeth squeezing and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming when applied to low-density EEG and have focused on offline processing or on handling a single type of EEG artifact. A software-only real-time method for correcting multiple types of EEG artifacts in high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time EEG artifact correction of EOG and EMG artifacts. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. Captured EEG signals were imported into MATLAB through the Lab Streaming Layer interface, which allows buffering of EEG data. EMG artifacts were detected by channel variance and adaptive thresholding and corrected by channel interpolation. Real-time independent component analysis (ICA) was applied for correcting EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing and forehead movements, and EOG artifacts (horizontal and vertical eye movements) in high-density EEG while preserving brain neuronal activity information. The average computation time for EOG and EMG artifact correction of 80 s (80,000 data points) of 64-channel data is 300-700 ms, depending on the convergence of ICA and the type and intensity of the artifact. Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real time.
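
A simplified, offline sketch of the two correction mechanisms described above is given below: flagging high-variance channels and replacing them with the mean of the remaining channels (a crude stand-in for spatial interpolation), and removing the independent component most correlated with a surrogate EOG trace. The data are simulated, the pipeline here is not real-time, and all thresholds are assumptions rather than the authors' MATLAB implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_ch, n_samp = 8, 2000
eeg = rng.normal(0, 1, (n_ch, n_samp))
eog = np.cumsum(rng.normal(0, 1, n_samp))          # slow "eye movement" drift
eeg[:3] += 0.8 * eog                               # frontal channels pick up EOG
eeg[5] += rng.normal(0, 20, n_samp)                # one EMG-contaminated channel

# (1) Variance-based detection of bad channels, then crude interpolation
var = eeg.var(axis=1)
bad = var > 5 * np.median(var)
eeg[bad] = eeg[~bad].mean(axis=0)

# (2) ICA-based removal of the ocular component
ica = FastICA(n_components=n_ch, random_state=0)
sources = ica.fit_transform(eeg.T)                 # (samples, components)
corr = [abs(np.corrcoef(s, eog)[0, 1]) for s in sources.T]
sources[:, int(np.argmax(corr))] = 0               # zero out the EOG-like component
cleaned = ica.inverse_transform(sources).T
print("bad channels:", np.where(bad)[0], "max |corr| with EOG:", round(max(corr), 2))
```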

Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA

Procedia PDF Downloads 139
716 Comparison of Virtual Non-Contrast to True Non-Contrast Images Using Dual-Layer Spectral Computed Tomography

Authors: O’Day Luke

Abstract:

Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient, in both the true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e., gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and > 90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phases showed a good correlation in most tissue types. The aortic attenuation was, however, somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.

Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison

Procedia PDF Downloads 112
715 Design of Traffic Counting Android Application with Database Management System and Its Comparative Analysis with Traditional Counting Methods

Authors: Muhammad Nouman, Fahad Tiwana, Muhammad Irfan, Mohsin Tiwana

Abstract:

Traffic congestion has been increasing significantly in major metropolitan areas as a result of increased motorization, urbanization, population growth, and changes in urban density. Traffic congestion compromises the efficiency of transport infrastructure and causes multiple traffic concerns, including but not limited to increased travel time, safety hazards, air pollution, and fuel consumption. Traffic management has become a serious challenge for federal and provincial governments, as well as for exasperated commuters. Effective, flexible, efficient, and user-friendly traffic information/database management systems characterize traffic conditions by making use of traffic counts for storage, processing, and visualization. While emerging data collection technologies continue to proliferate, their accuracy can be verified by comparing the observed data with manual handheld counts. This paper presents the design of a tablet-based manual traffic counting application and a framework for developing a traffic database management system for Pakistan. The database management system comprises three components: a traffic counting Android application, an online database, and visualization of the data using Google Maps. An Oracle relational database was chosen to develop the data structure, whereas Structured Query Language (SQL) was adopted to program the system architecture. The GIS application links the data from the database and projects it onto a dynamic map for visualization of traffic conditions. The traffic counting device and the example of a database application for a real-world problem provide a practical illustration of the uses and advantages of a database management system in real time. In addition, traffic counts collected with the handheld tablet/mobile application can be used for transportation planning and forecasting.
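As a rough illustration of the count-record data structure described above, the following sketch uses SQLite in Python as a lightweight stand-in for the Oracle database; the table and column names, coordinates, and values are hypothetical and not taken from the paper:

```python
import sqlite3

# Minimal stand-in schema for traffic count records (hypothetical names).
conn = sqlite3.connect("traffic_counts.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS traffic_count (
        id           INTEGER PRIMARY KEY,
        station      TEXT NOT NULL,     -- counting location
        latitude     REAL,
        longitude    REAL,              -- coordinates for the map layer
        vehicle_type TEXT NOT NULL,     -- car, bus, truck, motorcycle, ...
        counted_at   TEXT NOT NULL,     -- timestamp recorded by the tablet app
        volume       INTEGER NOT NULL   -- vehicles observed in the interval
    )
""")
conn.execute(
    "INSERT INTO traffic_count "
    "(station, latitude, longitude, vehicle_type, counted_at, volume) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("Station-01", 33.6844, 73.0479, "car", "2018-05-14T08:00:00", 412),
)
conn.commit()

# Aggregated volumes per station could then be queried and pushed to the
# map-based visualization layer.
for row in conn.execute(
    "SELECT station, SUM(volume) FROM traffic_count GROUP BY station"
):
    print(row)
```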

Keywords: manual count, emerging data sources, traffic information quality, traffic surveillance, traffic counting device, android, data visualization, traffic management

Procedia PDF Downloads 169
714 Effect of Steam Explosion of Crop Residues on Chemical Compositions and Efficient Energy Values

Authors: Xin Wu, Yongfeng Zhao, Qingxiang Meng

Abstract:

In China, only a small proportion of crop residues is used as feedstuff because of their poor palatability and low digestibility. Steam explosion is a physical and chemical feed processing technology with great potential to improve the palatability and digestibility of crop residues. To investigate the effect of steam explosion on chemical compositions and efficient energy values, crop residues (rice straw, wheat straw, and maize stover) were processed by steam explosion (steam temperature 120-230°C, steam pressure 2-26 kg/cm², 40 min). Steam-exploded crop residues were regarded as treatment groups and untreated ones as control groups; nutritive compositions were analyzed and effective energy values were calculated for both groups using the prediction models of INRA (1988, 2010). Results indicated that the interaction between treatment and variety has a significant effect on the chemical compositions of crop residues. Steam explosion of crop residues decreased neutral detergent fiber (NDF) significantly (P < 0.01); compared with the untreated material, the NDF content of rice straw, wheat straw, and maize stover was lowered by 21.46%, 32.11%, and 28.34%, respectively. Acid detergent lignin (ADL) of crop residues increased significantly after steam explosion (P < 0.05). The contents of crude protein (CP), ether extract (EE), and ash also increased significantly after steam explosion (P < 0.05). Moreover, the predicted effective energy values of each steam-exploded residue were higher than those of the untreated ones. The digestible energy (DE), metabolizable energy (ME), net energy for maintenance (NEm), and net energy for gain (NEg) of steam-exploded rice straw were 3.06, 2.48, 1.48, and 0.29 MJ/kg, respectively, increases of 46.21%, 46.25%, 49.56%, and 110.92% over the untreated material (P < 0.05). Correspondingly, the energy values of steam-exploded wheat straw were 2.18, 1.76, 1.03, and 0.15 MJ/kg, which were 261.78%, 261.29%, 274.59%, and 1014.69% greater than those of untreated wheat straw (P < 0.05). The corresponding values for steam-exploded maize stover were 5.28, 4.30, 2.67, and 0.82 MJ/kg, 109.58%, 107.71%, 122.57%, and 332.64% higher than those of the raw material (P < 0.05). In conclusion, steam explosion treatment can significantly decrease the NDF content and increase the ADL, CP, EE, and ash contents as well as the effective energy values of crop residues. Under the same conditions, the effect of steam explosion was much more pronounced for wheat straw than for the other two residues.
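The relative gains reported above follow directly from the treated and untreated energy values. As a minimal sketch (Python), the percentage-increase calculation for the digestible energy of rice straw is shown below, with the untreated value back-calculated from the stated 46.21% gain since it is not listed explicitly in the abstract:

```python
def percent_increase(treated, untreated):
    """Relative increase of an effective-energy value after steam explosion."""
    return (treated - untreated) / untreated * 100

# Digestible energy of steam-exploded rice straw (MJ/kg) from the abstract;
# the untreated value is inferred from the reported 46.21% increase.
de_treated = 3.06
de_untreated = de_treated / 1.4621
print(f"DE increase: {percent_increase(de_treated, de_untreated):.2f}%")  # ~46.21
```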

Keywords: chemical compositions, crop residues, efficient energy values, steam explosion

Procedia PDF Downloads 222
713 Effects of Sintering Temperature on Microstructure and Mechanical Properties of Nanostructured Ni-17Cr Alloy

Authors: B. J. Babalola, M. B. Shongwe

Abstract:

The spark plasma sintering technique is a novel processing method that limits grain growth and produces highly dense materials of many kinds: alloys, superalloys, and carbides, to mention a few. However, the initial particle size and the spark plasma sintering parameters are factors that influence the grain growth and mechanical properties of sintered materials. Ni-Cr alloys are regarded as the most promising alloys for aerospace turbine blades because they meet the basic requirements of desirable mechanical strength at high temperatures and good resistance to oxidation. The conventional method of producing this alloy often results in excessive grain growth and porosity levels that are detrimental to its mechanical properties. The effect of sintering temperature on the microstructure and mechanical properties of the nanostructured Ni-17Cr alloy was evaluated. Nickel and chromium powders were milled independently by high-energy ball milling for 30 hours at a milling speed of 400 rev/min and a ball-to-powder ratio (BPR) of 10:1. The milled powders were mixed to a composition of 83 wt% nickel and 17 wt% chromium. The mixture was sintered at temperatures of 800°C, 900°C, 1000°C, 1100°C, and 1200°C. Structural characteristics such as porosity, grain size, fracture surface, and hardness were analyzed by scanning electron microscopy, X-ray diffraction, Archimedes densitometry, and micro-hardness testing. The results indicated an increase in the densification and hardness of the alloy as the temperature increased. The residual porosity of the alloy decreased with increasing sintering temperature, whereas the grain size increased. The study of the mechanical properties, including hardness and densification, shows that optimum properties were obtained at a sintering temperature of 1100°C. The advantages of the high sinterability of the Ni-17Cr alloy using milled powders and the microstructural details are discussed.

Keywords: densification, grain growth, milling, nanostructured materials, sintering temperature

Procedia PDF Downloads 375
712 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products

Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto

Abstract:

An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of the Packsensor Italian-Serbian joint project. The device is based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) with a Wavelength Modulation Spectroscopy (WMS) technique, allowing a non-invasive measurement inside closed containers of fermented dairy products (yogurts and fermented cheese in cups and bottles). The purpose of the instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage over the whole shelf life of the product, in the presence of different microorganisms. The instrument's optical front end has been designed to be integrated into a thermally stabilized incubator. An embedded computer processes the spectral artifacts and stores an arbitrary set of calibration data, allowing a properly calibrated measurement on many samples (cups and bottles) of the different shapes and sizes commonly found in retail distribution. A calibration protocol has been developed to allow calibrating the instrument in the field, even on containers that are notoriously difficult to seal properly. This calibration protocol is described and evaluated against reference measurements obtained with an industry-standard (sampling) carbon dioxide metering technique. Several sets of validation measurements on different containers are reported. Two recordings of carbon dioxide concentration evolution are shown as examples of instrument operation. The first demonstrates the ability to monitor rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide. The second shows the dissolution transient of a non-saturated liquid medium in the presence of a carbon-dioxide-rich headspace atmosphere.
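A minimal sketch of how stored per-container calibration data might be applied to a WMS signal is shown below (Python); a linear calibration is assumed purely for illustration, and the container names and coefficients are hypothetical, as the abstract does not specify the instrument's actual calibration model:

```python
# Hypothetical per-container calibration data: WMS signal -> CO2 concentration.
calibration = {
    "yogurt_cup_125ml":   {"gain": 185.0, "offset": 0.4},
    "cheese_bottle_0_5l": {"gain": 172.3, "offset": 0.9},
}

def co2_concentration(wms_signal: float, container: str) -> float:
    """Convert a normalized WMS signal into CO2 concentration (% v/v),
    using the stored calibration for the given container type."""
    cal = calibration[container]
    return cal["gain"] * wms_signal + cal["offset"]

# Example: one reading from a 125 ml yogurt cup
print(co2_concentration(0.021, "yogurt_cup_125ml"))
```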

Keywords: TDLAS, carbon dioxide, cups, headspace, measurement

Procedia PDF Downloads 290
711 Primary School Students’ Modeling Processes: Crime Problem

Authors: Neslihan Sahin Celik, Ali Eraslan

Abstract:

As a result of the PISA (Programme for International Student Assessment) survey, which tests how well students can apply the knowledge and skills they have learned at school to real-life challenges, the new and redesigned mathematics education programs in many countries emphasize the need for students to face complex and multifaceted problem situations and to gain experience with them, allowing them to develop new skills and mathematical thinking that prepare them for life after school. At this point, mathematical models and modeling approaches can be utilized in the analysis of complex problems that represent real-life situations in which students can actively participate. In particular, model-eliciting activities, which create situations that involve mathematical modeling and allow students to construct their own solutions to problems, should be used from the primary school years onward, so that students face such complex, real-life situations from early childhood. A qualitative study was conducted in a university foundation primary school in the city center of a large province during the 2013-2014 academic year. The participants were fourth-grade students. After a four-week preliminary study applied to a fourth-grade classroom, three students were selected for the focus group using a criterion sampling technique. The focus group was videotaped as the students worked on the Crime Problem. The conversation of the group was transcribed, examined together with the students' written work, and then analyzed through the lens of Blum and Ferri's modeling process cycle. The results showed that primary fourth-grade students can successfully work with a model-eliciting problem, although they encounter some difficulties in the modeling process. In particular, they developed new ideas based on different assumptions, identified the patterns among variables, and established a variety of models. On the other hand, they had trouble staying focused on the problem and occasionally had breaks in the process.

Keywords: primary school, modeling, mathematical modeling, crime problem

Procedia PDF Downloads 375