Search results for: binary images processing.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2732


92 Numerical Study of Bubbling Fluidized Beds Operating at Sub-atmospheric Conditions

Authors: Lanka Dinushke Weerasiri, Subrat Das, Daniel Fabijanic, William Yang

Abstract:

Fluidization at vacuum pressure is a topic of growing research interest. Several industrial applications (such as drying, extractive metallurgy, and chemical vapor deposition (CVD)) can potentially take advantage of vacuum pressure fluidization. In particular, the fine chemical industry requires processing under safe conditions for thermolabile substances, and reduced-pressure fluidized beds offer an alternative. Fluidized beds under vacuum conditions provide optimal conditions for the treatment of granular materials, where the reduced gas pressure maintains an operational environment outside of flammability conditions. Fluidization at low pressure is markedly different from the usual gas flow patterns of atmospheric fluidization. The different flow regimes can be characterized by the dimensionless Knudsen number. Nevertheless, to the authors' best knowledge, the hydrodynamics of bubbling vacuum fluidized beds has not been investigated. In this work, the two-fluid numerical method was used to determine the impact of reduced pressure on the fundamental properties of a fluidized bed. The slip flow model, implemented through Ansys Fluent User Defined Functions (UDFs), was used to determine the interphase momentum exchange coefficient. A wide range of operating pressures was investigated (1.01, 0.5, 0.25, 0.1 and 0.03 bar). The gas was supplied by a uniform inlet at 1.5 Umf and 2 Umf. The predicted minimum fluidization velocity (Umf) shows excellent agreement with the experimental data. The results show that the operating pressure has a notable impact on the bed properties and hydrodynamics. They also show that the existing Gorosko correlation for predicting bed expansion is not applicable under reduced-pressure conditions.

Keywords: Computational fluid dynamics, fluidized bed, gas-solid flow, vacuum pressure, slip flow, minimum fluidization velocity.
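
As an illustrative aside (not code from the paper), the Knudsen-number regime classification mentioned in this abstract can be sketched in a few lines of Python. The kinetic-theory mean free path lambda = k_B*T / (sqrt(2)*pi*d_m^2*P) is divided by a characteristic length, here taken as the particle diameter; the molecule diameter and the example operating point below are assumed values.

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def knudsen_number(pressure_pa, temperature_k, particle_diameter_m,
                   molecule_diameter_m=3.7e-10):  # ~air, an assumed value
    # Kn = lambda / L with the kinetic-theory mean free path
    # lambda = k_B*T / (sqrt(2) * pi * d_m^2 * P)
    mean_free_path = K_B * temperature_k / (
        math.sqrt(2.0) * math.pi * molecule_diameter_m ** 2 * pressure_pa)
    return mean_free_path / particle_diameter_m

def flow_regime(kn):
    # Commonly cited regime boundaries
    if kn < 1e-3:
        return "continuum"
    if kn < 0.1:
        return "slip flow"
    if kn < 10.0:
        return "transitional"
    return "free molecular"

# Example: a 100 micron particle at 0.03 bar (3 kPa) and 300 K
kn = knudsen_number(3.0e3, 300.0, 100e-6)
print(f"Kn = {kn:.2e} -> {flow_regime(kn)}")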

91 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body

Authors: J. K. Adedeji, O. H. Olowomofe, C. O. Alo, S. T. Ijatuyi

Abstract:

The issue of high blood sugar level, which can develop into diabetes mellitus, is becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people makes this disease a silent killer. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, such as a wrist watch, to alert those living with high blood glucose to the danger ahead of time, as well as to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage including a bias, 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in Java and run for 1000 epochs to reduce the errors to the barest minimum. The internal circuitry of the device comprises the compatible hardware required to match the nature of each of the input neurons. Light emitting diodes (LEDs) of red, green, and yellow colors are used as the output of the neural network to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that the neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than the conventional method of carrying out blood tests.

Keywords: Accu-Chek, diabetes, neural network, pattern recognition.

90 Effects of Reclaimed Agro-Industrial Wastewater for Long-Term Irrigation of Herbaceous Crops on Soil Chemical Properties

Authors: E. Tarantino, G. Disciglio, G. Gatta, L. Frabboni, A. Libutti, A. Tarantino

Abstract:

Worldwide, about two-thirds of industrial and domestic wastewater effluent is discharged without treatment, which can cause contamination and eutrophication of the water. For Mediterranean countries in particular, irrigation with treated wastewater would mitigate water stress and support the agricultural sector. Changing global weather patterns will make the situation worse, due to increased susceptibility to drought, which can cause major environmental, social, and economic problems. The study was carried out in the open field in an intensive agricultural area of the Apulian region in Southern Italy, where freshwater resources are often scarce. As well as providing a water resource, irrigation with treated wastewater represents a significant source of nutrients for soil–plant systems. However, the use of wastewater might have further effects on the soil. This study thus investigated the long-term impact of irrigation with reclaimed agro-industrial wastewater on the chemical characteristics of the soil. Two crops (processing tomato and broccoli) were cultivated in succession in Stornarella (Foggia) over four years, from 2012 to 2016, using two types of irrigation water: groundwater and tertiary-treated agro-industrial wastewater that had undergone an activated sludge process, sedimentation filtration, and UV radiation. Chemical analyses were performed on the irrigation waters and soil samples. The treated wastewater was characterised by high levels of several chemical parameters, including TSS, EC, COD, BOD5, Na+, Ca2+, Mg2+, NH4-N, PO4-P, K+, SAR and CaCO3, as compared with the groundwater. However, despite these higher levels, the mean soil content of several chemical parameters did not show relevant differences between the two irrigation treatments.

Keywords: Agro-industrial wastewater, broccoli, long-term re-use, tomato.

89 An Integrated Experimental and Numerical Approach to Develop an Electronic Instrument to Study Apple Bruise Damage

Authors: Paula Pascoal-Faria, Rúben Pereira, Elodie Pinto, Miguel Belbut, Ana Rosa, Inês Sousa, Nuno Alves

Abstract:

Apple bruise damage from harvesting, handling, transporting and sorting is considered to be the major source of reduced fruit quality, resulting in loss of profits for the entire fruit industry. The three factors which can physically cause fruit bruising are vibration, compression load and impact, the latter being the most common source of bruise damage. Therefore, prediction of the level of damage, stress distribution and deformation of the fruits under external force has become a very important challenge. In this study, experimental and numerical methods were used to better understand the impact caused when an apple is dropped from different heights onto a plastic surface and a conveyor belt. Results showed that the extent of fruit damage is significantly higher for the plastic surface and depends on the drop height. In order to support the development of a biomimetic electronic device for the determination of fruit damage, the mechanical properties of the apple fruit were determined using mechanical tests. Preliminary results showed different values of the Young's modulus according to the zone of the apple tested. Along with the mechanical characterization of the apple fruit, the development of the first two prototypes is discussed, and the integration of the results obtained to construct the finite element model of the apple is presented. This work will help to significantly reduce the bruise damage of fruits and vegetables during processing, which will allow the introduction of new export destinations and consequently an increase in the economic profits of this sector.

Keywords: Apple, fruit damage, impact during crop and post-crop, mechanical characterization of the apple, numerical evaluation of fruit bruise damage, electronic device.

88 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments

Authors: Tahani Aljohani, Jialin Yu, Alexandra I. Cristea

Abstract:

The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive, and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners' demographic characteristics by proposing an approach using linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on the FutureLearn MOOC platform. Additionally, we tackle here the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, considering sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other cutting-edge models which take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset, on which they perform best, and further used a public sentiment analysis dataset to cross-examine the models' results.

Keywords: Deep learning, data mining, gender prediction, MOOCs.
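
As a hedged illustration of the baseline the authors compare against, the sketch below builds a plain LSTM text classifier in Keras; the vocabulary size, sequence length and dummy data are assumptions, not details from the paper.

import numpy as np
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 20000, 100  # assumed vocabulary size and padded comment length

model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, 128),
    layers.LSTM(64),                         # sentences treated as plain sequences
    layers.Dense(1, activation="sigmoid"),   # binary gender label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy stand-in data; real inputs would be tokenized learner comments.
x = np.random.randint(0, VOCAB, size=(32, MAXLEN))
y = np.random.randint(0, 2, size=(32,))
model.fit(x, y, epochs=1, verbose=0)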

87 The Effect of Porous Alkali Activated Material Composition on Buffer Capacity in Bioreactors

Authors: G. Bumanis, D. Bajare

Abstract:

With demand for primary energy continuously growing, the search for renewable and efficient energy sources has been high on society's agenda. One of the most promising energy sources is biogas technology. Residues coming from the dairy industry and milk processing could be used in biogas production; however, low efficiency and high cost impede wide application of such technology. One of the main problems is the management and conversion of organic residues through the anaerobic digestion process, which is characterized by an acidic environment due to the low whey pH (<6), so that an additional pH control system is required. The low buffering capacity of whey is responsible for the rapid acidification in biological treatments; therefore, alkali activated material is a promising solution to this problem. Alkali activated material is formed using SiO2- and Al2O3-rich materials under a highly alkaline solution. After the material structure forming process is completed, free alkalis remain in the structure of the material, which are available for leaching and could provide buffer capacity potential. In this research, a porous alkali activated material was investigated. The highly porous material structure ensures gradual leaching of alkalis over time, which is important in the biogas digestion process. The mixture composition and the SiO2/Na2O and SiO2/Al2O3 ratios were studied to test the buffer capacity potential of the alkali activated material. This research has proved that by changing the molar ratio of components it is possible to obtain a material with different buffer capacity, and this novel material was seen to have considerable potential for use in processes where buffer capacity and pH control are vitally important.

Keywords: Alkaline material, buffer capacity, biogas production.

86 Analysis of Linked in Series Servers with Blocking, Priority Feedback Service and Threshold Policy

Authors: Walenty Oniszczuk

Abstract:

The use of buffer thresholds, blocking and adequate service strategies are well-known techniques for traffic congestion control in computer networks. This motivates the study of series queues with blocking, feedback (service under Head of Line (HoL) priority discipline) and finite capacity buffers with thresholds. In this paper, the external traffic is modelled using the Poisson process and the service times have been modelled using the exponential distribution. We consider a three-station network with two finite buffers, for which a set of thresholds (tm1 and tm2) is defined. This computer network behaves as follows. A task which finishes its service at station B is sent back to station A for re-processing with probability o. When the number of tasks in the second buffer exceeds the threshold tm2 and the number of tasks in the first buffer is less than tm1, the fed-back task is served under the HoL priority discipline. In the opposite case, a "no two priority services in succession" procedure (preventing a possible overflow in the first buffer) is applied to fed-back tasks. Using an open Markovian queuing schema with blocking, priority feedback service and thresholds, a closed-form, cost-effective analytical solution is obtained. The model of servers linked in series is very accurate. It is derived directly from a two-dimensional state graph and a set of steady-state equations, followed by calculations of the main measures of effectiveness. Consequently, efficient expressions with low computational cost are determined. Based on numerical experiments and the collected results, we conclude that the proposed model with blocking, feedback and thresholds can provide accurate performance estimates of networks linked in series.

Keywords: Blocking, Congestion control, Feedback, Markov chains, Performance evaluation, Threshold-based networks.

85 Achieving Sustainable Agriculture with Treated Municipal Wastewater

Authors: Reshu Yadav, Himanshu Joshi, S. K. Tripathi

Abstract:

A pilot field study was conducted at the Jagjeetpur municipal sewage treatment plant situated in the Haridwar town of Uttarakhand state, India. The objectives of the present study were to study the effect of treated wastewater on the production of various paddy varieties (Sharbati, PR-114, PB-1, Menaka, PB-1121 and PB-1509) and the emission of greenhouse gases (CO2, CH4 and N2O), as compared to the same varieties grown in control plots irrigated with fresh water. Of late, the concept of water footprint assessment has emerged, which enumerates the various types of water footprints of an agricultural entity from its production to processing stages. Paddy, the most water-demanding staple crop of Uttarakhand state, displayed a high green water footprint value of 2474.12 m3/ton. Most of the wastewater-irrigated varieties displayed up to a 6% increase in production, except Menaka and PB-1121, which showed a reduction in production (6% and 3%, respectively) due to pest and insect infestation. The treated wastewater was observed to be rich in nitrogen (55.94 mg/ml nitrate), phosphorus (54.24 mg/ml) and potassium (9.78 mg/ml), thus rejuvenating the soil quality and not requiring any external nutritional supplements. The percentage increases in greenhouse gas emissions under treated municipal wastewater irrigation, as compared to the control plots, were 0.4%-8.6% (CH4), 1.1%-9.2% (CO2), and 0.07%-5.8% (N2O). The variety Sharbati displayed maximum production (5.5 ton/ha) and emerged as the most resistant variety against pests and insects. The emission values of CH4, CO2 and N2O were 729.31 mg/m2/d, 322.10 mg/m2/d and 400.21 mg/m2/d under water-stagnant conditions. This study highlighted the successful possibility of reusing wastewater for non-potable purposes, offering the potential for exploiting a resource that can replace or reduce the existing use of fresh water sources in the agriculture sector.

Keywords: Greenhouse gases, nutrients, water footprint, wastewater irrigation.
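
As a side note (not from the paper), green water footprint accounting typically divides the green evapotranspiration over the growing season by the crop yield; the sketch below shows the arithmetic with assumed inputs chosen only to reproduce the reported order of magnitude.

def green_water_footprint(et_green_mm, yield_ton_per_ha):
    # WF_green in m^3/ton: 1 mm of water over 1 ha equals 10 m^3,
    # so WF = 10 * ET_green / Y (standard water-footprint accounting).
    return 10.0 * et_green_mm / yield_ton_per_ha

# Illustrative only: values chosen to land near the order of magnitude
# reported for paddy (2474.12 m^3/ton), not taken from the paper.
print(green_water_footprint(1200.0, 5.5))  # ~2182 m^3/ton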

84 Improving Air Temperature Prediction with Artificial Neural Networks

Authors: Brian A. Smith, Ronald W. McClendon, Gerrit Hoogenboom

Abstract:

The mitigation of crop loss due to damaging freezes requires accurate air temperature prediction models. Previous work established that the Ward-style artificial neural network (ANN) is a suitable tool for developing such models. The current research focused on developing ANN models with reduced average prediction error by increasing the number of distinct observations used in training, adding additional input terms that describe the date of an observation, increasing the duration of prior weather data included in each observation, and reexamining the number of hidden nodes used in the network. Models were created to predict air temperature at hourly intervals from one to 12 hours ahead. Each ANN model, consisting of a network architecture and set of associated parameters, was evaluated by instantiating and training 30 networks and calculating the mean absolute error (MAE) of the resulting networks for some set of input patterns. The inclusion of seasonal input terms, up to 24 hours of prior weather information, and a larger number of processing nodes were some of the improvements that reduced average prediction error compared to previous research across all horizons. For example, the four-hour MAE of 1.40°C was 0.20°C, or 12.5%, less than the previous model. Prediction MAEs eight and 12 hours ahead improved by 0.17°C and 0.16°C, respectively, improvements of 7.4% and 5.9% over the existing model at these horizons. Networks instantiating the same model but with different initial random weights often led to different prediction errors. These results strongly suggest that ANN model developers should consider instantiating and training multiple networks with different initial weights to establish preferred model parameters.

Keywords: Decision support systems, frost protection, fruit, time-series prediction, weather modeling.
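
A minimal sketch of the paper's evaluation idea: instantiate and train many networks that differ only in their initial random weights, then report the spread of MAE. The data, library (scikit-learn) and network size are stand-ins, not the authors' Ward-style ANN.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Stand-in data; the paper used prior weather observations as inputs.
X, y = make_regression(n_samples=500, n_features=24, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

maes = []
for seed in range(30):  # 30 instantiations, as in the paper
    net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=2000, random_state=seed)
    net.fit(X_tr, y_tr)
    maes.append(mean_absolute_error(y_te, net.predict(X_te)))

# Different initial weights often lead to different prediction errors.
print(f"MAE mean={np.mean(maes):.2f}, std={np.std(maes):.2f}")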

83 Assets Integrity Management in Oil and Gas Production Facilities Through Corrosion Mitigation and Inspection Strategy: A Case Study of Sarir Oilfield

Authors: Iftikhar Ahmad, Youssef Elkezza

Abstract:

The Sarir oilfield, located in North Africa, comprises oil and gas production facilities. The assets of the Sarir oilfield can be divided into the following five categories: (i) well bores and wellheads; (ii) vessels such as separators, desalters, and gas processing facilities; (iii) pipelines, including all flow lines, trunk lines, and shipping lines; (iv) storage tanks; and (v) other assets such as turbines and compressors. The nature of the petroleum industry recognizes the potential human, environmental and financial consequences that can result from failing to maintain the integrity of wellheads, vessels, tanks, pipelines, and other assets. The importance of effective asset integrity management increases as the industry infrastructure continues to age. The primary objective of asset integrity management (AIM) is to maintain assets in a fit-for-service condition while extending their remaining life in the most reliable, safe, and cost-effective manner. Corrosion management is one of the important aspects of successful asset integrity management. It covers corrosion mitigation, monitoring, inspection, and risk evaluation. External corrosion on pipelines, well bores, buried assets, and tank bottoms is controlled with a combination of coatings and cathodic protection, while the external corrosion on surface equipment, wellheads, and storage tanks is controlled by coatings. Periodic cleaning of the pipelines by pigging helps prevent internal corrosion. Further, internal corrosion of pipelines is prevented by chemical treatment and controlled operations. This paper describes the integrity management system used in the Sarir oilfield for its oil and gas production facilities, based on standard practices of corrosion mitigation and inspection.

Keywords: Assets integrity management, corrosion prevention in oilfield assets, corrosion management in oilfield, corrosion prevention and inspection activities.

82 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst CNNs are suited for the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs, as stated above, to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representation of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations for each sentence, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short Term Memory (Bi-LSTM) to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism, over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.

Keywords: Deep neural models, natural language inference, recognizing textual entailment, sentence-to-sentence relation.
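
The sketch below is a simplified Keras rendition of the described architecture (a convolutional phrase encoder feeding a Bi-LSTM, with relation vectors formed at both levels); the sizes are assumed, and the paper's attention-style use of the relation vectors is approximated here by pooled element-wise products.

from tensorflow.keras import layers, Model

VOCAB, MAXLEN, EMB = 20000, 40, 100  # hypothetical sizes

def encoder():
    # Conv1D extracts phrase-level (n-gram) features; Bi-LSTM encodes the sentence.
    inp = layers.Input(shape=(MAXLEN,))
    x = layers.Embedding(VOCAB, EMB)(inp)
    phrases = layers.Conv1D(128, 3, padding="same", activation="relu")(x)
    sent = layers.Bidirectional(layers.LSTM(128))(phrases)
    return inp, phrases, sent

p_in, p_phr, p_sent = encoder()  # premise
h_in, h_phr, h_sent = encoder()  # hypothesis

# Relation vectors: element-wise interactions between the two sentences'
# phrase-level (pooled) and sentence-level representations.
p_pool = layers.GlobalMaxPooling1D()(p_phr)
h_pool = layers.GlobalMaxPooling1D()(h_phr)
rel1 = layers.Multiply()([p_pool, h_pool])
rel2 = layers.Multiply()([p_sent, h_sent])

merged = layers.Concatenate()([p_sent, h_sent, rel1, rel2])
out = layers.Dense(3, activation="softmax")(merged)  # entail / neutral / contradict
model = Model([p_in, h_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()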

81 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan

Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid

Abstract:

In geophysical exploration surveys, the quality of acquired data holds significant importance before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data of the Fort Abbas area, Cholistan Desert, Pakistan were taken as a test case in order to assess their quality on a statistical basis, using the normalized root mean square error (NRMSE), Cronbach's alpha test (α) and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is known to be flat, tectonically little affected and rich in oil and gas reserves. However, subsurface 3D modeling and contouring using the acquired database revealed high degrees of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of hypothesis testing also demonstrated the bias and erratic nature of the acquired database. The low estimated value of alpha (α) in Cronbach's alpha test confirmed the poor reliability of the acquired database. Such a low-quality database needs excessive static correction or, in some cases, reacquisition of the data, which is most of the time not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further be utilized as a guideline for establishing database quality assessment models, to make much more informed decisions in the hydrocarbon exploration field.

Keywords: Data quality, null hypothesis, seismic lines, seismic reflection survey.
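
For readers unfamiliar with the two main statistics used, a small self-contained Python sketch of NRMSE (here RMSE normalized by the observed range; other normalizations exist) and Cronbach's alpha follows; the random data are placeholders.

import numpy as np

def nrmse(observed, predicted):
    # Root mean square error normalized by the observed range.
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    # items: 2-D array, rows = observations, columns = items/traces.
    # alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
obs = rng.normal(0, 1, 100)
print(nrmse(obs, obs + rng.normal(0, 0.2, 100)))
print(cronbach_alpha(rng.normal(0, 1, (50, 6))))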

80 Sperm Whale Signal Analysis: Comparison using the Auto Regressive model and the Daubechies 15 Wavelets Transform

Authors: Olivier Adam, Maciej Lopatka, Christophe Laplanche, Jean-François Motsch

Abstract:

This article presents results from using a parametric approach and the Wavelet Transform to analyse signals emitted by sperm whales. The extraction of the intrinsic characteristics of these unique signals emitted by marine mammals is still, at present, a difficult exercise for various reasons: firstly, the signals are non-stationary, and secondly, they are obstructed by interfering background noise. In this article, we compare the advantages and disadvantages of both methods: Auto Regressive models and the Wavelet Transform. These approaches serve as an alternative to the commonly used estimators based on the Fourier Transform, for which the hypotheses necessary for its application are, in certain cases, not sufficiently satisfied. These modern approaches provide effective results, particularly for the periodic tracking of the signal's characteristics, and notably when the signal-to-noise ratio negatively affects signal tracking. Our objectives are twofold. Our first goal is to identify the animal through its acoustic signature. This includes recognition of the marine mammal species and, ultimately, of the individual animal (within the species). The second is much more ambitious and directly involves the intervention of cetologists to study the sounds emitted by marine mammals in an effort to characterize their behaviour. We are working on an approach based on recordings of marine mammal signals, with the findings from these data resulting from the Wavelet Transform. This article will explore the reasons for using this approach. In addition, thanks to the use of new processors, these once computationally heavy algorithms can now be integrated into a real-time system.

Keywords: Autoregressive model, Daubechies Wavelet, Fourier Transform, marine mammals, signal processing, spectrogram, sperm whale, Wavelet Transform.
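
A hedged sketch of the two tools being compared, an autoregressive fit (statsmodels) and a Daubechies-15 multilevel decomposition (PyWavelets), applied to a synthetic stand-in for a recorded click; none of the numbers come from the article.

import numpy as np
import pywt
from statsmodels.tsa.ar_model import AutoReg

# Synthetic stand-in for a hydrophone recording: a decaying click in noise.
fs = 48_000
t = np.arange(2048) / fs
signal = np.exp(-t * 2000) * np.sin(2 * np.pi * 8000 * t)
signal += 0.1 * np.random.default_rng(0).normal(size=t.size)

# Parametric approach: fit an autoregressive model to the samples.
ar_fit = AutoReg(signal, lags=20).fit()
print("first AR coefficients:", ar_fit.params[1:6])

# Wavelet approach: multilevel decomposition with the Daubechies-15 wavelet.
coeffs = pywt.wavedec(signal, "db15", level=5)
for level, c in enumerate(coeffs):
    print(f"band {level}: energy {np.sum(c ** 2):.4f}")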

79 Radon-222 Concentration and Potential Risk to Workers of Al-Jalamid Phosphate Mines, North Province, Saudi Arabia

Authors: El-Said I. Shabana, Mohammad S. Tayeb, Maher M. T. Qutub, Abdulraheem A. Kinsara

Abstract:

Usually, phosphate deposits contain 238U and 232Th in addition to their decay products. Due to their different pathways in the environment, the 238U/232Th activity concentration ratio is usually found to be greater than unity in phosphate sediments. The presence of these radionuclides creates a potential need to control the exposure of workers in the mining and processing activities of phosphate minerals, in accordance with IAEA safety standards. The greatest dose to workers comes from exposure to radon, especially 222Rn from the uranium series, and this has to be controlled. In this regard, radon (222Rn) was measured in the atmosphere (indoor and outdoor) of the Al-Jalamid phosphate-mines working area using a portable radon-measurement instrument (RAD7), for radiation protection purposes. Radon was measured at 61 sites inside the open phosphate mines, in the phosphate upgrading facility (offices and rooms of the workers, and some open-air sites) and in the dwellings of the workers' residence village, which lies about 3 km from the mines working area. The obtained results indicated that the average indoor radon concentration was about 48.4 Bq/m3. Inside the upgrading facility, the average outdoor concentrations were 10.8 and 9.7 Bq/m3 in the concentrate piles and crushing areas, respectively. It was 12.3 Bq/m3 in the atmosphere of the open mines. These values are comparable with the global average values. Based on the average values, the annual effective dose due to radon inhalation was calculated and risk estimates were made. The average annual effective dose to workers due to radon inhalation was estimated at 1.32 mSv. The potential excess risk of lung cancer mortality that could be attributed to radon, when considering lifetime exposure, was estimated at 53.0×10^-4. The results are discussed in detail.

Keywords: Dosimetry, environmental monitoring, phosphate deposits, radiation protection, radon-222.
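
As an illustration of the dose arithmetic (with UNSCEAR-style parameter values assumed here, not necessarily those of the paper): the annual effective dose is the product of the radon concentration, an equilibrium factor, the annual occupancy and a dose conversion factor.

def radon_annual_dose_msv(radon_bq_m3, occupancy_h=2000.0,
                          equilibrium_factor=0.4, dcf_nsv=9.0):
    # Annual effective dose E = C * F * O * DCF, with DCF ~ 9 nSv per
    # (Bq h m^-3) of equilibrium-equivalent concentration (an assumed,
    # UNSCEAR-style convention; the paper may use different parameters).
    dose_nsv = radon_bq_m3 * equilibrium_factor * occupancy_h * dcf_nsv
    return dose_nsv * 1e-6  # nSv -> mSv

# Indoor average reported in the abstract (48.4 Bq/m3), assumed 2000 work-hours.
print(f"{radon_annual_dose_msv(48.4):.2f} mSv/y")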

78 Analysis of Noise Level Effects on Signal-Averaged Electrocardiograms

Authors: Chun-Cheng Lin

Abstract:

Noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true starting and end points of the QRS complex can be masked by residual noise and are sensitive to the noise level. Several studies and commercial machines have used a fixed number of heart beats (typically between 200 and 600 beats) or a predefined noise level (typically between 0.3 and 1.0 μV) in each X, Y and Z lead to perform SAECG analysis. However, different criteria or methods used to perform SAECG cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA and ACC Task Force Consensus Document for the use of SAECG, the determination of onset and offset is related closely to the mean and standard deviation of the noise sample. Hence, this study performs SAECG using consistent root-mean-square (RMS) noise levels among study subjects and analyzes the effects of noise level on the SAECG. It also evaluates the differences between normal subjects and chronic renal failure (CRF) patients in the time-domain SAECG parameters. The study subjects comprised 50 normal Taiwanese subjects and 20 CRF patients. During signal-averaged processing, different RMS noise levels were adjusted to evaluate their effects on three time-domain parameters: (1) filtered total QRS duration (fQRSD), (2) RMS voltage of the last 40 ms of the QRS (RMS40), and (3) duration of the low-amplitude signals below 40 μV (LAS40). The study results demonstrated that reducing the RMS noise level can increase fQRSD and LAS40 and decrease RMS40, and can further increase the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal due to the reduction of the RMS noise level. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels in order to reduce noise level effects.

Keywords: Signal-averaged electrocardiogram, ventricular late potentials, chronic renal failure, noise level effects.
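
A minimal Python sketch of two of the time-domain parameters defined above, RMS40 and LAS40, computed on a toy filtered-QRS vector; the sampling rate and amplitudes are assumed.

import numpy as np

def rms(x):
    return float(np.sqrt(np.mean(np.square(x))))

def rms40(filtered_qrs, fs):
    # RMS voltage of the terminal 40 ms of the filtered QRS complex.
    n = int(round(0.040 * fs))
    return rms(filtered_qrs[-n:])

def las40(filtered_qrs, fs, threshold_uv=40.0):
    # Duration (ms) of the terminal low-amplitude signal below 40 uV.
    above = np.flatnonzero(np.abs(filtered_qrs) >= threshold_uv)
    last_above = above[-1] if above.size else -1
    return (len(filtered_qrs) - 1 - last_above) / fs * 1000.0

# Toy vector-magnitude QRS sampled at 1 kHz, amplitudes in microvolts.
fs = 1000
qrs = np.concatenate([np.full(80, 500.0), np.linspace(500, 5, 60)])
print(rms40(qrs, fs), las40(qrs, fs))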

77 Application of Ultrasonic Assisted Machining Technique for Glass-Ceramic Milling

Authors: S. Y. Lin, C. H. Kuan, C. H. She, W. T. Wang

Abstract:

In this study, the ultrasonic assisted machining (UAM) technique is applied in side-surface milling experiments on a glass-ceramic workpiece material. A tungsten carbide cutting tool with diamond coating is used in conjunction with two kinds of cooling/lubrication mediums: water-soluble (WS) cutting fluid and minimum quantity lubricant (MQL). Full factorial process parameter combinations are planned for the milling experiments to investigate the effect of process parameters on cutting performance, searching for the better process parameter combination for which the edge-indentation and the surface roughness are acceptable. In the machining experiments, an ultrasonic oscillator was used to excite the cutting tool along the radial direction, producing a very small amplitude of vibration at a frequency of 20 kHz to assist the machining process. After processing, a toolmaker's microscope was used to examine the side-surface morphology, edge-indentation and cutting tool wear under different combinations of cutting parameters, and the experimental results were analyzed and discussed. The results show that the main parameters leading to edge-indentation of glass-ceramic are cutting depth and feed rate; in order to reduce edge-indentation, a lower cutting depth and feed rate are needed. Water-soluble cutting fluid provides a better cooling effect in the primary cutting area; it may effectively reduce the edge-indentation and improve the surface morphology of the glass-ceramic. The use of the ultrasonic assisted technique can effectively enhance the surface finish and reduce cutting tool wear and edge-indentation.

Keywords: Glass-ceramic, ultrasonic assisted machining, cutting performance, edge-indentation.

76 Modified Scaling-Free CORDIC Based Pipelined Parallel MDC FFT and IFFT Architecture for Radix 2^2 Algorithm

Authors: C. Paramasivam, K. B. Jayanthi

Abstract:

An innovative approach to developing modified scaling-free CORDIC based two-parallel pipelined Multipath Delay Commutator (MDC) FFT and IFFT architectures for the radix-2^2 FFT algorithm is presented. Multipliers and adders are the most important data paths in FFT and IFFT architectures. Multipliers occupy a large area and consume more power. In order to optimize the area and power overhead, a modified scaling-free CORDIC based complex multiplier is utilized in the proposed design. In general, twiddle factor values are stored in a RAM block. In the proposed work, a modified scaling-free CORDIC based twiddle factor generator unit is used to generate the twiddle factors, and efficient switching units are used. In addition, four-point FFT operations are performed without complex multiplication, which helps to reduce area and power in the last two stages of the pipelined architectures. The design proposed in this paper is based on the multipath delay commutator method. The proposed design can be extended to any radix-2^n based FFT/IFFT algorithm to improve the throughput. The work is synthesized using Synopsys Design Compiler with a TSMC 90-nm library. The proposed method proves to be better than the reference design in terms of area, throughput and power consumption. A comparative analysis of the proposed design on a Xilinx FPGA platform is also discussed in the paper.

Keywords: Coordinate Rotation Digital Computer (CORDIC), complex multiplier, Fast Fourier Transform (FFT), Inverse Fast Fourier Transform (IFFT), Multipath Delay Commutator (MDC), modified scaling-free CORDIC, pipelining, parallel processing, radix-2^2.
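
For context, a plain CORDIC rotation in Python is sketched below: it decomposes an angle into arctan(2^-i) micro-rotations and pre-compensates the accumulated gain K so the output lands on the unit circle. This is the classic algorithm, not the authors' modified scaling-free variant.

import math

def cordic_cos_sin(angle, iterations=16):
    # Classic CORDIC in rotation mode (valid for |angle| <= ~1.74 rad).
    atan_table = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # accumulated gain compensation
    x, y, z = k, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0          # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atan_table[i]
    return x, y  # ~ (cos(angle), sin(angle))

print(cordic_cos_sin(math.pi / 5))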

75 Bacteriological Quality of Commercially Prepared Fermented Ogi (Akamu) Sold in Some Parts of South Eastern Nigeria

Authors: Alloysius C. Ogodo, Ositadinma C. Ugbogu, Uzochukwu G. Ekeleme

Abstract:

Food poisoning and infection by bacteria are of public health significance to both developing and developed countries. Samples of ogi (akamu) prepared from white and yellow varieties of maize sold in Uturu and Okigwe were analyzed, together with laboratory-prepared ogi, for bacterial quality using standard microbiological methods. The analyses showed that both the white and yellow varieties had total bacterial counts (cfu/g) of 4.0×10^7 and 3.9×10^7 for the laboratory-prepared ogi, while the commercial ogi had 5.2×10^7 and 4.9×10^7, 4.9×10^7 and 4.5×10^7, and 5.4×10^7 and 5.0×10^7 for the Eke-Okigwe, Up-gate and Nkwo-Achara markets, respectively. The staphylococcal counts ranged from 2.0×10^2 to 5.0×10^2 and from 1.0×10^2 to 4.0×10^2 for the white and yellow varieties from the different markets, while no staphylococcal growth was recorded on the laboratory-prepared ogi. The laboratory-prepared ogi had no coliform growth, while the commercially prepared ogi had counts of 0.5×10^3 to 1.6×10^3 for the white variety and 0.3×10^3 to 1.1×10^3 for the yellow variety. Lactic acid bacterial counts of 3.5×10^6 and 3.0×10^6 were recorded for the laboratory ogi, while the commercially prepared ogi ranged from 3.2×10^6 to 4.2×10^6 (white variety) and from 3.0×10^6 to 3.9×10^6 (yellow). The bacterial isolates from the commercial and laboratory fermented ogi showed that Lactobacillus sp., Leuconostoc sp. and Citrobacter sp. were present in all the samples; Micrococcus sp. and Klebsiella sp. were isolated from the Eke-Okigwe and ABSU-up-gate market varieties, respectively; E. coli and Staphylococcus sp. were present in the Eke-Okigwe and Nkwo-Achara markets; and Salmonella sp. was isolated from all three markets. Hence, there are chances of contracting food-borne diseases from commercially prepared ogi. Therefore, there is a need for sanitary measures in the production of fermented cereals so as to minimize the rate of food-borne pathogens during processing and storage.

Keywords: Bacterial quality, fermentation, maize, Ogi.

74 Simulated Annealing Algorithm for Data Aggregation Trees in Wireless Sensor Networks and Comparison with Genetic Algorithm

Authors: Ladan Darougaran, Hossein Shahinzadeh, Hajar Ghotb, Leila Ramezanpour

Abstract:

In ad hoc networks, the main issue in protocol design is quality of service, whereas in wireless sensor networks the main constraint in protocol design is the limited energy of the sensors. In fact, protocols which minimize the power consumption of sensors receive the most consideration in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets transmitted in the network. Data collection techniques that combine related data and prevent the transmission of additional packets can be effective in reducing the number of transmitted packets. Given that information processing consumes less power than information transmission, data aggregation is of great importance, and for this reason the technique is used in many protocols [5]. One of the data aggregation techniques is the use of a data aggregation tree. However, finding an optimal data aggregation tree to collect data in networks with one sink is an NP-hard problem. In the data aggregation technique, related information packets are combined at intermediate nodes to form one packet, so the number of packets transmitted in the network is reduced and, therefore, less energy is consumed, which ultimately improves the longevity of the network. Heuristic methods are used to solve such NP-hard problems; one of these optimization methods is Simulated Annealing. In this article, we propose a new method for building the data collection tree in wireless sensor networks using the Simulated Annealing algorithm, and we evaluate its efficiency against a Genetic Algorithm.

Keywords: Data aggregation, wireless sensor networks, energy efficiency, simulated annealing algorithm, genetic algorithm.
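
A hedged sketch of the approach: simulated annealing over parent assignments of an aggregation tree rooted at the sink, with squared link length as a stand-in energy cost and a cycle check to keep the structure a tree. The node positions, cooling schedule and cost model are assumptions, not the paper's.

import math, random

random.seed(0)
N = 30
nodes = [(random.random(), random.random()) for _ in range(N)]
SINK = 0  # node 0 acts as the sink

def cost(parent):
    # Total squared link length, a simple proxy for transmission energy.
    return sum((nodes[i][0] - nodes[parent[i]][0]) ** 2 +
               (nodes[i][1] - nodes[parent[i]][1]) ** 2
               for i in range(N) if i != SINK)

def creates_cycle(parent, child, new_parent):
    # Reparenting `child` under `new_parent` is illegal if `child`
    # is an ancestor of `new_parent`.
    p = new_parent
    while p != SINK:
        if p == child:
            return True
        p = parent[p]
    return False

parent = [SINK] * N  # start from a star topology rooted at the sink
current, T = cost(parent), 1.0
for step in range(20000):
    i = random.randrange(1, N)      # pick a non-sink node
    q = random.randrange(N)         # candidate new parent
    if q == i or creates_cycle(parent, i, q):
        continue
    old = parent[i]
    parent[i] = q
    candidate = cost(parent)
    # Accept improvements always, worse moves with Boltzmann probability.
    if candidate < current or random.random() < math.exp((current - candidate) / T):
        current = candidate
    else:
        parent[i] = old
    T *= 0.9995  # geometric cooling schedule

print(f"final tree cost: {current:.3f}")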

73 Insights into Smoothies with High Levels of Fibre and Polyphenols: Factors Influencing Chemical, Rheological and Sensory Properties

Authors: Dongxiao Sun-Waterhouse, Shiji Nair, Reginald Wibisono, Sandhya S. Wadhwa, Carl Massarotto, Duncan I. Hedderley, Jing Zhou, Sara R. Jaeger, Virginia Corrigan

Abstract:

Attempts to add fibre and polyphenols (PPs) to popular beverages present challenges related to the properties of the finished products, such as smoothies. The consumer acceptability, viscosity and phenolic composition of smoothies containing high levels of fruit fibre (2.5-7.5 g per 300 mL serve) and PPs (250-750 mg per 300 mL serve) were examined. The changes in total extractable PPs, vitamin C content, and colour of selected smoothies over a storage stability trial (4°C, 14 days) were compared. A set of acidic aqueous model beverages was prepared to further examine the effect of two different heat treatments on the stability and extractability of PPs. Results show that the overall consumer acceptability of high-fibre and high-PP smoothies was low, with average hedonic scores ranging from 3.9 to 6.4 (on a 1-9 scale). Flavour, texture and overall acceptability decreased as fibre and polyphenol contents increased, with fibre content exerting a stronger effect. Higher fibre content resulted in greater viscosity, with an elevated PP content increasing viscosity only slightly. The presence of fibre also aided the stability and extractability of PPs after heating. A reduction in extractable PPs, vitamin C content and colour intensity of the smoothies was observed after the 14-day storage period at 4°C. Two heat treatments (75°C for 45 min or 85°C for 1 min) that are normally used for beverage production did not cause a significant reduction of total extracted PPs. It is clear that high levels of added fibre and PPs greatly influence the consumer appeal of smoothies, suggesting the need to develop novel formulation and processing methods if a satisfactory functional beverage incorporating these ingredients is to be developed.

Keywords: Apple fibre, apple and blackcurrant polyphenols, consumer acceptability, functional foods, stability.

72 Analysis of Surface Hardness, Surface Roughness, and Near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process

Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.

Abstract:

In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process for AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In developing the predictive model, the deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and the ball diameter are the significant factors for surface hardness, while the ball diameter and the number of tool passes are found to be significant for surface roughness. The predicted surface hardness and surface roughness values, and the subsequent verification experiments under the optimal operating conditions, confirmed the validity of the predicted model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is calculated as 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the surface hardness is improved from 225 to 306 HV, an increase in near-surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, which correlates with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy and an X-ray diffractometer were used to characterize the modified surface layer.

Keywords: Surface hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness.
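
To show the shape of such a model (not the paper's data), the sketch below fits a full quadratic response surface over two coded factors of a central-composite-style design; the design points and hardness responses are made up purely for illustration.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical central-composite-style design over two coded factors:
# x1 = rolling force, x2 = ball diameter (factorial, axial and center points).
a = np.sqrt(2)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-a, 0], [a, 0], [0, -a], [0, a],
              [0, 0], [0, 0], [0, 0]])
# Made-up hardness responses (HV), for illustration only.
y = np.array([240, 280, 250, 300, 245, 295, 255, 285, 290, 288, 292])

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
print("terms:", quad.get_feature_names_out(["force", "ball_dia"]))
print("coefficients:", model.coef_.round(2), "intercept:", round(model.intercept_, 2))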

71 A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks

Authors: Ashkan Ebadi, Adam Krzyzak

Abstract:

Recommender systems, also known as recommender engines, have become an important research area and are now being applied in various fields. In addition, the techniques behind recommender systems have improved over time. In general, such systems help users to find their required products or services (e.g. books, music) by analyzing and aggregating other users' activities and behavior, mainly in the form of reviews, and making the best recommendations. The recommendations can facilitate the user's decision-making process. Despite the wide literature on the topic, using multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types, which implicitly or explicitly help the system to improve its accuracy. Moreover, most of the existing research in this area is based on single rating measures, in which a single rating is used to link users to items. This paper proposes a highly accurate hotel recommender system, implemented in various layers. Using a multi-aspect rating system and benefitting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored for the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of the users' reviews and extract implicit features. The entire recommender engine contains multiple sub-systems, namely user clustering, a matrix factorization module, and a hybrid recommender system. Each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested intensively, and the results confirm the high performance of the system.

Keywords: Tourism, hotel recommender system, hybrid, implicit features.
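
As a sketch of the matrix factorization module mentioned above (a generic FunkSVD-style SGD loop, not the authors' implementation), with a toy sparse rating matrix:

import numpy as np

rng = np.random.default_rng(0)
n_users, n_hotels, k = 50, 40, 8
# Toy sparse rating matrix (1-5); zeros mean "unrated".
R = rng.integers(1, 6, size=(n_users, n_hotels)) * (rng.random((n_users, n_hotels)) < 0.2)

P = rng.normal(0, 0.1, (n_users, k))   # latent user factors
Q = rng.normal(0, 0.1, (n_hotels, k))  # latent hotel factors
lr, reg = 0.01, 0.05
users, hotels = R.nonzero()

for epoch in range(30):
    for u, h in zip(users, hotels):
        err = R[u, h] - P[u] @ Q[h]
        # Gradient steps with L2 regularization on both factor vectors.
        P[u] += lr * (err * Q[h] - reg * P[u])
        Q[h] += lr * (err * P[u] - reg * Q[h])

pred = P @ Q.T
rmse = np.sqrt(np.mean((R[users, hotels] - pred[users, hotels]) ** 2))
print(f"training RMSE: {rmse:.3f}")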

70 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Intelligent transportation systems are essential for building smarter cities. Machine-learning-based transportation prediction could be a highly promising approach, making invisible aspects visible. In this context, this research aims to build a prototype model that predicts the transportation network by using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that the existing headway model cannot respond to dynamic transportation conditions; thus, bus delays often occur. To overcome this problem, a prediction model is presented to find patterns of bus delay by using machine learning on the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built on real-time bus data. The data are gathered through public data portals and a real-time Application Program Interface (API) provided by the government. These data are fundamental resources for organizing interval pattern models of bus operations from traffic environment factors (road speeds, station conditions, weather, and real-time bus operating information). The prototype model is designed with a machine learning tool (RapidMiner Studio), and tests were conducted for bus delay prediction. This research presents experiments to increase the prediction accuracy for bus headway by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Therefore, based on this analysis method, this research represents an effective use of machine learning and urban big data to understand urban dynamics.

Keywords: Big data, bus headway prediction, machine learning, public transportation.
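
A minimal stand-in for such a delay predictor: joined traffic/weather/bus-status features feeding a gradient-boosted regressor. All columns and the synthetic target are assumptions, and the paper itself used RapidMiner Studio rather than Python.

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical joined records: traffic, weather and bus status per trip.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "road_speed_kmh": rng.uniform(10, 60, n),
    "rain_mm": rng.exponential(1.0, n),
    "passengers_boarding": rng.integers(0, 40, n),
    "scheduled_headway_min": rng.choice([5, 10, 15], n),
})
# Synthetic target: delay grows with rain and boarding, shrinks with speed.
df["delay_min"] = (0.1 * df.rain_mm + 0.05 * df.passengers_boarding
                   - 0.03 * df.road_speed_kmh + rng.normal(0, 0.5, n))

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="delay_min"),
                                          df["delay_min"], random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, model.predict(X_te)):.3f} min")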

69 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver and Pancreatic Grafts

Authors: C. S. Mammas, A. Lazaris, A. S. Mamma-Graham, G. Kostopanagiotou, C. Lemonidou, J. Mantas, E. Patsouris

Abstract:

Introduction: The process of building a better safety culture, with methods of error analysis and preventive measures, starts with an understanding of the effects of human factors engineering on remote microscopic diagnosis in surgery, and especially in organ transplantation for the remote evaluation of grafts. It has been estimated that even in well-organized transplant systems an average of 8% to 14% of the grafts (G) that arrive at the recipient hospitals may be considered diseased, injured, damaged or improper for transplantation. Digital microscopy adds information at the microscopic level about the grafts in organ transplantation (OT) and may lead to a change in their management. Such a method would reduce the possibility that a diseased graft arrives at the recipient hospital for implantation. Aim: To study the ergonomics of Digital Microscopy (DM) based on virtual slides, on Telemedicine Systems (TS), for the Tele-Pathological Evaluation (TPE) of grafts (G) in organ transplantation (OT). Material and Methods: By experimental simulation, the ergonomics of DM for the microscopic TPE of Renal Graft (RG), Liver Graft (LG) and Pancreatic Graft (PG) tissues was analyzed. In effect, this corresponded to the ergonomics of digital microscopy for TPE in OT, applying a Virtual Slide (VS) system for graft tissue image capture for the remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. The experimentation included: (a) development of an Experimental Telemedicine System (Exp.-TS) similar to an OTE-TS; and (b) simulation of the integration of TS with the VS-based microscopic TPE of RG, LG and PG applying DM. Simulation of the DM-based TPE was performed by two specialists on a total of 238 human Renal Graft (RG), 172 Liver Graft (LG) and 108 Pancreatic Graft (PG) digital microscopic tissue images, examined for inflammatory and neoplastic lesions, on the electronic spaces of the four TS used. Results: Statistical analysis of the specialists' answers about the ability to accurately diagnose the diseased RG, LG and PG tissues on the electronic space among the four TS (A, B, C, D) showed that DM on TS for TPE in OT performs best on the electronic space (ES) of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile-phone ES seem significantly risky for the application of DM in OT (p<.001). Conclusion: To achieve the largest reduction in errors and adverse events relating to the quality of the grafts, it will take the application of human factors engineering to procurement, design, audit, and awareness-raising activities. Consequently, it will take an investment in new training, people, and other changes to management activities for DM in OT. The simulated VS-based TPE with DM of RG, LG and PG tissues after retrieval seems feasible and reliable, though dependent on the size of the electronic space of the applied TS, for the remote prevention of diseased grafts from being retrieved and/or sent to the recipient hospital, and for post-grafting and pre-transplant planning.

Keywords: Organ Transplantation, Tele-Pathology, Digital Microscopy, Virtual Slides.

68 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers. By means of such equipment, one is able to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance. In this way, statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skewed data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, nine statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Therefore, based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable. Furthermore, it is possible to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.

Keywords: Statistical slope stability analysis, Skew distributions, Probability of failure, Functions of random variables.
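
A hedged Monte Carlo rendition of the comparison described (using an infinite-slope, cohesionless simplification FS = tan(phi)/tan(beta) rather than the paper's analytical derivation). SciPy's burr distribution (Burr Type III, the family containing the Dagum distribution) stands in for the Dagum fit, and all data are synthetic.

import numpy as np
from scipy import stats

# Hypothetical friction-angle measurements (degrees) standing in for the
# triaxial / direct-shear data of the paper.
rng = np.random.default_rng(0)
phi_data = rng.normal(29.0, 2.5, 60)

# Fit both a normal and a Dagum-type model.
norm_params = stats.norm.fit(phi_data)
burr_params = stats.burr.fit(phi_data)

beta = np.radians(25.0)  # assumed slope angle

def failure_probability(dist, params, n=100_000):
    # Sample friction angles, compute FS, and count failures (FS < 1).
    phi = np.radians(dist.rvs(*params, size=n, random_state=1))
    fs = np.tan(phi) / np.tan(beta)
    return np.mean(fs < 1.0)

print("P(failure), normal fit  :", failure_probability(stats.norm, norm_params))
print("P(failure), Burr III fit:", failure_probability(stats.burr, burr_params))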

67 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the types of loss functions and optimizers used. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports model accuracy together with parameter counts and mean average error rates, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to our final model.

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.
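
A sketch of the sweep methodology with a deliberately small CNN (not AlexNet/VGG/ResNet) over a subset of the listed losses and optimizers; the data shapes and patch size are assumptions. Only Cross-Entropy and Hinge are swept here, since Center Loss, for instance, is not a stock Keras loss and would need a custom implementation.

import numpy as np
from tensorflow.keras import layers, models

def small_cnn():
    return models.Sequential([
        layers.Input((64, 64, 1)),               # stand-in fingerprint patches
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),   # live vs. spoof
    ])

x = np.random.rand(64, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, 64)

losses = ["binary_crossentropy", "hinge"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

for loss in losses:
    for opt in optimizers:
        model = small_cnn()
        labels = y * 2 - 1 if loss == "hinge" else y  # hinge expects {-1, +1}
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        hist = model.fit(x, labels, epochs=1, verbose=0)
        print(loss, opt, round(hist.history["loss"][-1], 3))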

66 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong challenges in increasing speed and data processing capacity while keeping its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of such facilities consume a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting existing ones would be a great advance for this industry. A matrix of temperature sensors distributed in the structure of each server would provide the data required to obtain its temperature profile instantly. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high, and the approach is expensive. Therefore, less intrusive techniques are employed, in which each point characterizing the server temperature profile is obtained by solving differential equations through simulation, simplifying data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics (CFD) simulations are replaced by simpler and faster finite element method (FEM) simulations, which solve the Burgers' equations by backward, forward, and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the resulting Burgers' equation are the key to obtaining results with greater or lesser accuracy, according to their characteristic truncation errors.
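
To make the discretization choice concrete, the sketch below advances the 1-D viscous Burgers' equation u_t + u u_x = nu u_xx explicitly in time, with the first derivative u_x approximated by backward, forward, or central differences. It is a hypothetical toy solver on a periodic domain, not the authors' FEM implementation; grid size, time step, and viscosity are illustrative values chosen to keep the explicit scheme stable.

```python
# Minimal sketch: explicit time stepping of the 1-D viscous Burgers' equation
# with a selectable spatial discretization (backward / forward / central),
# the choice the abstract identifies as controlling the truncation error.
import numpy as np

def burgers_step(u, dx, dt, nu, scheme="central"):
    if scheme == "backward":
        ux = (u - np.roll(u, 1)) / dx              # first-order, upwind-like
    elif scheme == "forward":
        ux = (np.roll(u, -1) - u) / dx             # first-order
    else:
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)  # second-order
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (nu * uxx - u * ux)

x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
u = np.sin(x)                                      # toy initial profile
for _ in range(500):
    u = burgers_step(u, x[1] - x[0], dt=1e-3, nu=0.07, scheme="backward")
```

Swapping `scheme` between the three options on the same grid exposes the accuracy difference the abstract refers to: the one-sided differences carry first-order truncation error, the central difference second-order.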

Keywords: Burgers’ equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 456
65 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR

Authors: Pascal Mwenge, Tumisang Seodigeng

Abstract:

The combination of world population growth and the third industrial revolution has led to a high demand for fuels. On the other hand, the decline of global fossil fuel deposits and the air pollution caused by these fuels have compounded the challenges the world faces in meeting its energy needs. Therefore, new forms of environmentally friendly and renewable fuels, such as biodiesel, are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods are reliable but demanding and costly, and they do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy has been studied. The study was performed using an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module of the iC IR 7.0 software. Fifteen samples of known concentrations, taken in duplicate, were used for model calibration and cross-validation; the data were pre-processed using mean centering and variance scaling, a square-root spectrum math transform, and solvent subtraction. These pre-processing steps improved the performance indexes RMSEC, RMSECV, RMSEP, and R²cum from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72, and 0.9416 to 0.9999, respectively. The R² values of 1 (training), 0.9918 (test), and 0.9946 (cross-validation) indicated the fitness of the model. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two agreed closely at concentrations above 18%. The software eliminated the complexity of Partial Least Squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and laboratory scale.
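
The calibration workflow behind such a model can be reproduced with open tools. The sketch below is an assumption, not the iC Quant implementation: scikit-learn's PLSRegression replaces the proprietary module, a StandardScaler performs the mean centering and variance scaling mentioned above, and the spectra and concentrations are synthetic placeholders.

```python
# Minimal sketch: PLS calibration of FTIR-like spectra against known
# concentrations, with mean centering / variance scaling and a
# cross-validated RMSE (the RMSECV figure of merit quoted in the abstract).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 400))             # 30 spectra x 400 wavenumbers (toy)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=30)  # toy concentrations

model = make_pipeline(StandardScaler(), PLSRegression(n_components=4))
y_cv = cross_val_predict(model, X, y, cv=5).ravel()
print("RMSECV:", np.sqrt(np.mean((y - y_cv) ** 2)))
```

Fitting the pipeline on the full calibration set and reporting the in-sample error would give the RMSEC analogue; the cross-validated prediction above is what guards against overfitting the small calibration set.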

Keywords: Biodiesel, calibration, chemometrics, FTIR, methanolysis, multivariate analysis, transesterification.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 859
64 In vitro Effects of Salvia officinalis on Bovine Spermatozoa

Authors: Eva Tvrdá, Boris Botman, Marek Halenár, Tomáš Slanina, Norbert Lukáč

Abstract:

In vitro storage and processing of animal semen represents a risk factor to spermatozoa vitality, potentially leading to reduced fertility. A variety of substances isolated from natural sources may exhibit protective or antioxidant properties on the spermatozoon, thus extending the lifespan of stored ejaculates. This study compared the effects of different concentrations of Salvia officinalis extract on the motility, mitochondrial activity, viability, and reactive oxygen species (ROS) production of bovine spermatozoa over different time periods (0, 2, 6, and 24 h) of in vitro culture. Spermatozoa motility was assessed using a computer-assisted sperm analysis (CASA) system, metabolic activity was examined using the MTT assay, the eosin-nigrosin staining technique was used to evaluate sperm viability, and ROS generation was quantified by luminometry. The CASA analysis revealed that motility in the experimental groups supplemented with 0.5-2 µg/mL Salvia extract was significantly lower than in the control (P<0.05; Time 24 h). At the same time, long-term exposure of spermatozoa to concentrations ranging between 0.05 µg/mL and 2 µg/mL had a negative impact on mitochondrial metabolism (P<0.05; Time 24 h). The viability staining revealed that 0.001-1 µg/mL Salvia extract had no effect on bovine male gametes; however, 2 µg/mL Salvia had a persistent negative effect on spermatozoa (P<0.05). Furthermore, 0.05-2 µg/mL Salvia exhibited an immediate ROS-promoting effect on the sperm culture (P>0.05; Time 0 h and 2 h), which remained significant throughout the entire in vitro culture (P<0.05; Time 24 h). Our results point to the necessity of examining the specific effects that the biomolecules present in Salvia officinalis may have, individually or collectively, on in vitro sperm vitality and oxidative profile.

Keywords: Bulls, CASA, MTT test, reactive oxygen species, sage, Salvia officinalis, spermatozoa.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1302
63 Identification of Flexographic-printed Newspapers with NIR Spectral Imaging

Authors: Raimund Leitner, Susanne Rosskopf

Abstract:

Near-infrared (NIR) spectroscopy is a widely used method for material identification in laboratory and industrial applications. While standard spectrometers only allow measurements at one sampling point at a time, NIR spectral imaging (SI) techniques can measure, in real time, both the size and shape of an object and identify the material it is made of. Online classification and sorting of recovered paper with NIR SI is used successfully in the paper recycling industry throughout Europe. Recently, the globalisation of recycling material streams has caused water-based flexographic-printed newspapers, mainly from the UK and Italy, to appear in central Europe as well. These flexo-printed newspapers are not sufficiently de-inkable with the standard de-inking process originally developed for offset-printed paper. This de-inking process removes the ink from recovered paper and is the fundamental processing step in producing high-quality paper from recovered paper. Thus, flexo-printed newspapers are a growing problem for the recycling industry, as they reduce the quality of the produced paper if their share of the recovered paper exceeds a certain limit. This paper presents the results of a research project for the development of an automated entry inspection system for recovered paper, jointly conducted by CTR AG (Austria) and PTS Papiertechnische Stiftung (Germany). Within the project, an NIR SI prototype for the identification of flexo-printed newspaper has been developed. The prototype can identify and sort out flexo-printed newspapers in real time, achieving a detection accuracy for flexo-printed newspaper of over 95%. NIR SI, the technology the prototype is based on, enables the development of inspection systems for incoming goods in paper production facilities as well as industrial sorting systems for recovered paper in the recycling industry in the near future.
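
At its core, such a system classifies every pixel spectrum of a hyperspectral line-scan frame. The sketch below is hypothetical and does not reflect the prototype's actual classifier or data: it uses linear discriminant analysis on synthetic NIR spectra purely to illustrate the per-pixel identification step.

```python
# Minimal sketch: per-pixel classification of NIR spectra into flexo- vs.
# offset-printed paper, the decision underlying an NIR SI sorter.
# Training spectra and the test frame are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
# toy training spectra: rows are pixels, columns are NIR bands
X_train = np.vstack([rng.normal(0.0, 1, (500, 64)),
                     rng.normal(0.5, 1, (500, 64))])
y_train = np.repeat(["offset", "flexo"], 500)

clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

frame = rng.normal(0.25, 1, (480, 640, 64))     # one hyperspectral frame
labels = clf.predict(frame.reshape(-1, 64)).reshape(480, 640)
print("flexo pixel fraction:", (labels == "flexo").mean())
```

In a real sorter, the per-pixel labels would be aggregated per object (e.g., majority vote over a detected newspaper region) before triggering the ejection mechanism.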

Keywords: Spectral imaging, imaging spectroscopy, NIR, water-based flexographic, flexo-printed, recovered paper, real-time classification.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1502