Search results for: matrix means
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2327

317 Establishing Econometric Modeling Equations for Lumpy Skin Disease Outbreaks in the Nile Delta of Egypt under Current Climate Conditions

Authors: Abdelgawad, Salah El-Tahawy

Abstract:

This paper aimed to establish econometric model equations for the Nile Delta region in Egypt, which will serve as a basis for future predictions of lumpy skin disease (LSD) outbreaks and their pathways in relation to climate change. Data on LSD outbreaks were collected from cattle farms located in the provinces representing the Nile Delta region from January to December 2015. The results indicated a significant association between the degree of LSD outbreaks and the investigated climate factors (temperature, wind speed, and humidity); the outbreaks peaked during June, July, and August and gradually decreased to the lowest rates in January, February, and December. The obtained model showed that increases in these climate factors were associated with an evident increase in LSD outbreaks in the Nile Delta of Egypt. The model was validated using the root mean square error (RMSE) and mean bias (MB), which compared the number of LSD outbreaks expected with the number observed and estimated the confidence level of the model. The RMSE was 1.38% and the MB was 99.50%, confirming that the established model describes the current association between LSD outbreaks and changes in climate factors and can also be used as a basis for predicting LSD outbreaks under future climate change.
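As a hedged illustration of the validation step described above, the short Python sketch below computes RMSE and a mean-bias percentage for predicted versus observed monthly outbreak counts; the counts and the exact MB definition (ratio of means, in percent) are assumptions for illustration, not the study's data.

```python
# Sketch: validating a fitted outbreak model with RMSE and mean bias (MB).
# The monthly counts below are illustrative, not the study's data.
import numpy as np

observed  = np.array([2, 3, 5, 9, 14, 21, 24, 22, 15, 8, 4, 2], dtype=float)
predicted = np.array([2, 4, 6, 10, 15, 20, 23, 21, 14, 9, 4, 2], dtype=float)

rmse = np.sqrt(np.mean((predicted - observed) ** 2))     # root mean square error
mb   = 100.0 * np.mean(predicted) / np.mean(observed)    # assumed MB definition: ratio of means, %

print(f"RMSE = {rmse:.2f} outbreaks, MB = {mb:.2f}%")
```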

Keywords: LSD, climate factors, econometric models, Nile Delta.

316 Artificial Intelligence: A Comprehensive and Systematic Literature Review of Applications and Comparative Technologies

Authors: Z. M. Najmi

Abstract:

Over the years, the question around Artificial Intelligence has always been one with many answers. Whether by means of use in business and industry or complicated algorithmic programming, management of these technologies has always been the core focus. More recently, technologies have been questioned in industry and society alike as to whether they have improved human-centred design, assisted choices and objectives, and had a hand in systematic processes across the board. With these questions, the answer may lie within AI technologies and the steps needed to remove common human error. Elements such as Machine Learning, Deep Learning, Recommender Systems and Natural Language Processing will all be features to consider moving forward. Our previous intervention with AI applications has resulted in increased productivity but has raised concerns for the continuation of traditional human-centred occupations. Emerging technologies such as Augmented Reality and Virtual Reality have all played a part in this during AI's prominent rise. As mentioned, AI has been constantly under the microscope; its benefits and drawbacks may seem endless, but AI is something we must take notice of and adapt into our everyday lives. The aim of this paper is to give an overview of the technologies surrounding AI and its related technologies. A comprehensive review has been written as a timeline of the developing events and key points in the history of Artificial Intelligence. This research is gathered entirely from secondary sources and academic statements of knowledge, brought together to produce an understanding of the timeline of AI.

Keywords: Artificial Intelligence, Deep Learning, Augmented Reality, Reinforcement Learning, Machine Learning, Supervised Learning.

315 Granting Saudi Women the Right to Drive in the Eyes of Qatari Media

Authors: Rasha A. Salameh

Abstract:

This research attempts to evaluate the treatment the Qatari media gave to the decision to allow Saudi women to drive, and to its activation a few months later, within the time frame from September 26, 2017 to June 30, 2018. It asks several questions, including whether the political dispute between Qatar and Saudi Arabia cast a shadow over this coverage, and whether the Qatari media used it to criticize the Saudi regime for delaying this step. One of the research hypotheses is that the coverage lacked the required professionalism because the decision and its activation took place amid the political stalemate between Qatar and the Kingdom of Saudi Arabia; testing this requires applying media framing and agenda-setting theories to determine to what extent they apply to this case. The research examined a sample of five Qatari outlets: Al-Jazeera Net, The New Arab Newspaper, Al-Sharq Newspaper, The Arab Newspaper, and Al-Watan Newspaper. The results showed that most of the authors who covered the decision to allow Saudi women to drive did not achieve balance in their writing, and that almost half of them lacked objectivity. This supports the hypothesis that there is a defect in the professional competence of the Qatari media's coverage of the decision to allow Saudi women to drive. The researcher attributes this result to the political standoff between Qatar and Saudi Arabia, and to the fact that most Arab media are characterized by a low ceiling of freedom and tend to align their positions with the official view of their regimes.

Keywords: Saudi women, stereotypes, hate speech, framing.

314 Designing Social Care Policies in the Long Term: A Study Using Regression, Clustering and Backpropagation Neural Nets

Authors: Sotirios Raptis

Abstract:

Linking social needs to social classes using different criteria may lead to misuse of social services. The paper discusses using machine learning (ML) and Neural Networks (NNs) to link public services in Scotland in the long term, and argues that this can reduce service costs by connecting the resources needed by groups requiring similar services. The paper combines typical regression models with clustering and cross-correlation as complementary constituents to predict demand. Insurance companies and public policymakers can bundle linked services, such as those offered to the elderly or to low-income people, in the longer term. The work is based on public data on 22 services offered by Public Health Services (PHS) Scotland and the Scottish Government (SG) from 1981 to 2019, broken into 110 yearly series called factors, and uses Linear Regression (LR), Autoregression (ARMA) and three types of back-propagation (BP) Neural Networks (BPNN) to link them under specific conditions. Relationships were found between smoking-related healthcare provision, mental health-related health services, and epidemiological weight, in terms of Primary 1 (Education) Body Mass Index (BMI) in children. Principal component analysis (PCA) found 11 significant factors, while C-Means (CM) clustering gave 5 major factor clusters.
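The sketch below illustrates, under stated assumptions, the kind of regression-plus-clustering pipeline the abstract describes: fit a linear trend to each yearly series and group series with similar trends. Synthetic data stands in for the PHS/SG factors, and scikit-learn's k-means is used in place of C-means for brevity.

```python
# Minimal sketch: fit a linear trend to each yearly service series, then
# cluster the series into 5 groups of similar demand. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
years = np.arange(1981, 2020)
factors = rng.normal(size=(110, years.size)).cumsum(axis=1)   # 110 synthetic series

# Linear regression slope per factor (demand trend)
X = np.vstack([years - years.mean(), np.ones_like(years)]).T
slopes = np.array([np.linalg.lstsq(X, f, rcond=None)[0][0] for f in factors])

# Group factors with similar trends into 5 clusters, as in the paper
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(slopes.reshape(-1, 1))
print(np.bincount(labels))
```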

Keywords: Probability, cohorts, data frames, services, prediction.

313 Investigating the Regulation System of the Synchronous Motor Excitation Mode Serving as a Reactive Power Source

Authors: Baghdasaryan Marinka, Ulikyan Azatuhi

Abstract:

The efficient usage of the compensation abilities of the synchronous motors used in electrical drives in production processes can essentially improve the technical and economic indices of the process. Reducing the flows of reactive electrical energy through compensation of reactive power makes it possible to significantly reduce load power losses in the electrical networks. An analysis of the scientific works devoted to regulating the excitation of synchronous motors substantiated the need for a comprehensive investigation and estimation of the excitation mode. By means of the obtained transfer functions, the transition processes of the excitation mode have been studied in the Simulink environment of the MATLAB software package. Based on the obtained Nyquist plot and transient process, the need to develop a Proportional-Integral-Derivative (PID) regulator has been justified. The transient processes of the system with the PID regulator have been investigated, and the amplitude-phase characteristics of the system have been estimated. The analysis of the obtained results has shown that the regulation indices of the developed system are improved. The developed system can be successfully applied for regulating the excitation voltage of synchronous motors of different power ratings, operating under a changing load, ensuring a power factor close to 1.
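A minimal discrete PID loop is sketched below to make the regulator concept concrete; the gains and the first-order plant are placeholder values, not the paper's Simulink model of the excitation circuit.

```python
# Illustrative discrete PID loop for a regulated quantity (e.g. excitation
# voltage); gains and the toy first-order plant are placeholders.
Kp, Ki, Kd, dt = 2.0, 0.8, 0.05, 0.01
setpoint, y, integral, prev_err = 1.0, 0.0, 0.0, 0.0

for _ in range(500):
    err = setpoint - y
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = Kp * err + Ki * integral + Kd * derivative   # regulator output
    y += dt * (-y + u)                               # toy first-order plant response
    prev_err = err

print(f"steady-state output ~ {y:.3f}")
```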

Keywords: Transient process, synchronous motor, excitation mode, regulator, reactive power.

312 Increasing the Resilience of Cyber Physical Systems in Smart Grid Environments using Dynamic Cells

Authors: Andrea Tundis, Carlos García Cordero, Rolf Egert, Alfredo Garro, Max Mühlhäuser

Abstract:

Resilience is an important system property that relies on the ability of a system to automatically recover from a degraded state so as to continue providing its services. Resilient systems have the means of detecting faults and failures, with the added capability of automatically restoring their normal operations. Mastering resilience in the domain of Cyber-Physical Systems is challenging due to the interdependence of hybrid hardware and software components, along with physical limitations, laws, regulations and standards, among others. In order to overcome these challenges, this paper presents a modeling approach, based on the concept of Dynamic Cells, tailored to the management of Smart Grids. Additionally, a heuristic algorithm that works on top of the proposed modeling approach to find resilient configurations has been defined and implemented. More specifically, the model supports a flexible representation of Smart Grids, and the algorithm is able to manage, at different abstraction levels, the resource consumption of individual grid elements in the presence of failures and faults. Finally, the proposal is evaluated in a test scenario that shows the effectiveness of the approach when dealing with complex scenarios where adequate solutions are difficult to find.

Keywords: Cyber-physical systems, energy management, optimization, smart grids, self-healing, resilience, security.

311 Student Attitude towards Entrepreneurship: A South African and Dutch Comparison

Authors: Natanya Meyer, Johann Landsberg

Abstract:

Unemployment among the youth is a significant problem in South Africa. Large corporations and the public sector simply cannot create enough jobs. Too many young people in South Africa currently do not consider entrepreneurship as an option for becoming independent. Unlike the youth of the Netherlands, South African youth prefer to find employment in the public or private sector. The Netherlands has a much lower unemployment rate than South Africa, and the Dutch are generally very entrepreneurial; from early on, entrepreneurship is considered a desirable career option in the Netherlands. The purpose of this study was to determine whether there is a difference in the perceptions of some Dutch and South African students in terms of unemployment and entrepreneurship. Questionnaires were distributed to students at the North West University's Vaal Triangle campus in Vanderbijlpark in Gauteng, South Africa, and at the Technical University of Delft in the Netherlands. A descriptive statistical analysis approach was followed, and the means for the independent questions were calculated. The results show that the Dutch students are less concerned about unemployment after completion of their studies, as it is not as significant a problem as it is in South Africa. Both groups responded positively to the posed questions, but the South African group felt more strongly about the issues. Both groups of students felt that there was a need for more practical entrepreneurship training. The South African education system should focus on practical entrepreneurship training from a young age.

Keywords: Entrepreneurship development, entrepreneurship development programmes, entrepreneurship intention, Netherlands, South Africa, unemployment.

310 Development of EPID-based Real time Dose Verification for Dynamic IMRT

Authors: Todsaporn Fuangrod, Daryl J. O'Connor, Boyd MC McCurdy, Peter B. Greer

Abstract:

An electronic portal imaging device (EPID) has become a tool for patient-specific IMRT dose verification in radiotherapy. Research studies have focused on pre- and post-treatment verification; however, there are currently no interventional procedures using EPID dosimetry that measure the dose in real time as a mechanism to ensure that overdoses do not occur and that underdoses are detected as soon as is practically possible. As a result, an EPID-based real-time dose verification system for dynamic IMRT was developed and implemented with MATLAB/Simulink. The EPID image acquisition was set to continuous acquisition mode at 1.4 images per second. The system defined the time constraint gap, or execution gap, at the image acquisition time, so that every calculation must be completed before the next image capture is completed. In addition, the γ-evaluation method was used for dose comparison, with two types of comparison process monitored: individual-image and cumulative dose comparison. The outputs of the system are the γ-map, the percentage of points with γ < 1, and the mean γ versus time, all in real time. Two strategies were used to test the system: an error detection test and a clinical data test. The system can monitor the actual dose delivery compared with the treatment plan data or a previous treatment dose delivery, which means a radiation therapist is able to switch off the machine when an error is detected.
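To make the γ-evaluation step concrete, here is a simplified 1-D γ computation between a planned and a measured dose profile; real EPID analysis is 2-D, and the 3%/3 mm criteria and the profiles below are illustrative assumptions.

```python
# Simplified 1-D gamma evaluation between measured and planned dose profiles.
# Profiles and 3%/3 mm criteria are illustrative only.
import numpy as np

x = np.linspace(0, 100, 201)                 # position, mm
planned  = np.exp(-((x - 50) / 20) ** 2)     # toy planned dose profile
measured = np.exp(-((x - 51) / 20) ** 2) * 1.02

dta, dd = 3.0, 0.03 * planned.max()          # 3 mm distance, 3% dose criteria
gamma = np.empty_like(x)
for i, xm in enumerate(x):
    dist2 = ((x - xm) / dta) ** 2            # distance term against all planned points
    dose2 = ((planned - measured[i]) / dd) ** 2
    gamma[i] = np.sqrt(np.min(dist2 + dose2))

print(f"pass rate (gamma < 1): {100 * np.mean(gamma < 1):.1f}%, mean gamma = {gamma.mean():.2f}")
```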

Keywords: real-time dose verification, EPID dosimetry, simulation, dynamic IMRT

309 A Cognitive Architectural Approach to the Institutional Roles of Agent Societies

Authors: Antônio Carlos da Rocha Costa

Abstract:

This paper concerns a formal model to help the simulation of agent societies where institutional roles and institutional links can be specified operationally. That is, this paper concerns institutional roles that can be specified in terms of a minimal behavioral capability that an agent should have in order to enact that role and, thus, to perform the set of institutional functions that role is responsible for. Correspondingly, the paper concerns institutional links that can be specified in terms of a minimal interactional capability that two agents should have in order to, while enacting the two institutional roles that are linked by that institutional link, perform for each other the institutional functions supported by that institutional link. The paper proposes a cognitive architecture approach to institutional roles and institutional links, that is, an approach in which an institutional role is seen as an abstract cognitive architecture that should be implemented by any concrete agent (or set of concrete agents) that enacts the institutional role, and in which institutional links are seen as interactions between the two abstract cognitive agents that model the two linked institutional roles. We introduce a cognitive architecture for this purpose, called the Institutional BCC (IBCC) model, which lifts Yoav Shoham's BCC (Beliefs-Capabilities-Commitments) agent architecture to social contexts. We show how the resulting model can be taken as a means for a cognitive architecture account of institutional roles and institutional links of agent societies. Finally, we present an example of a generic scheme for certain fragments of the social organization of agent societies, where institutional roles and institutional links are given in terms of the model.

Keywords: Simulation of agent societies, institutional roles, cognitive architecture of institutional roles.

308 Hiding Data in Images Using PCP

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

In recent years, everything has been trending toward digitalization, and with the rapid development of Internet technologies, digital media need to be transmitted conveniently over the network. Attacks, misuse and unauthorized access to information are of great concern today, which makes the protection of documents transmitted through digital media a priority problem. This urges us to devise new data hiding techniques to protect and secure data of vital significance. In this respect, steganography often comes to the fore as a tool for hiding information. Steganography is a process that involves hiding a message in an appropriate carrier, such as an image or audio file. The word is of Greek origin and means "covered or hidden writing". The goal of steganography is covert communication: the carrier can be sent to a receiver such that no one except the authenticated receiver knows of the existence of the information. A considerable amount of work has been carried out by different researchers on steganography. In this work, the authors propose a novel steganographic method for hiding information within the spatial domain of a grayscale image. The proposed approach works by selecting the embedding pixels using a mathematical function, finding the 8-neighborhood of each selected pixel, and mapping each bit of the secret message to a neighbor pixel coordinate position in a specified manner. Before embedding, a check is performed to determine whether the selected pixel or its neighbors lie at the boundary of the image. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
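The sketch below conveys the general idea in a hedged form: pixels are selected by a simple function, pixels whose neighborhood touches the image boundary are skipped, and one message bit is written into the least significant bit of each of the 8 neighbors. The selection function and the bit-to-neighbor mapping are illustrative stand-ins, not the paper's exact PCP mapping.

```python
# Hedged sketch of a PCP-style embedding: selection function and bit mapping
# below are illustrative, not the paper's exact scheme.
import numpy as np

def embed(cover: np.ndarray, bits: list[int], step: int = 7) -> np.ndarray:
    stego = cover.copy()
    h, w = cover.shape
    neigh = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    k = 0
    for idx in range(0, h * w, step):            # simple pixel-selection function
        r, c = divmod(idx, w)
        if r < 1 or r >= h - 1 or c < 1 or c >= w - 1:
            continue                             # boundary check
        for dr, dc in neigh:                     # map bits onto the 8-neighborhood
            if k >= len(bits):
                return stego
            stego[r + dr, c + dc] = (stego[r + dr, c + dc] & 0xFE) | bits[k]
            k += 1
    return stego

cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
stego = embed(cover, [1, 0, 1, 1, 0, 0, 1, 0])
print("pixels changed:", int(np.sum(cover != stego)))
```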

Keywords: Cover Image, LSB, Pixel Coordinate Position (PCP), Stego Image.

307 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment

Authors: Isabela Moreira Queiroz

Abstract:

Slope stability analyses are largely carried out by deterministic methods and evaluated through a single factor of safety. Although it is known that geotechnical parameters can present great dispersion, such analyses treat them as fixed and known. Probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of safety factor values, thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (Point Estimates) and Monte Carlo. This paper presents a comparison between the results from deterministic and probabilistic analyses (FOSM, Monte Carlo and Rosenblueth methods) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and perform the consequent risk analysis, which is used to calculate the risk and to analyze mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noted that the calculation of risk makes it possible to prioritize the implementation of mitigation measures. Therefore, it is recommended to carry out a good assessment of the geological-geotechnical model, incorporating uncertainty in feasibility, design, construction, operation and closure by means of risk management.
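A minimal Monte Carlo sketch of the probability-of-failure idea follows; the factor-of-safety expression (a dry infinite-slope model) and the parameter distributions are assumptions for illustration, not the hypothetical slope analyzed in the paper.

```python
# Monte Carlo sketch: sample shear-strength parameters, evaluate a placeholder
# factor of safety (dry infinite slope), count realisations with FS < 1.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
cohesion = rng.normal(25.0, 5.0, n)                 # kPa
phi      = np.radians(rng.normal(30.0, 3.0, n))     # friction angle, rad

gamma, depth, beta = 18.0, 5.0, np.radians(35.0)    # unit weight, depth, slope angle
fs = (cohesion + gamma * depth * np.cos(beta) ** 2 * np.tan(phi)) / (
     gamma * depth * np.sin(beta) * np.cos(beta))

pf = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.2f}, probability of failure = {pf:.4f}")
```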

Keywords: Probabilistic methods, risk assessment, risk management, slope stability.

306 Design, Modeling and Fabrication of a Tactile Sensor and Display System for Application in Laparoscopic Surgery

Authors: M. Ramezanifard, J. Dargahi, S. Najarian, N. Narayanan

Abstract:

One of the major disadvantages of minimally invasive surgery (MIS) is the lack of tactile feedback to the surgeon. In order to identify and avoid any damage to the complex tissue grasped by endoscopic graspers, it is important to measure the local softness of the tissue during MIS. One way to display the measured softness to the surgeon is a graphical method. In this paper, a new tactile sensor is reported. The tactile sensor consists of an array of four softness sensors, which are integrated into the jaws of a modified commercial endoscopic grasper. Each individual softness sensor consists of two piezoelectric polymer Polyvinylidene Fluoride (PVDF) films, which are positioned below a rigid and a compliant cylinder. The compliant cylinder is fabricated using a micro molding technique. The combination of output voltages from the PVDF films is used to determine the softness of the grasped object. The theoretical analysis of the sensor is also presented. A method has been developed to reproduce the tactile softness for the surgeon graphically. In this approach, the proposed system, including the interfacing and the data acquisition card, receives signals from the array of softness sensors. After the signals are processed, the tactile information is displayed by means of a color coding method. It is shown that the degrees of softness of the grasped objects/tissues can be visually differentiated and displayed on a monitor.

Keywords: Minimally invasive surgery, Robotic surgery, Sensor, Softness, Tactile.

305 Recycled Cellulosic Fibers and Lignocellulosic Aggregates for Sustainable Building Materials

Authors: N. Stevulova, I. Schwarzova, V. Hospodarova, J. Junak, J. Briancin

Abstract:

Sustainability is becoming a priority for developers, and the use of environmentally friendly materials is increasing. Nowadays, the application of raw materials from renewable sources to building materials has gained significant interest in this research area. Lignocellulosic aggregates and cellulosic fibers come from many different sources such as wood, plants and waste. They are promising alternative materials to replace synthetic, glass and asbestos fibers as reinforcement in the inorganic matrix of composites. Natural fibers are renewable resources, so their cost is relatively low in comparison to synthetic fibers. From the standpoint of environmental consciousness, natural fibers are biodegradable, so their use can reduce CO2 emissions in building materials production. The use of cellulosic fibers in cementitious matrices has gained importance because they make the composites lighter at high fiber content, they have cost-performance ratios comparable to similar building materials, and they can be processed from waste paper, thus expanding the opportunities for waste utilization in cementitious materials. The main objective of this work is to explore the possibility of using different wastes: hemp hurds as waste of hemp stem processing, and recycled fibers obtained from waste paper, for making cement composite products such as mortars based on cellulose fibers. This material was made of cement mortar containing an organic filler based on hemp hurds and recycled waste paper. In addition, the effects of the fibers and their contents on selected physical and mechanical properties of the fiber-cement plaster composites have been investigated. In this research, the organic material was added to the mortars at 2.0, 5.0 and 10.0% replacement of cement by weight. A reference sample was made for comparison of the physical and mechanical properties of the cement composites based on recycled cellulosic fibers and lignocellulosic aggregates. The prepared specimens were tested after 28 days of curing in order to investigate density, compressive strength and water absorbability. Scanning Electron Microscopy examination was also carried out.

Keywords: Hemp hurds, organic filler, recycled paper, sustainable building materials.

304 On the Early Development of Dispersion in Flow through a Tube with Wall Reactions

Authors: M. W. Lau, C. O. Ng

Abstract:

This is a study on numerical simulation of the convection-diffusion transport of a chemical species in steady flow through a small-diameter tube, which is lined with a very thin layer made up of retentive and absorptive materials. The species may be subject to a first-order kinetic reversible phase exchange with the wall material and to irreversible absorption into the tube wall. Owing to the velocity shear across the tube section, the chemical species may spread out axially along the tube at a rate much larger than that given by molecular diffusion; this process is known as dispersion. While the long-time dispersion behavior, well described by the Taylor model, has been extensively studied in the literature, the early development of the dispersion process has by contrast been much less investigated. By early development, we mean a span of time, after the release of the chemical into the flow, that is shorter than or comparable to the diffusion time scale across the tube section. To understand the early development of the dispersion, the governing equations along with the reactive boundary conditions are solved numerically using the Flux Corrected Transport Algorithm (FCTA). The computation has enabled us to investigate the combined effects of the reversible and irreversible wall reactions on the early development of the dispersion coefficient. One of the results shows that the dispersion coefficient may approach its steady-state limit in a short time under the following conditions: (i) a high value of the Damkohler number (say Da ≥ 10); (ii) a small but non-zero value of the absorption rate (say Γ* ≤ 0.5).

Keywords: Dispersion coefficient, early development of dispersion, FCTA, wall reactions.

303 High Specific Speed in Circulating Water Pump Can Cause Cavitation, Noise and Vibration

Authors: Chandra Gupt Porwal

Abstract:

Excessive vibration means increased wear, increased repair effort, poor product selection and quality, and high energy consumption. It may sometimes be caused by cavitation or by suction/discharge recirculation, which can occur when the net positive suction head available (NPSHA) drops below the net positive suction head required (NPSHR). Cavitation can cause axial surging which, if excessive, will damage mechanical seals, bearings and possibly other pump components, and will shorten the life of the impeller. This paper explains Suction Energy (SE), Specific Speed (Ns), Suction Specific Speed (Nss), NPSHA and NPSHR and their significance, the possible causes of cavitation and internal recirculation, their diagnostics, and remedial measures to arrest and prevent cavitation. A case study is presented by the author highlighting that the root cause of unwanted noise and vibration was cavitation, caused by high specific speeds or inadequate net positive suction head available, which resulted in damage to the material surfaces of the impeller and suction bells and in degradation of machine performance, capacity and efficiency. The author strongly recommends revisiting the technical specifications of circulating water (CW) pumps to provide NPSH margin ratios > 1.5 for future projects, and limiting Nss to 8500-9000 for cavitation-free operation.
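For orientation, the snippet below evaluates the suction specific speed, Nss = N·Q^0.5 / NPSHR^0.75 in US customary units (rpm, gpm, ft), and the NPSH margin ratio against the limits recommended above; the pump figures are illustrative, not the case-study pump.

```python
# Quick check of suction specific speed (Nss) and NPSH margin ratio against
# the recommended limits. Pump figures below are illustrative.
def suction_specific_speed(rpm: float, flow_gpm: float, npshr_ft: float) -> float:
    return rpm * flow_gpm ** 0.5 / npshr_ft ** 0.75

rpm, flow_gpm, npshr_ft, npsha_ft = 1780, 2000, 20.0, 35.0
nss = suction_specific_speed(rpm, flow_gpm, npshr_ft)
margin = npsha_ft / npshr_ft

print(f"Nss = {nss:.0f}  (target <= 9000)")
print(f"NPSH margin ratio = {margin:.2f}  (target > 1.5)")
```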

Keywords: Best efficiency point (BEP), Net positive suction head NPSHA, NPSHR, Specific Speed NS, Suction Specific Speed Nss.

302 Detection of Defects in CFRP by Ultrasonic IR Thermographic Method

Authors: W. Swiderski

Abstract:

This paper introduces a diagnostic technique that makes it possible to investigate the internal structure of fibre-reinforced composite materials used in different applications. The main cause of damage to structures made of these materials is the changing distribution of load on the structure over its lifetime. The resulting defects are largely complicated because of disruptions of the continuity of the reinforcing fibres, binder cracks, and loss of fibre adhesion to the binder. Defects in composite materials are usually more complicated than in metals. At present, infrared thermography is the most effective method of non-destructive testing of composites. One of the IR thermography methods used in non-destructive evaluation is vibrothermography. Vibrothermography is not a new non-destructive method, but the new element in this test is the use of ultrasonic waves for thermal stimulation of materials. In this paper, both modelling and experimental results which illustrate the advantages and limitations of ultrasonic IR thermography in inspecting composite materials are presented. The ThermoSon computer program for computing 3D dynamic temperature distributions in anisotropic layered solids with subsurface defects subject to ultrasonic stimulation was used to optimise heating parameters in the detection of subsurface defects in composite materials. The program allows for the analysis of transient heat conduction and ultrasonic wave propagation phenomena in solids. The experiments at MIAT were performed by means of a FLIR SC 7600 IR camera. Ultrasonic stimulation was performed at frequencies from 15 kHz to 30 kHz with maximum power up to 2 kW.

Keywords: Composite material, ultrasonic, infrared thermography, non-destructive testing.

301 Authenticity of Lipid and Soluble Sugar Profiles of Various Oat Cultivars (Avena sativa)

Authors: Marijana M. Ačanski, Kristian A. Pastor, Djura N. Vujić

Abstract:

The identification of lipid and soluble sugar components in flour samples of different cultivars belonging to the common oat species (Avena sativa L.) was performed: spring oat, winter oat and hulless oat. Fatty acids were extracted from the flour samples with n-hexane and derivatized into volatile methyl esters using TMSH (trimethylsulfonium hydroxide in methanol). Soluble sugars were then extracted from defatted and dried samples of oat flour with 96% ethanol, and further derivatized into the corresponding TMS-oximes using hydroxylamine hydrochloride solution and BSTFA (N,O-bis-(trimethylsilyl)-trifluoroacetamide). The hexane and ethanol extracts of each oat cultivar were analyzed using a GC-MS system. Lipid and simple sugar compositions are very similar in all samples of the investigated cultivars. A chemometric tool was applied to the numeric values of the automatically integrated surface areas of the detected lipid and simple sugar components in their corresponding derivatized forms. Hierarchical cluster analysis shows a very high similarity between the investigated flour samples of oat cultivars according to the fatty acid content (0.9955). Moderate similarity was observed according to the content of soluble sugars (0.50). These preliminary results support the idea of establishing methods for oat flour authentication, and provide the means for distinguishing oat flour samples, regardless of the variety, from flour samples made of other cereal species, just by lipid and simple sugar profile analysis.

Keywords: Authentication, chemometrics, GC-MS, lipid and soluble sugar composition, oat cultivars.

300 Increasing the Forecasting Fidelity of Current Collection System Operating Capability by Means of Contact Pressure Simulation Modelling

Authors: Anton Golubkov, Gleb Ermachkov, Aleksandr Smerdin, Oleg Sidorov, Victor Philippov

Abstract:

Current collection quality is one of the limiting factors when increasing train movement speed in the rail sector. As movement speed grows, the impact forces on the current collector from the rolling stock and the aerodynamic influence increase, which leads to spread in the contact pressure values, separation of the current collector head from the contact wire, contact arcing and excessive wear of the contact elements. The upcoming trend in resolving this issue is the use of automatic control systems providing stabilization of the contact pressure value. The present paper considers the features of contemporary automatic control systems for the current collector's pressure, and their major disadvantages are stated. A scheme of current collector pressure automatic control has been proposed, distinguished by a proactive influence on undesirable effects. A mathematical model of contact strip wear has been presented, obtained in accordance with the provisions of a central composite rotatable design program. The analysis of the obtained dependencies has been carried out. The procedures for determining the optimal current collector pressure on the contact wire and the pressure control principle in the pneumatic drive have been described.

Keywords: High-speed running, current collector, contact strip, mathematical model, contact pressure, program control, wear, life cycle.

299 Performance Evaluation of a Limited Round-Robin System

Authors: Yoshiaki Shikata

Abstract:

Performance of a limited Round-Robin (RR) rule is studied in order to clarify the characteristics of a realistic processor-sharing model. Under the limited RR rule, the processor allocates to each request a fixed amount of time, called a quantum, in a fixed order. The number of requests being allocated these quanta is kept below a fixed value. Arriving requests that cannot be allocated quanta because of this restriction are queued or rejected. Practical performance measures, such as the relationships between the mean sojourn time, the mean number of requests, or the loss probability and the quantum size, are evaluated via simulation. In the evaluation, the requested service time of an arriving request is converted into a number of quanta. One of these quanta is included in each RR cycle, that is, a series of quanta allocated to the requests in a fixed order. The service time of the arriving request can be evaluated using the number of RR cycles required to complete the service, the number of requests receiving service, and the quantum size. An increase or decrease in the number of quanta that are necessary before service is completed is then re-evaluated at the arrival or departure of other requests. Tracking these events and calculations enables us to analyze the performance of our limited RR rule. In particular, we obtain the most suitable quantum size, which minimizes the mean sojourn time, for the case in which the switching time for each quantum is considered.
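A bare-bones event-tracking simulation in the spirit of the description above is sketched below: at most a fixed number of requests share the processor, one quantum per admitted request is served in each RR cycle, and excess arrivals wait in a queue. The arrival and service distributions and all parameters are illustrative assumptions, not the paper's model.

```python
# Bare-bones limited Round-Robin simulation; distributions and parameters
# are illustrative, not the paper's model.
import random
from collections import deque

random.seed(0)
quantum, limit, lam, mu = 1.0, 4, 0.5, 1.0        # quantum size, RR limit, rates
t = 0.0
next_arrival = random.expovariate(lam)
in_service, queue, sojourns = [], deque(), []

while len(sojourns) < 10_000:
    # admit arrivals that have occurred up to the current time
    while next_arrival <= t:
        quanta = max(1, round(random.expovariate(mu) / quantum))
        req = [quanta, next_arrival]              # [remaining quanta, arrival time]
        (in_service if len(in_service) < limit else queue).append(req)
        next_arrival += random.expovariate(lam)
    if not in_service:
        t = next_arrival                          # idle until the next arrival
        continue
    # one RR cycle: serve one quantum of each admitted request, in fixed order
    still_busy = []
    for req in in_service:
        t += quantum
        req[0] -= 1
        if req[0] == 0:
            sojourns.append(t - req[1])
        else:
            still_busy.append(req)
    in_service = still_busy
    while queue and len(in_service) < limit:
        in_service.append(queue.popleft())

print(f"mean sojourn time ~ {sum(sojourns) / len(sojourns):.2f}")
```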

Keywords: Limited RR rule, quantum, processor sharing, sojourn time, performance measures, simulation, loss probability.

298 Numerical Modelling of Dust Propagation in the Atmosphere of Tbilisi City in Case of Western Background Light Air

Authors: N. Gigauri, V. Kukhalashvili, A. Surmava, L. Intskirveli, L. Gverdtsiteli

Abstract:

Tbilisi, a large city of the South Caucasus, is a junction point connecting Asia and Europe, Russia and the republics of Asia Minor. Over the last years, its atmosphere has experienced an increasing anthropogenic load. A numerical modeling method is used to study Tbilisi's atmospheric air pollution. By means of a 3D non-linear non-steady numerical model, the peculiarities of city atmospheric pollution are investigated during background western light air. Spatial and temporal changes in dust concentration are determined. Zones of high, average and low pollution, dust accumulation areas, transfer directions, etc., are identified. Numerical modeling shows that the process of air pollution by dust proceeds in four stages, which depend on the intensity of motor traffic, the micro-relief of the city, and the location of the city mains. In the interval 06:00-09:00 an intensive growth, from 09:00 to 15:00 a constancy or weak decrease, from 18:00 to 21:00 an increase, and from 21:00 to 06:00 a reduction of the dust concentrations take place. The highly polluted areas are located in the vicinity of the city center and at some peripheral territories of the city, where the maximum dust concentration at 9 PM is equal to 2 maximum allowable concentrations. Similar investigations conducted for various meteorological situations will enable us to compile a map of background urban pollution and to elaborate practical measures for ambient air protection.

Keywords: Numerical modelling, source of pollution, dust propagation, western light air.

297 Quality Evaluation of Compressed MRI Medical Images for Telemedicine Applications

Authors: Seddeq E. Ghrare, Salahaddin M. Shreef

Abstract:

Medical imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US) and X-ray are used to diagnose disease. These modalities provide flexible means of reviewing anatomical cross-sections and physiological state in different parts of the human body. Raw medical images have a huge file size and need large storage requirements, so the size of those image files should be reduced to make them suitable for telemedicine applications. Image compression is thus a key factor in reducing the bit rate for transmission or storage while maintaining an acceptable reproduction quality, but it is natural to raise the question of how much an image can be compressed and still preserve sufficient information for a given clinical application. Many techniques for achieving data compression have been introduced. In this study, three different MRI modalities, Brain, Spine and Knee, have been compressed and reconstructed using the wavelet transform. Subjective and objective evaluations have been carried out to investigate the clinical information quality of the compressed images. For the objective evaluation, the results show that the PSNR, which indicates the quality of the reconstructed image, ranges from 21.95 dB to 30.80 dB, 27.25 dB to 35.75 dB, and 26.93 dB to 34.93 dB for Brain, Spine, and Knee, respectively. For the subjective evaluation test, the results show that a compression ratio of 40:1 was acceptable for the brain image, whereas for the spine and knee images 50:1 was acceptable.
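The objective metric used above can be reproduced in a few lines; the sketch below computes PSNR between an original and a degraded slice, with synthetic arrays standing in for MRI data (a library such as PyWavelets would supply the actual DWT compression step).

```python
# PSNR between an original and a reconstructed image slice; arrays below are
# synthetic stand-ins for MRI data.
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
original = rng.integers(0, 256, (256, 256), dtype=np.uint8)
reconstructed = np.clip(original + rng.normal(0, 5, original.shape), 0, 255).astype(np.uint8)

print(f"PSNR = {psnr(original, reconstructed):.2f} dB")
```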

Keywords: Medical Image, Magnetic Resonance Imaging, Image Compression, Discrete Wavelet Transform, Telemedicine.

296 Sustainable Hydrogel Nanocomposites Based on Grafted Chitosan and Clay for Effective Adsorption of Cationic Dye

Authors: H. Ferfera-Harrar, T. Benhalima, D. Lerari

Abstract:

Contamination of water, due to the discharge of untreated industrial wastewaters into the ecosystem, has become a serious problem for many countries. In this study, bioadsorbents based on chitosan-g-poly(acrylamide) and montmorillonite (MMt) clay (CTS-g-PAAm/MMt) hydrogel nanocomposites were prepared via free-radical grafting copolymerization and crosslinking of acrylamide monomer (AAm) onto the natural polysaccharide chitosan (CTS) as backbone, in the presence of various contents of MMt clay as nanofiller. They were then hydrolyzed to obtain highly functionalized pH-sensitive nanomaterials with superior swelling properties. Their structural characterization was conducted by X-Ray Diffraction (XRD) and Scanning Electron Microscopy (SEM) analyses. The adsorption performance of the developed nanohybrids was examined for the removal of methylene blue (MB) cationic dye from aqueous solutions. The factors affecting the removal of MB, such as clay content, pH of the medium, adsorbent dose, initial dye concentration and temperature, were explored. The adsorption process was found to be highly pH dependent. From the adsorption kinetic results, the prepared adsorbents showed remarkable adsorption capacity and a fast adsorption rate; more than 88% MB removal efficiency was reached after 50 min in 200 mg L-1 dye solution. In addition, incorporating various contents of clay enhanced the adsorption capacity of the CTS-g-PAAm matrix from 1685 to a highest value of 1749 mg g-1 for the optimized nanocomposite containing 2 wt.% MMt. The experimental kinetic data were well described by the pseudo-second-order model, while the equilibrium data were represented perfectly by the Langmuir isotherm model. The maximum Langmuir equilibrium adsorption capacity (qm) was found to increase from 2173 mg g-1 to 2221 mg g-1 upon adding 2 wt.% of clay nanofiller. Thermodynamic parameters revealed the spontaneous and endothermic nature of the process. In addition, the reusability study revealed that these bioadsorbents could be regenerated well, with a desorption efficiency above 87%, and without any obvious decrease in removal efficiency compared to the starting materials even after four consecutive adsorption/desorption cycles, where it still exceeded 64%. These results suggest that the optimized nanocomposites are promising low-cost bioadsorbents.
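As a hedged illustration of the isotherm fitting mentioned above, the snippet below fits the Langmuir model, qe = qm·KL·Ce / (1 + KL·Ce), through its linearised form Ce/qe = Ce/qm + 1/(qm·KL); the equilibrium data are synthetic and only chosen to give a qm of the same order as reported.

```python
# Fit the Langmuir isotherm via its linearised form; data below are synthetic.
import numpy as np

Ce = np.array([10, 25, 50, 100, 200, 400], dtype=float)         # mg/L at equilibrium
qe = np.array([450, 900, 1400, 1800, 2050, 2150], dtype=float)  # mg/g adsorbed

slope, intercept = np.polyfit(Ce, Ce / qe, 1)   # Ce/qe = Ce/qm + 1/(qm*KL)
qm = 1.0 / slope
KL = 1.0 / (qm * intercept)

print(f"qm ~ {qm:.0f} mg/g, KL ~ {KL:.4f} L/mg")
```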

Keywords: Chitosan, clay, dye adsorption, hydrogels nanocomposites.

295 Stochastic Simulation of Reaction-Diffusion Systems

Authors: Paola Lecca, Lorenzo Dematte

Abstract:

Reaction-diffusion systems are mathematical models that describe how the concentration of one or more substances distributed in space changes under the influence of local chemical reactions, in which the substances are converted into each other, and diffusion, which causes the substances to spread out in space. The classical representation of a reaction-diffusion system is given by semi-linear parabolic partial differential equations, whose general form is ∂tX(x, t) = DΔX(x, t), where X(x, t) is the state vector, D is the matrix of diffusion coefficients and Δ is the Laplace operator. If the solutes move in a homogeneous system in thermal equilibrium, the diffusion coefficients are constants that do not depend on the local concentration of solvent and solutes or on the local temperature of the medium. In this paper, a new stochastic reaction-diffusion model is presented in which the diffusion coefficients are functions of the local concentration, viscosity and frictional forces of solvent and solute. Such a model provides a more realistic description of molecular kinetics in non-homogeneous and highly structured media such as the intra- and inter-cellular spaces. The movement of a molecule A from a region i to a region j of the space is described as a first-order reaction Ai → Aj with rate constant k, where k depends on the diffusion coefficient. Representing the diffusional motion as a chemical reaction allows a reaction-diffusion system to be assimilated to a pure reaction system and simulated with Gillespie-inspired stochastic simulation algorithms. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the specific speed of the reaction and diffusion events. Redi is the software tool developed to implement this model of reaction-diffusion kinetics and dynamics. It is free software that can be downloaded from http://www.cosbi.eu. To demonstrate the validity of the new reaction-diffusion model, the simulation results of chaperone-assisted protein folding in the cytoplasm obtained with Redi are reported. This case study is drawing renewed attention from the scientific community due to current interest in protein aggregation as a potential cause of neurodegenerative diseases.
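The Gillespie-style treatment of diffusion as a first-order reaction can be sketched in a few lines; the two-compartment example below, with assumed rate constants and an added degradation reaction, illustrates the idea only and is not Redi's model.

```python
# Minimal Gillespie-style simulation: molecule A hops between two compartments
# as a first-order "reaction" with rate k_diff, plus a degradation reaction
# with rate k_deg. Rates and initial counts are illustrative.
import random

random.seed(1)
A = [100, 0]                     # copies of A in compartments 0 and 1
k_diff, k_deg, t, t_end = 0.5, 0.05, 0.0, 20.0

while t < t_end and sum(A) > 0:
    # propensities: hop 0->1, hop 1->0, degrade in 0, degrade in 1
    a = [k_diff * A[0], k_diff * A[1], k_deg * A[0], k_deg * A[1]]
    a0 = sum(a)
    t += random.expovariate(a0)                 # exponential waiting time to next event
    r, acc, j = random.random() * a0, 0.0, 0
    while acc + a[j] < r:                       # pick which event fires
        acc += a[j]
        j += 1
    if j == 0:   A[0] -= 1; A[1] += 1
    elif j == 1: A[1] -= 1; A[0] += 1
    else:        A[j - 2] -= 1

print(f"t = {t:.2f}, A = {A}")
```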

Keywords: Reaction-diffusion systems, Fick's law, stochastic simulation algorithm.

294 A Software Framework for Predicting Oil-Palm Yield from Climate Data

Authors: Mohd. Noor Md. Sap, A. Majid Awan

Abstract:

Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data to find spatio-temporal patterns in climate data using kernel methods, which offer strength in dealing with complex data. This work draws inspiration from the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space, and therefore simplifies the exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, enabling effective and efficient data analysis by exploring patterns and structures in the data, and thus can be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
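A compact weighted kernel k-means with an RBF kernel, the core ingredient of the clustering method described above, is sketched below; the spatial-constraint term is omitted and the data and weights are synthetic.

```python
# Weighted kernel k-means with an RBF kernel (no spatial-constraint term).
# Assignments minimise ||phi(x_i) - m_c||^2 up to the constant K_ii.
import numpy as np

def weighted_kernel_kmeans(K, w, k, n_iter=20, seed=0):
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(0, k, n)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            mask = labels == c
            wc = w[mask]
            sw = wc.sum()
            if sw == 0:
                continue
            dist[:, c] = (-2.0 * K[:, mask] @ wc / sw
                          + (wc @ K[np.ix_(mask, mask)] @ wc) / sw**2)
        labels = dist.argmin(axis=1)
    return labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])  # two synthetic groups
w = np.ones(len(X))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)                      # RBF kernel matrix
print(np.bincount(weighted_kernel_kmeans(K, w, 2)))
```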

Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield

293 Damage to Strawberries Caused by Simulated Transport

Authors: G. La Scalia, M. Enea, R. Micale, O. Corona, L. Settanni

Abstract:

The quality and condition of perishable products delivered to the market and their subsequent selling prices are directly affected by the care taken during harvesting and handling. Mechanical injury, in fact, occurs at all stages, from pre-harvest operations through post-harvest handling, packing and transport to the market. The main implications of this damage are the reduction of the product’s quality and economical losses related to the shelf life diminution. For most perishable products, the shelf life is relatively short and it is typically dictated by microbial growth related to the application of dynamic and static loads during transportation. This paper presents the correlation between vibration levels and microbiological growth on strawberries and woodland strawberries and detects the presence of volatile organic compounds (VOC) in order to develop an intelligent logistic unit capable of monitoring VOCs using a specific sensor system. Fresh fruits were exposed to vibrations by means of a vibrating table in a temperature-controlled environment. Microbiological analyses were conducted on samples, taken at different positions along the column of the crates. The values obtained were compared with control samples not exposed to vibrations and the results show that different positions along the column influence the development of bacteria, yeasts and filamentous fungi.

Keywords: Microbiological analysis, shelf life, transport damage, volatile organic compounds.

292 Vitamin D Deficiency and Insufficiency in Postmenopausal Women with Obesity

Authors: Vladyslav Povoroznyuk, Anna Musiienko, Nataliia Dzerovych, Roksolana Povoroznyuk, Oksana Ivanyk

Abstract:

Deficiency and insufficiency of vitamin D is a pandemic of the 21st century. Patients with obesity have a lower level of vitamin D, but the literature data are contradictory. The purpose of this study is to investigate vitamin D deficiency and insufficiency in postmenopausal women with obesity. We examined 1007 women aged 50-89 years. Mean age was 65.74±8.61 years; mean height was 1.61±0.07 m; mean weight was 70.65±13.50 kg; mean body mass index was 27.27±4.86 kg/m2, and the mean 25(OH)D level in serum was 26.00±12.00 nmol/l. The women were divided into the following six groups depending on body mass index: group I – 338 women with normal body weight, group II – 16 women with insufficient body weight, group III – 382 women with excessive body weight, group IV – 199 women with obesity of class I, group V – 60 women with obesity of class II, and group VI – 12 women with obesity of class III. The level of 25(OH)D in serum was measured by means of an electrochemiluminescent method on an Elecsys 2010 analyzer (Roche Diagnostics, Germany) with cobas test systems. 34.4% of the examined women had a deficiency of vitamin D and 31.4% an insufficiency. Women with obesity of class I (23.60±10.24 ng/ml) and class II (22.38±10.34 ng/ml) had significantly lower levels of 25(OH)D compared to women with normal body weight (28.24±12.99 ng/ml), p=0.00003. In women with obesity, BMI significantly influences vitamin D level, and this influence does not depend on the season.

Keywords: Obesity, body mass index, vitamin D deficiency/insufficiency, postmenopausal women, age.

291 Pragati Node Popularity (PNP) Approach to Identify Congestion Hot Spots in MPLS

Authors: E. Ramaraj, A. Padmapriya

Abstract:

In large Internet backbones, Service Providers typically have to explicitly manage the traffic flows in order to optimize the use of network resources. This process is often referred to as Traffic Engineering (TE). Common objectives of traffic engineering include balancing traffic distribution across the network and avoiding congestion hot spots. Raj P H and SVK Raja designed a Bayesian network approach to identify congestion hot spots in MPLS. In this approach, a Conditional Probability Distribution (CPD) is specified for every node in the network, and the congestion hot spots are identified based on the CPD. The traffic can then be distributed so that no link in the network is either over-utilized or under-utilized. Although the Bayesian network approach has been implemented in operational networks, it has a number of well-known scaling issues. This paper proposes a new approach, which we call the Pragati (meaning "progress") Node Popularity (PNP) approach, to identify the congestion hot spots from the network topology alone. In the new Pragati Node Popularity approach, IP routing runs natively over the physical topology rather than depending on the CPD of each node as in the Bayesian network. We first illustrate our approach with a simple network, and then present a formal analysis of the Pragati Node Popularity approach. Our PNP approach shows that, for any given network of the Bayesian approach, it identifies exactly the same result with minimum effort. We further extend the result to a more generic one: it holds for any network topology, even when the network contains loops. A theoretical insight of our result is that the optimal routing is always shortest-path routing with respect to some consideration of hot spots in the network.
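As a toy illustration of ranking hot spots from topology alone, the sketch below scores each node by how many shortest paths pass through it (a betweenness-like count) using networkx; the graph and this particular popularity measure are assumptions for illustration, not the paper's PNP definition.

```python
# Rank candidate congestion hot spots from topology alone: count how many
# shortest paths pass through each node. Graph and measure are illustrative.
from itertools import permutations
import networkx as nx

G = nx.Graph([("A", "B"), ("B", "C"), ("B", "D"), ("C", "E"), ("D", "E"), ("E", "F")])

popularity = {n: 0 for n in G}
for s, t in permutations(G.nodes, 2):
    for path in nx.all_shortest_paths(G, s, t):
        for n in path[1:-1]:                 # interior (transit) nodes only
            popularity[n] += 1

print(sorted(popularity.items(), key=lambda kv: -kv[1]))
```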

Keywords: Conditional Probability Distribution, Congestion hotspots, Operational Networks, Traffic Engineering.

290 Estimation of the Bit Side Force by Using Artificial Neural Network

Authors: Mohammad Heidari

Abstract:

Horizontal wells are proven to be better producers because they can be extended for a long distance in the pay zone. Engineers have the technical means to forecast the well productivity for a given horizontal length. However, experience has shown that the actual production rate is often significantly less than forecasted. It is a difficult, if not impossible, task to identify the real reason why a horizontal well is not producing what was forecasted. Often the source of the problem lies in the drilling of the horizontal section, such as permeability reduction in the pay zone due to mud invasion or snaky well patterns created during drilling. Although drillers aim to drill a constant-inclination hole in the pay zone, the more frequent outcome is a sinusoidal wellbore trajectory. The two factors which play an important role in wellbore tortuosity are the inclination and the side force at the bit. A constant-inclination horizontal well can only be drilled if the bit face is maintained perpendicular to the longitudinal axis of the bottom hole assembly (BHA) while keeping the side force nil at the bit. This approach assumes that there exists no formation force at the bit. Hence, an appropriate BHA can be designed if the bit side force and bit tilt are determined accurately. The Artificial Neural Network (ANN) is superior to existing analytical techniques. In this study, neural networks have been employed as a general approximation tool for estimation of the bit side forces. A number of samples are analyzed with the ANN for the bit side force parameter, and the results are compared with exact analysis. A Back-Propagation Neural network (BPN) is used for approximation of the bit side forces. The resulting low relative error of the test indicates the usability of the BPN in this area.
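A tiny single-hidden-layer back-propagation network is sketched below in the spirit of the BPN used here; the synthetic input features and target are placeholders, not drilling data.

```python
# Single-hidden-layer backpropagation network fitting a scalar target.
# Features and target below are synthetic placeholders, not BHA data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 3))                      # 3 synthetic input features
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2]).reshape(-1, 1)

W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros((1, 1))
lr = 0.05

for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)                          # hidden layer
    pred = H @ W2 + b2                                # linear output
    err = pred - y
    # backpropagation of the mean squared-error gradient
    gW2 = H.T @ err / len(X);  gb2 = err.mean(0, keepdims=True)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X);   gb1 = dH.mean(0, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(f"relative error: {np.abs(err).mean() / np.abs(y).mean():.3f}")
```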

Keywords: Artificial Neural Network, BHA, Horizontal Well, Stabilizer.

289 Increasing Fishery Economic Added Value through Post Fishing Program: Cold Storage Program

Authors: Indrijuli Magsari Putri, Dicky R. Munaf

Abstract:

The purpose of this paper is to guide the effort to improve the economic added value of Indonesian fisheries products through a post-fishing program, namely a cold storage program. Indonesia's fisheries potential has been acknowledged by the world: FAO (2009) stated that Indonesia is among the ten highest producers of fishery products in the world. Based on BPS (Statistics Indonesia) data, the national fisheries production in 2011 reached 5.714 million tons, of which 93.55% came from marine fisheries and 6.45% from open waters. Indonesian waters make up two-thirds of the country's territory and have given enormous benefits to Indonesia, especially fishermen. Improving the economic level of fishermen requires efforts to develop fisheries business units. One of these efforts is improving the quality of products marketed at the regional and international levels. This certainly needs the support of various fishery facilities (from infrastructure to superstructure), one of which is cold storage. Given the many benefits of cold storage as a means of processing fishery resources, the Indonesia Maritime Security Coordinating Board (IMSCB), as one of the maritime institutions for maritime security and safety, has a program to empower coastal communities by encouraging the development of cold storage in middle and lower fishery business units. The development of cold storage facilities able to play their role to the maximum requires the synergistic efforts of various parties.

Keywords: Cold Storage, Fish, Regulation.

288 Optimization of the Dental Direct Digital Imaging by Applying the Self-Recognition Technology

Authors: Mina Dabirinezhad, Mohsen Bayat Pour, Amin Dabirinejad

Abstract:

This paper is intended to introduce a technology that solves some of the deficiencies of direct digital radiology. Nowadays, digital radiology is the latest progression in dental imaging and has become an essential part of dentistry. Direct digital radiology has two main parts: an intraoral X-ray machine and a sensor (digital image receptor). Dentists and dental nurses experience difficulties during the image-taking process with the direct digital X-ray machine. For instance, they sometimes need to readjust the sensor in the patient's mouth and take the X-ray image again because of its low quality. Another problem is that the position of the sensor may move in the patient's mouth, producing an image that is inappropriate for the dentists. This makes it a time-consuming process for dentists and dental nurses. On the other hand, taking several X-ray images creates problems for the patient, such as harm to their health and pain in their mouth due to the pressure of the sensor on the jaw. The author provides a technology to solve the above-mentioned issues, called "Self-Recognition Direct Digital Radiology" (SDDR). This technology is based on the principle that the intraoral X-ray machine is capable of detecting the location of the sensor in the patient's mouth automatically. In addition to solving the aforementioned problems, SDDR technology has fewer environmental impacts in comparison to the previous version.

Keywords: Dental direct digital imaging, digital image receptor, digital x-ray machine, and environmental impacts.
