Search results for: human error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3073

1273 EEG Waves Classifier using Wavelet Transform and Fourier Transform

Authors: Maan M. Shaker

Abstract:

The electroencephalograph (EEG) signal is one of the most widely used signals in the bioinformatics field due to the rich information it carries about human tasks. In this work, EEG wave classification is achieved using the Discrete Wavelet Transform (DWT) together with the Fast Fourier Transform (FFT), applied to normalized EEG data. The DWT classifies the EEG waves by frequency band, while the FFT is used to visualize the EEG waves at the multiple resolutions produced by the DWT. Several real EEG data sets (from both normal and abnormal subjects) have been tested, and the results confirm the validity of the proposed technique.

Keywords: Bioinformatics, DWT, EEG waves, FFT.
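
The abstract does not include an implementation; as a rough sketch of the general approach (not the authors' code), a multi-level DWT can split an EEG trace into the conventional frequency bands, with an FFT used to inspect each level. This assumes NumPy and the PyWavelets library; the wavelet choice (db4), sampling rate and synthetic trace are placeholders.

```python
import numpy as np
import pywt

fs = 128.0                          # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # synthetic trace

# Normalize, then decompose: each DWT level halves the band, so with
# fs = 128 Hz the levels roughly map onto the classic EEG bands.
eeg = (eeg - eeg.mean()) / eeg.std()
coeffs = pywt.wavedec(eeg, 'db4', level=5)
names = ['<2 Hz (~delta)', '2-4 Hz (~delta)', '4-8 Hz (~theta)',
         '8-16 Hz (~alpha)', '16-32 Hz (~beta)', '32-64 Hz (~gamma)']

for name, c in zip(names, coeffs):
    # An FFT of each level's coefficients visualizes its spectral content
    spectrum = np.abs(np.fft.rfft(c))
    print(f'{name}: {c.size} coeffs, dominant bin {spectrum.argmax()}')
```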

1272 Rotation Invariant Fusion of Partial Image Parts in Vista Creation using Missing View Regeneration

Authors: H. B. Kekre, Sudeep D. Thepade

Abstract:

The automatic construction of large, high-resolution image vistas (mosaics) is an active area of research in the fields of photogrammetry [1,2], computer vision [1,4], medical image processing [4], computer graphics [3] and biometrics [8]. Image stitching is one of the possible options for creating image mosaics. Vista creation in image processing is used to construct an image with a larger field of view than could be obtained with a single photograph. It refers to transforming and stitching multiple images into a new aggregate image without any visible seam or distortion in the overlapping areas. The vista creation process aligns two partial images over each other and blends them together. Image mosaics allow one to compensate for differences in viewing geometry; they can thus be used to simplify tasks by simulating the condition in which the scene is viewed from a fixed position with a single camera. While obtaining partial images, geometric anomalies such as rotation and scaling are bound to occur. To nullify the effect of rotation of partial images on the vista creation process, we propose a rotation-invariant vista creation algorithm in this paper. Rotation of partial image parts in the proposed method may introduce missing regions in the vista. To correct this error, that is, to fill the missing regions, we apply an image inpainting method to the created vista. This missing view regeneration method also overcomes the problem of missing views [31] in the vista due to cropping, irregular boundaries of partial image parts and errors in digitization [35]. The method regenerates the missing view of the vista using the information present in the vista itself.

Keywords: Vista, Overlap Estimation, Rotation Invariance, Missing View Regeneration.
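
As a hedged sketch of the pipeline the abstract describes (rotation-compensated alignment followed by inpainting of missing regions), OpenCV's feature matching and `cv2.inpaint` can be combined as below. This is a generic reconstruction, not the authors' algorithm; the image file names are placeholders.

```python
import cv2
import numpy as np

# Placeholder inputs: two overlapping partial images (paths are hypothetical)
img1 = cv2.imread('part1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('part2.png', cv2.IMREAD_GRAYSCALE)

# 1. Estimate the rotation/scale/translation between the parts from ORB matches
orb = cv2.ORB_create(1000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
M, _ = cv2.estimateAffinePartial2D(src, dst)   # rotation + scale + shift

# 2. Warp the second part into the first part's frame and blend (the vista)
h, w = img1.shape
warped = cv2.warpAffine(img2, M, (w * 2, h))
vista = warped.copy()
vista[:, :w] = np.maximum(vista[:, :w], img1)  # crude blend over the overlap

# 3. Regenerate missing regions (zeros left by rotation/cropping) by inpainting
missing = (vista == 0).astype(np.uint8)
filled = cv2.inpaint(vista, missing, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite('vista_filled.png', filled)
```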

1271 The Journey from Lean Manufacturing to Industry 4.0: The Rail Manufacturing Process in Mexico

Authors: Diana Flores Galindo, Richard Gil Herrera

Abstract:

Nowadays, Lean Manufacturing and Industry 4.0 are very important in every country. One of their main benefits is continued market presence. It has been identified that there is a need to change existing educational programs, as well as to update the knowledge and skills of existing employees. It should be borne in mind that behind each technological improvement there is a human being; human talent cannot be neglected. The main objectives of this article are to review the link between Lean Manufacturing and the incorporation of Industry 4.0, along with the steps to follow to implement it; to analyze the current situation; and to study the implications and benefits of this new trend, with a particular focus on Mexico. Lean Manufacturing and Industry 4.0 implementation waves must always take care of the most important capital: intellectual capital. The methodology used in this article comprised the following steps: reviewing the reality of the fourth industrial revolution, reviewing employees' skills on the journey to become world-class, and analyzing the situation in Mexico. Lean Manufacturing and Industry 4.0 were studied not as exclusive concepts, but as complementary ones. The methodological framework used is focused on motivating companies' collaborators to guarantee common results, innovate, and remain in the market in the face of new requirements from company stakeholders. The key findings were that both trends emphasize the need to improve communication across the entire company and to incorporate new technologies into everyday work, from the shop floor to administrative staff, to help improve processes. Taking care of people, activities and processes will bring a company success. In the specific case of Mexico, companies in all sectors need to be aware of and implement technological improvements according to their specific needs. Low-cost labor represents one of the most typical barriers. In conclusion, companies must build a roadmap according to their strategy and needs to achieve their short-, medium- and long-term goals.

Keywords: Lean management, lean manufacturing, industry 4.0, motivation, SWOT analysis, Hoshin Kanri.

1270 Method for Auto-Calibrate Projector and Color-Depth Systems for Spatial Augmented Reality Applications

Authors: R. Estrada, A. Henriquez, R. Becerra, C. Laguna

Abstract:

Spatial Augmented Reality is a variation of Augmented Reality in which a Head-Mounted Display is not required. This variation is useful in cases where the need for a Head-Mounted Display is itself a limitation. To achieve this, Spatial Augmented Reality techniques substitute the technological elements of Augmented Reality: the virtual world is projected onto a physical surface. To create an interactive spatial augmented experience, the application must be aware of the spatial relations that exist between its core elements (here, a projection system and an input system), and the process of achieving this spatial awareness is called system calibration. The Spatial Augmented Reality system is considered calibrated if the projected virtual world scale matches the real-world scale, meaning that a virtual object maintains its perceived dimensions when projected into the real world. Likewise, the input system is calibrated if the application knows the relative position of a point in the projection plane with respect to the origin of the RGB-depth sensor. Any kind of projection technology can be used (light-based projectors, close-range projectors, or screens), as long as it complies with the defined constraints; the method was tested on different configurations. The proposed procedure does not rely on a physical marker, minimizing human intervention in the process. The tests were made using a Kinect V2 as the input sensor and several projection devices. In order to test the method, the defined constraints were applied to a variety of physical configurations; once the method was executed, several variables were measured to assess its performance. It was demonstrated that the method can handle different arrangements, giving the user a wide range of setup possibilities.

Keywords: Color depth sensor, human computer interface, interactive surface, spatial augmented reality.
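
The abstract states the goal of calibration without giving the algorithm; a minimal sketch of one common approach, relating projector pixels to sensor coordinates through a homography fitted on projected/detected point pairs, is shown below. OpenCV is assumed, and the point lists are placeholders that would in practice come from detecting projected markers in the color-depth stream; this is an illustration, not the paper's markerless procedure.

```python
import cv2
import numpy as np

# Hypothetical correspondences: where known projector pixels were detected
# by the RGB-depth sensor on the (planar) projection surface.
proj_pts = np.float32([[100, 100], [860, 120], [880, 620], [120, 600]])
cam_pts  = np.float32([[212, 143], [915, 160], [935, 640], [230, 615]])

# A homography maps sensor observations back to projector pixel coordinates,
# which is sufficient when the projection surface is a plane.
H, _ = cv2.findHomography(cam_pts, proj_pts)

def camera_to_projector(pt):
    """Map a point seen by the sensor to the projector's image plane."""
    p = np.array([pt[0], pt[1], 1.0])
    q = H @ p
    return q[:2] / q[2]

print(camera_to_projector((500.0, 400.0)))
```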

1269 Effect of Injection Moulding Process Parameter on Tensile Strength Using Taguchi Method

Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma

Abstract:

The plastic industry plays a very important role in the economy of any country, generally accounting for a leading share of it. Since metals and their alloys are comparatively scarce, it is beneficial to produce plastic products and components, which find application in many industrial as well as household consumer products. About 50% of plastic products are manufactured by the injection moulding process. For the production of better quality products, the quality characteristics and performance of the product have to be controlled, and the process parameters play a significant role here; hence their control is essential. This paper describes the effect of parameter selection on the injection moulding process, with the aim of defining suitable parameters for producing plastic products. Selecting the process parameters by trial and error is neither desirable nor acceptable, as it often tends to increase cost and time. Hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi's orthogonal array to achieve the result with the least number of experiments. The plastic material polypropylene was studied. Tensile strength tests of specimens produced by the injection moulding machine were performed on a universal testing machine. Using the Taguchi technique with the help of MiniTab-14 software, the best values of injection pressure, melt temperature, packing pressure and packing time were obtained. We found that the process parameter packing pressure contributes most to the production of plastic products with good tensile strength.

Keywords: Injection moulding, tensile strength, Taguchi method, poly-propylene.
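
As a hedged illustration of the Taguchi analysis the abstract describes (not the authors' actual data), a larger-is-better signal-to-noise ratio can be computed per run of an L9 orthogonal array and averaged per factor level to rank parameter contributions. The factor levels and tensile results below are invented for demonstration.

```python
import numpy as np

# L9 orthogonal array: 4 factors (injection pressure, melt temperature,
# packing pressure, packing time) at 3 levels each; entries are level indices.
L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
               [1,0,1,2],[1,1,2,0],[1,2,0,1],
               [2,0,2,1],[2,1,0,2],[2,2,1,0]])
# Hypothetical tensile strength (MPa) measured for each of the 9 runs
y = np.array([31.2, 32.5, 33.1, 31.8, 33.4, 32.0, 32.2, 31.5, 33.8])

# Larger-is-better S/N ratio per run: -10*log10(mean(1/y^2))
sn = -10 * np.log10(1.0 / y**2)

factors = ['injection pressure', 'melt temperature',
           'packing pressure', 'packing time']
for j, name in enumerate(factors):
    means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    # A larger spread between level means indicates a larger contribution
    print(f'{name}: level means {np.round(means, 3)}, '
          f'range {max(means) - min(means):.3f}')
```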

1268 Model Membrane from Shed Snake Skins

Authors: M. Kumpugdee-Vollrath, T. Subongkot, T. Ngawhirunpat

Abstract:

In this project, we are interested in studying different kinds of shed snake skin in order to apply them as model membranes for pharmaceutical purposes in place of human stratum corneum. Several types of shed snake skin, as well as model drugs, were studied by different techniques. The data will give a deeper understanding of the interaction between drugs and model membranes and may allow us to choose a suitable model membrane for studying the effect of pharmaceutical products.

Keywords: DSC, FTIR, permeation, SAXS, shed snake skin.

1267 Combined Sewer Overflow forecasting with Feed-forward Back-propagation Artificial Neural Network

Authors: Achela K. Fernando, Xiujuan Zhang, Peter F. Kinley

Abstract:

A feed-forward, back-propagation Artificial Neural Network (ANN) model has been used to forecast the occurrence of wastewater overflows in a combined sewerage reticulation system. This approach was tested to evaluate its applicability as an alternative to the common practice of developing a complete conceptual, mathematical hydrological-hydraulic model of the sewerage system to enable such forecasts. The ANN approach obviates the need for a priori understanding and representation of the underlying hydrological-hydraulic phenomena in mathematical terms, instead learning the characteristics of a sewer overflow from historical data. The performance of the standard feed-forward, back-propagation-of-error algorithm was enhanced by a modified data-normalizing technique that enabled the ANN model to extrapolate into territory unseen in the training data. The algorithm and the data-normalizing method are presented, along with ANN model outputs that indicate good accuracy in the forecasted sewer overflow rates. However, it was revealed that accurate forecasting of the overflow rates is heavily dependent on the availability of real-time flow monitoring at the overflow structure to provide antecedent flow rate data. The ability of the ANN to forecast the overflow rates without the antecedent flow rates (as is the case with traditional conceptual reticulation models) was found to be quite poor.

Keywords: Artificial Neural Networks, Back-propagation learning, Combined sewer overflows, Forecasting.
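
The abstract mentions a modified normalization that lets the ANN extrapolate beyond the training range but does not specify it. One widely used trick consistent with that description is to scale the training data into an inner sub-range (e.g. 0.1 to 0.9) of the working interval, leaving headroom for unseen extremes. The sketch below (scikit-learn, synthetic data) is an assumption-laden illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rain = rng.uniform(0, 40, 500)                    # synthetic rainfall intensity
overflow = 0.8 * np.maximum(rain - 15, 0) + rng.normal(0, 0.5, 500)

def scale(x, lo, hi, a=0.1, b=0.9):
    """Map [lo, hi] to [a, b], leaving headroom for values beyond hi."""
    return a + (x - lo) * (b - a) / (hi - lo)

def unscale(y, lo, hi, a=0.1, b=0.9):
    return lo + (y - a) * (hi - lo) / (b - a)

lo_x, hi_x = rain.min(), rain.max()
lo_y, hi_y = overflow.min(), overflow.max()

net = MLPRegressor(hidden_layer_sizes=(10,), activation='logistic',
                   solver='lbfgs', max_iter=2000, random_state=0)
net.fit(scale(rain, lo_x, hi_x).reshape(-1, 1),
        scale(overflow, lo_y, hi_y))

# A storm 20% beyond anything seen in training still maps inside (0, 1),
# so the hidden sigmoids stay in the regime they were trained on.
extreme = np.array([[scale(1.2 * hi_x, lo_x, hi_x)]])
print(unscale(net.predict(extreme)[0], lo_y, hi_y))
```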

1266 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitoring land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods are unable to indicate continuous change occurring over a long period of time. To achieve the objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period of time, assuming there is no considerable change during that period, and then compares it with multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region: the generalized likelihood ratio test (GLRT) is used to detect changed pixels against the probabilistically estimated model of the corresponding pixel. Changed pixels are detected assuming that the images have been co-registered prior to estimation; to minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are several challenges in this method. The first and foremost is obtaining a sufficiently large number of data sets for multivariate distribution modelling, as many images are discarded due to cloud coverage. Due to imperfect modelling, there is a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.

Keywords: Co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection.
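
A minimal per-pixel sketch of the GLRT the abstract refers to, assuming a Gaussian model for each pixel's multispectral history (NumPy only; the image stack and threshold are synthetic placeholders, and the 8-neighborhood pooling of the real method is omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W, B = 24, 64, 64, 4          # 24 dates, 64x64 pixels, 4 bands
stack = rng.normal(0.3, 0.05, (T, H, W, B))   # "no change" training stack
new = rng.normal(0.3, 0.05, (H, W, B))
new[20:30, 20:30] += 0.4            # inject a changed patch

# Per-pixel Gaussian model estimated from the time series
mu = stack.mean(axis=0)                                # (H, W, B)
diff = stack - mu
cov = np.einsum('thwi,thwj->hwij', diff, diff) / (T - 1)
cov += 1e-6 * np.eye(B)                                # regularize

# GLRT statistic: for a Gaussian model it reduces to the squared
# Mahalanobis distance of the new observation from the fitted model.
d = new - mu
stat = np.einsum('hwi,hwij,hwj->hw', d, np.linalg.inv(cov), d)

changed = stat > np.quantile(stat, 0.99)   # placeholder threshold
print('flagged pixels:', changed.sum())
```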

1265 The Evaluation of Gravity Anomalies Based on Global Models by Land Gravity Data

Authors: M. Yilmaz, I. Yilmaz, M. Uysal

Abstract:

The Earth system generates different phenomena that are observable at the surface of the Earth, such as mass deformations and displacements leading to plate tectonics, earthquakes, and volcanism. The dynamic processes associated with the interior, surface, and atmosphere of the Earth affect the three pillars of geodesy: the shape of the Earth, its gravity field, and its rotation. Geodesy establishes a characteristic structure in order to define, monitor, and predict the behaviour of the whole Earth system. The traditional and new instruments, observables, and techniques in geodesy are related to the gravity field. Therefore, geodesy monitors the gravity field and its temporal variability in order to transform the geodetic observations made on the physical surface of the Earth onto the geometrical surface on which positions are mathematically defined. In this paper, the main components of gravity field modeling, the Free-air and Bouguer gravity anomalies, are calculated via recent global models (EGM2008, EIGEN6C4, and GECO) over a selected study area. The model-based gravity anomalies are compared with the corresponding terrestrial gravity data in terms of standard deviation (SD) and root mean square error (RMSE) in order to determine the best-fitting global model for the study area, at a regional scale in Turkey. The lowest SD (13.63 mGal) and RMSE (15.71 mGal) were obtained by EGM2008 for the Free-air gravity anomaly residuals. For the Bouguer gravity anomaly residuals, EIGEN6C4 provides the lowest SD (8.05 mGal) and RMSE (8.12 mGal). The results indicate that EIGEN6C4 can be a useful tool for modeling the gravity field of the Earth over the study area.

Keywords: Free-air gravity anomaly, Bouguer gravity anomaly, global model, land gravity.
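
For reference, the comparison metrics quoted in the abstract are straightforward to compute; a small sketch (NumPy, with placeholder anomaly values standing in for the model-derived and terrestrial data in mGal):

```python
import numpy as np

# Placeholder data: gravity anomalies at common benchmark points (mGal)
terrestrial = np.array([12.4, -3.1, 25.0, 8.7, -14.2])
model_based = np.array([14.0, -1.9, 23.5, 10.1, -12.8])  # e.g. from a global model

residuals = terrestrial - model_based
sd = residuals.std(ddof=1)                 # spread about the mean residual
rmse = np.sqrt(np.mean(residuals**2))      # also captures any systematic bias

print(f'SD = {sd:.2f} mGal, RMSE = {rmse:.2f} mGal')
```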

1264 Numerical Simulation of the Flowing of Ice Slurry in Seawater Pipe of Polar Ships

Authors: Li Xu, Huanbao Jiang, Zhenfei Huang, Lailai Zhang

Abstract:

In recent years, with global warming, the sea-ice extent of the North Arctic has undergone an evident decrease, and the Arctic channel has attracted the attention of the shipping industry. Ice crystals present in the seawater of the Arctic channel, which enter the ship's seawater system with the seawater, have been found to block the seawater pipe. In serious cases, cooler paralysis, auxiliary machine faults and even paralysis of the ship's power system may occur. In order to reduce the effect of high temperature on auxiliary equipment, the seawater system uses external ice-water in the cooling cycle and keeps it flowing, so that the distribution of ice crystals in the seawater pipe can be obtained. As an ice slurry is a solid-liquid two-phase system, the flow of the ice-water mixture is complex and diverse. In this paper, the flow of ice slurry in a seawater pipe is simulated with fluid dynamics simulation software based on the k-ε turbulence model. As the ice packing fraction is a key factor affecting the distribution of ice crystals, its influence on the flow of the ice slurry is analyzed. The simulation results show that when the ice packing fraction is relatively large, the distribution of ice crystals is uneven during the flow of the seawater, which increases the possibility of blockage. This provides a scientific forecasting method for the formation of ice blockages in the seawater piping system and has important significance for the operating reliability of polar ships in the future.

Keywords: Ice slurry, seawater pipe, ice packing fraction, numerical simulation.

1263 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data, and the presented filter utilizes a weight function to allocate a weight to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.

Keywords: Airborne laser scanning, digital terrain models, filtering, forested areas.
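
The abstract describes weighted smoothing-spline filtering without giving details; a hedged 1-D sketch of the general idea (iteratively down-weighting points that sit above the fitted surface, so canopy returns lose influence) is shown below, using SciPy's `UnivariateSpline` on a synthetic profile. The weighting rule is a generic choice, not the paper's weight function.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 100, 400))
ground = 0.02 * (x - 50) ** 2 / 10           # gentle synthetic terrain
z = ground + rng.normal(0, 0.05, x.size)
canopy = rng.random(x.size) < 0.4            # 40% of returns hit vegetation
z[canopy] += rng.uniform(2, 15, canopy.sum())

w = np.ones_like(z)
for _ in range(5):
    spline = UnivariateSpline(x, z, w=w, s=x.size)
    resid = z - spline(x)
    # Down-weight points well above the surface (likely vegetation);
    # keep full weight for points at or below it (likely ground).
    w = np.where(resid > 0.3, 0.01, 1.0)

dtm = spline(x)
rmse = np.sqrt(np.mean((dtm - ground) ** 2))
print(f'RMSE vs true terrain: {rmse:.2f} m')
```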

1262 System Performance Comparison of Turbo and Trellis Coded Optical CDMA Systems

Authors: M. Kulkarni, R. K. Sinha, D. R. Bhaskar

Abstract:

In this paper, we have compared the performance of Turbo and Trellis coded optical code division multiple access (OCDMA) systems. The comparison of the two codes has been accomplished by employing optical orthogonal codes (OOCs). The Bit Error Rate (BER) performances have been compared by varying the code weights of the address codes employed by the system. We have considered the effects of optical multiple access interference (OMAI), thermal noise and avalanche photodiode (APD) detector noise. Analysis has been carried out for the system with and without a double optical hard limiter (DHL). From the simulation results, it is observed that a better and more distinct comparison can be drawn between the performance of the Trellis and Turbo coded systems at lower code weights of the optical orthogonal codes, for a fixed number of users. The BER performance of the Turbo coded system is found to be better than that of the Trellis coded system for all code weights considered in the simulation. Nevertheless, the Trellis coded OCDMA system is found to be better than the uncoded OCDMA system. Trellis coded OCDMA can be used in systems where decoding time has to be kept low, bandwidth is limited and high reliability is not a crucial factor, as in local area networks; its hardware is also less complex than that of the Turbo coded system, and it can be used without significant modification of existing chipsets. Turbo coded OCDMA can, however, be employed in systems where high reliability is needed and bandwidth is not a limiting factor.

Keywords: Avalanche photodiode, optical code division multiple access, optical multiple access interference, Trellis coded modulation, Turbo code.

1261 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Electricity prices have sophisticated features such as high volatility, nonlinearity and high frequency that make forecasting quite difficult. The electricity price has a volatile but non-random character, so it is possible to identify patterns based on historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow-ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been used in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models has been evaluated with publicly available data from the Turkish day-ahead electricity market. Historical load, price and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. In this regard, forecasting studies have been carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market in the related time period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared with regard to their MAE (Mean Absolute Error) and MSE (Mean Squared Error) results. The DNN models give better forecasting performance than the shallow-ANNs; the best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402 and 0.409.

Keywords: Deep learning, artificial neural networks, energy price forecasting, Turkey.
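
As an illustrative, assumption-laden sketch of the comparison described (scikit-learn, with synthetic hourly data in place of the Turkish market series), a shallow and a deeper MLP can be scored with MAE and MSE as follows:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5000
load = rng.uniform(20, 40, n)        # synthetic hourly load (GW)
temp = rng.uniform(-5, 35, n)        # synthetic temperature (deg C)
hour = rng.integers(0, 24, n)
price = 2 * load + 0.5 * np.abs(temp - 18) + 5 * np.sin(hour / 24 * 2 * np.pi)
price += rng.normal(0, 1.5, n)

X = np.column_stack([load, temp, hour])
X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)

models = {
    'shallow-ANN (1 hidden layer)': MLPRegressor((32,), max_iter=1000,
                                                  random_state=0),
    'DNN (4 hidden layers)': MLPRegressor((64, 64, 32, 16), max_iter=1000,
                                          random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    pred = m.predict(X_te)
    print(f'{name}: MAE={mean_absolute_error(y_te, pred):.3f}, '
          f'MSE={mean_squared_error(y_te, pred):.3f}')
```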

1260 Hybrid Adaptive Modeling to Enhance Robustness of Real-Time Optimization

Authors: Hussain Syed Asad, Richard Kwok Kit Yuen, Gongsheng Huang

Abstract:

Real-time optimization has been considered an effective approach for improving the energy-efficient operation of heating, ventilation, and air-conditioning (HVAC) systems. In model-based real-time optimization, model mismatches cannot be avoided; when they are significant, the performance of the real-time optimization is impaired and the expected energy saving is reduced. In this paper, the model mismatches of a chiller plant under real-time optimization are considered. In the real-time optimization of a chiller plant, a simplified semi-physical or grey-box chiller model is typically used, which has to be identified from available operation data. To overcome the mismatches associated with the chiller model, a hybrid Genetic Algorithm (HGA) method is used for online real-time training of the chiller model. The HGA combines a Genetic Algorithm (for global search) with a traditional optimization method (faster and more efficient for local search) to avoid the conventional hit-and-trial process of GAs. The identification of the model parameters is posed as an optimization problem, with the objective function being the least-squares error between the model output and the actual output of the chiller plant. A case study is used to illustrate the implementation of the proposed method. It is shown that the proposed approach is able to provide reliability in decision making, enhance the robustness of the real-time optimization strategy and improve energy performance.

Keywords: Energy performance, hybrid adaptive modeling, hybrid genetic algorithms, real-time optimization, heating, ventilation, and air-conditioning.
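
A compact sketch of the hybrid idea the abstract describes (GA for global search, then a conventional optimizer for fast local refinement), applied to least-squares fitting of a toy grey-box chiller curve. NumPy and SciPy are assumed; the model form and data are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def chiller_power(theta, load):
    """Toy grey-box model: quadratic part-load power curve."""
    a, b, c = theta
    return a + b * load + c * load ** 2

load = rng.uniform(0.3, 1.0, 200)                       # part-load ratio
measured = chiller_power([50, 120, 80], load) + rng.normal(0, 3, 200)

def sse(theta):
    return np.sum((chiller_power(theta, load) - measured) ** 2)

# Global phase: a tiny GA (truncation selection + Gaussian mutation)
pop = rng.uniform(0, 200, (40, 3))
for _ in range(60):
    fit = np.array([sse(p) for p in pop])
    parents = pop[np.argsort(fit)[:20]]
    children = parents + rng.normal(0, 5, parents.shape)  # mutation
    pop = np.vstack([parents, children])
best = pop[np.argmin([sse(p) for p in pop])]

# Local phase: a conventional optimizer refines the GA's best candidate
result = minimize(sse, best, method='Nelder-Mead')
print('identified parameters:', np.round(result.x, 2))
```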

1259 Development of 3D Laser Scanner for Robot Navigation

Authors: A. Emre Ozturk, Ergun Ercelebi

Abstract:

Autonomous robotic systems need equipment like a human eye for their movement. In this study, a 3D laser scanner has been designed and implemented for such autonomous robotic systems. In general, 3D laser scanners use two-dimensional laser range finders that are moved along one axis (1D) to generate the model. In this study, the model is obtained by a one-dimensional laser range finder that is moved in two axes (2D), so the laser scanner can be produced more cheaply.

Keywords: 3D Laser Scanner, embedded systems.

1258 Designing a Novel General Sorting Network Constructor Using Artificial Evolution

Authors: Michal Bidlo, Radek Bidlo, Lukas Sekanina

Abstract:

A method is presented for the construction of arbitrary even-input sorting networks exhibiting better properties than networks created using a conventional technique of the same type. The method was discovered by means of a genetic algorithm combined with application-specific development. As with human inventions in the area of theoretical computer science, the evolved invention was analyzed: its generality was proven, and its area and time complexities were determined.

Keywords: Development, genetic algorithm, program, sorting network.
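
For context: a fixed-width comparator network can be verified exhaustively via the zero-one principle (it sorts all inputs if and only if it sorts all 0/1 inputs). A small checker, with a known 4-input network as a placeholder example (this is standard background, not the authors' code):

```python
from itertools import product

def sorts_all(network, n):
    """Zero-one principle: a comparator network sorts every input
    iff it sorts every 0/1 input, so 2**n checks suffice."""
    for bits in product([0, 1], repeat=n):
        v = list(bits)
        for i, j in network:          # comparator orders positions i < j
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        if any(v[k] > v[k + 1] for k in range(n - 1)):
            return False
    return True

# A known optimal 4-input, 5-comparator sorting network
net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(sorts_all(net4, 4))   # True
```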

1257 Biosynthesis and In vitro Studies of Silver Bionanoparticles Synthesized from Aspergillus species and its Antimicrobial Activity against Multi Drug Resistant Clinical Isolates

Authors: M. Saravanan

Abstract:

Antimicrobial resistance is becoming a major factor in virtually all hospital-acquired infections; infections may soon become untreatable, which is a serious public health problem. These concerns have led to a major research effort to discover alternative strategies for the treatment of bacterial infection. Nanobiotechnology is an emerging and fast-developing field with potential applications for human welfare. An important area of nanotechnology is the development of reliable and environmentally friendly processes for the synthesis of nanoscale particles through biological systems. The present study reports on the use of a fungal strain of Aspergillus species for the extracellular synthesis of bionanoparticles from 1 mM silver nitrate (AgNO3) solution. The report focuses on the synthesis of metallic silver bionanoparticles by reduction of aqueous Ag+ ions with the culture supernatants of the microorganism. The bio-reduction of the Ag+ ions in solution was monitored in the aqueous phase, and the spectrum of the solution was measured with a UV-visible spectrophotometer. The bionanoscale particles were further characterized by Atomic Force Microscopy (AFM), Fourier Transform Infrared Spectroscopy (FTIR) and thin-layer chromatography. The synthesized bionanoscale particles showed a maximum absorption at 385 nm in the visible region. Atomic Force Microscopy investigation of the silver bionanoparticles showed that they ranged in size from 250 nm to 680 nm. The work analyzed the antimicrobial efficacy of the silver bionanoparticles against various multi-drug-resistant clinical isolates. The study emphasizes the applicability of synthesizing metallic nanostructures and understanding the biochemical and molecular mechanisms of nanoparticle formation by the cell filtrate, in order to achieve better control over the size and polydispersity of the nanoparticles. This would help to develop nanomedicines against various multi-drug-resistant human pathogens.

Keywords: Bionanoparticles, UV-visible spectroscopy, Atomic Force Microscopy, extracellular synthesis, multi drug resistant, antimicrobial activity, nanomedicine.

1256 A Linear Regression Model for Estimating Anxiety Index Using Wide Area Frontal Lobe Brain Blood Volume

Authors: Takashi Kaburagi, Masashi Takenaka, Yosuke Kurihara, Takashi Matsumoto

Abstract:

Major depressive disorder (MDD) is one of the most common mental illnesses today. It is believed to be caused by a combination of several factors, including stress. Stress can be quantitatively evaluated using the State-Trait Anxiety Inventory (STAI), one of the best indices for evaluating anxiety. Although STAI scores are widely used in applications ranging from clinical diagnosis to basic research, the scores are calculated from a self-reported questionnaire. An objective evaluation is required because a subject may intentionally change his or her answers if multiple tests are carried out. In this article, we present a modified index called the “multi-channel Laterality Index at Rest (mc-LIR)”, computed by recording brain activity from a wider area of the frontal lobe using multi-channel functional near-infrared spectroscopy (fNIRS). The presented index measures multiple positions near Fpz, as defined by the international 10-20 positioning system. Using 24 subjects, the dependence on the number of measuring points used to calculate the mc-LIR and its correlation coefficients with the STAI scores are reported. Furthermore, a simple linear regression was performed to estimate the STAI scores from the mc-LIR, and the cross-validation error is also reported. The experimental results show that using multiple positions near Fpz improves the correlation coefficients and the estimation compared with using only two positions.

Keywords: Stress, functional near-infrared spectroscopy, frontal lobe, state-trait anxiety inventory score.

1255 Red Diode Laser in the Treatment of Epidermal Diseases in PDT

Authors: Farhad H. Mustafa, Mohamad S. Jaafar, Asaad H. Ismail, Ahamad F. Omar, Zahra A. Timimi, Hend A. A. Houssein

Abstract:

The absorption of laser light in the skin during irradiation is a critical issue in medical treatment applications: delivering the correct amount of laser light is a critical element in photodynamic therapy (PDT). Too much laser light can damage skin tissue, while too little cannot drive the PDT procedure in the skin. Knowledge of the skin-tone-dependent distribution of 635 nm radiation and its penetration depth in skin is therefore an important precondition for investigating laser-induced effects in PDT for epidermal diseases (psoriasis). The aim of this work was to estimate the optimum effect of a diode laser (635 nm) in the treatment of epidermal diseases in skin of different colors, and thereby to improve the safety of laser PDT in the treatment of epidermal diseases. The Advanced System Analytical Program (ASAP), a new approach to investigating PDT as a function of the optical properties of different skin colors, was used in the present work. A two-layered Realistic Skin Model (RSM), comprising the stratum corneum and the epidermis, was irradiated with a red laser (635 nm, 10 mW) to study fluence and absorbance at different penetration depths for various human skin colors. Several skin tones (very fair, fair, light, medium and dark) were used in the radiative transfer calculations. This investigation involved the principles of laser-tissue interaction when the skin is optically injected by a red laser diode. The results demonstrated that the power characteristics of a 635 nm laser diode can affect the treatment of epidermal disease in skin of various colors. The power absorption of the various skin types was recorded and analyzed in order to find the influence of melanin on PDT treatment of epidermal disease. The two-layered RSM shows that the change in penetration depth in the epidermal layer of colored skin has a large effect on the distribution of absorbed laser light in the skin; this is due to the variation of melanin concentration for each color.

Keywords: Photodynamic therapy, Realistic skin model, Laser, Light penetration, simulation, Optical properties of skin, Melanin.
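
The ASAP simulation itself is not reproducible from the abstract; as a loose, hedged illustration of why skin tone matters at 635 nm, simple Beer-Lambert attenuation through a two-layer model with a melanin-dependent epidermal absorption coefficient can be sketched (NumPy; all coefficients below are placeholder values, not the paper's optical properties):

```python
import numpy as np

# Placeholder absorption coefficients at 635 nm (1/mm), increasing with
# melanin content; real values would come from tissue-optics tables.
mu_a_epidermis = {'very fair': 0.15, 'fair': 0.25, 'light': 0.4,
                  'medium': 0.7, 'dark': 1.2}
mu_a_sc = 0.1                      # stratum corneum (1/mm), assumed
d_sc, d_epi = 0.02, 0.1            # layer thicknesses (mm), assumed

P0 = 10.0                          # incident laser power (mW)
for tone, mu_epi in mu_a_epidermis.items():
    # Beer-Lambert attenuation through the two layers in series
    P = P0 * np.exp(-mu_a_sc * d_sc) * np.exp(-mu_epi * d_epi)
    print(f'{tone:9s}: {P:.2f} mW reaches the base of the epidermis')
```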

1254 Gate Tunnel Current Calculation for NMOSFET Based on Deep Sub-Micron Effects

Authors: Ashwani K. Rana, Narottam Chand, Vinod Kapoor

Abstract:

Aggressive scaling of MOS devices requires the use of ultra-thin gate oxides to maintain reasonable short-channel behaviour and to take advantage of higher density, higher speed, lower cost, etc. Such thin oxides give rise to high electric fields, resulting in considerable gate tunneling current through the gate oxide in the nano regime. Consequently, accurate analysis of the gate tunneling current is very important, especially in the context of low-power applications. In this paper, a simple and efficient analytical model has been developed for the channel and source/drain overlap region gate tunneling current through the ultra-thin gate oxide of an n-channel MOSFET, including the inevitable deep sub-micron effects (DSME). The results obtained have been verified against simulated and reported experimental results for the purpose of validation. It is shown that the calculated tunnel current fits the measured one well over the entire oxide thickness range. The proposed model is simple enough to be used in circuit simulators. It is observed that neglecting the deep sub-micron effects may lead to large errors in the calculated gate tunneling current, that temperature has an almost negligible effect on the gate tunneling current, and that the gate tunneling current reduces as the gate oxide thickness increases. The impact of the source/drain overlap length on the gate tunneling current is also assessed.

Keywords: Gate tunneling current, analytical model, gate dielectrics, non uniform poly gate doping, MOSFET, fringing field effect and image charges.

1253 General Formula for Water Surface Profile over Side Weir in the Combined, Trapezoidal and Exponential, Channels

Authors: Abdulrahman Abdulrahman

Abstract:

A side weir is a hydraulic structure set into the side of a channel. This structure is used for water level control in channels, to divert flow from a main channel into a side channel when the water level in the main channel exceeds a specific limit, and as a storm overflow from urban sewerage systems. Computation of the water surface profile over a side weir is essential for determining its flow rate. Analytical solutions for the water surface profile along a rectangular side weir are available only for the special cases of rectangular and trapezoidal channels, assuming constant specific energy. In this paper, a rectangular side weir located in a combined (trapezoidal with exponential) channel is considered. By expanding binomial series of integer and fractional powers and using the reduction formula for cosine-function integrals, a general analytical formula was obtained for the water surface profile along a side weir in a combined (trapezoidal with exponential) channel. Since triangular, rectangular, trapezoidal and parabolic cross-sections are special cases of the combined cross-section, the derived formula is applicable to triangular, rectangular and trapezoidal cross-sections as an analytical solution, and to the parabolic cross-section as a semi-analytical solution with a maximum relative error smaller than 0.76%. The proposed solution should be a useful engineering tool for the evaluation and design of side weirs in open channels.

Keywords: Analytical solution, combined channel, exponential channel, side weirs, trapezoidal channel, water surface profile.

1252 Curbing Abuses of Legal Power in the Society

Authors: Tajudeen Ojo Ibraheem

Abstract:

In a world characterized by greed and the lust for power and its attendant trappings, abuse of legal power is nothing new to most of us. Legal abuses of power abound in all fields of human endeavour. Accounts of such abuses dominate the mass media, and for the average individual, no single day goes by without hearing about at least one such occurrence. This paper briefly looks at the meaning of legal power, what legal abuse is all about, its causes, and some of its manifestations in society. Its consequences will also be discussed, and some suggestions for reform will be made. In the course of the paper, references will be made to various jurisdictions around the world.

Keywords: Abuse, legal, power, society.

1251 Prediction of Optimum Cutting Parameters to obtain Desired Surface in Finish Pass end Milling of Aluminium Alloy with Carbide Tool using Artificial Neural Network

Authors: Anjan Kumar Kakati, M. Chandrasekaran, Amitava Mandal, Amit Kumar Singh

Abstract:

The end milling process is one of the common metal cutting operations used for machining parts in the manufacturing industry. It is usually performed at the final stage of manufacturing a product, and the surface roughness of the produced job plays an important role. In general, surface roughness affects the wear resistance, ductility, and tensile and fatigue strength of machined parts and cannot be neglected in design. In the present work, an experimental investigation of end milling of an aluminium alloy with a carbide tool is carried out, and the effect of different cutting parameters on the response is studied with three-dimensional surface plots. An artificial neural network (ANN) is used to establish the relationship between the surface roughness and the input cutting parameters (i.e., spindle speed, feed, and depth of cut). The MATLAB ANN toolbox, using the feed-forward back-propagation algorithm, is used for the modeling. A 3-12-1 network structure, having the minimum average prediction error, was found to be the best network architecture for predicting the surface roughness value, and the network predicts surface roughness well for unseen data. For a desired surface finish of the component to be produced, many different combinations of cutting parameters are available; the optimum cutting parameters for obtaining the desired surface finish while maximizing tool life are predicted. The methodology is demonstrated, a number of problems are solved, and the algorithm is coded in MATLAB®.

Keywords: End milling, Surface roughness, Neural networks.

1250 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is considered the most accurate method for predicting the seismic demand of structures. On the other hand, its main deficiency is the computational time required to achieve a result. When applied within an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of seismic analysis makes the optimization algorithms more practical. Approximate methods inevitably produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of the individual modes. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of a main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure (for which the modal displacements have been calculated). Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated through the application of three types of earthquakes (classified by the time of peak ground acceleration).

Keywords: Time history dynamic analysis, basic modal displacement, earthquake induced demands, shear steel structures.
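
For context, the SRSS combination mentioned in the abstract is simple to state; a minimal sketch with invented modal peak responses (NumPy):

```python
import numpy as np

# Hypothetical peak displacement of one storey in each of 4 modes (cm)
modal_peaks = np.array([3.2, 1.1, 0.4, 0.15])

# SRSS: combine peak modal responses by the square root of the sum of
# squares (appropriate when the modal frequencies are well separated)
srss = np.sqrt(np.sum(modal_peaks ** 2))
print(f'SRSS estimate of peak response: {srss:.2f} cm')
```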

1249 Biomolecules Based Microarray for Screening Human Endothelial Cells Behavior

Authors: Adel Dalilottojari, Bahman Delalat, Frances J. Harding, Michaelia P. Cockshell, Claudine S. Bonder, Nicolas H. Voelcker

Abstract:

Endothelial Progenitor Cell (EPC) based therapies continue to be of interest for treating ischemic events, based on their proven role in promoting blood vessel formation and thus tissue re-vascularisation. Current strategies for the production of clinical-grade EPCs require the in vitro isolation of EPCs from peripheral blood, followed by cell expansion, to provide sufficient quantities of EPCs for cell therapy. This study aims to examine the use of different biomolecules to significantly improve the current strategy of EPC capture and expansion on collagen type I (Col I). In this study, four different biomolecules were immobilised on a surface and then investigated for their capacity to support EPC capture and proliferation. First, a cell microarray platform was fabricated by coating a glass surface with an epoxy-functional allyl glycidyl ether plasma polymer (AGEpp) to mediate biomolecule binding. The four candidate biomolecules tested were Col I, collagen type II (Col II), collagen type IV (Col IV) and vascular endothelial growth factor A (VEGF-A), which were arrayed on the epoxy-functionalised surface using a non-contact printer. The surrounding area between the printed biomolecules was passivated with polyethylene glycol-bisamine (A-PEG) to prevent non-specific cell attachment. EPCs were seeded onto the microarray platform and cell numbers quantified after 1 h (to determine capture) and 72 h (to determine proliferation). All of the extracellular matrix (ECM) biomolecules printed demonstrated an ability to capture EPCs within 1 h of cell seeding, with Col II exhibiting the highest level of attachment. Interestingly, Col IV exhibited the highest increase in EPC expansion after 72 h when compared to Col I, Col II and VEGF-A. These results provide information for significant improvement in the capture and expansion of human EPCs for further application.

Keywords: Cardiovascular disease, cell microarray platform, cell therapy, endothelial progenitor cells, high throughput screening.

1248 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG

Authors: Mamta Garg

Abstract:

While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression, images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error and the peak signal-to-noise ratio. The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least “important” information. In JPEG, both color components are normally downsampled simultaneously, but in this paper we compare the results when compression is performed by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when the chrominance blue is downsampled than when the chrominance red is downsampled, whereas the peak signal-to-noise ratio is higher when the chrominance red is downsampled. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter, and show that the image is compressed with barely any visual difference under both methods.

Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio & Compression Ratio.
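
A hedged sketch of the experiment described (downsampling only Cb versus only Cr and comparing PSNR), using OpenCV's YCrCb conversion on the image file named in the abstract; this reproduces the idea with simple resampling, not the paper's exact JPEG pipeline:

```python
import cv2
import numpy as np

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

img = cv2.imread('hats.jpg')                      # image named in the paper
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)    # channels: Y, Cr, Cb

for ch, name in [(1, 'Cr downsampled'), (2, 'Cb downsampled')]:
    work = ycrcb.copy()
    c = work[:, :, ch]
    # 2x2 downsample then upsample one chroma channel (low-pass effect)
    small = cv2.resize(c, (c.shape[1] // 2, c.shape[0] // 2),
                       interpolation=cv2.INTER_AREA)
    work[:, :, ch] = cv2.resize(small, (c.shape[1], c.shape[0]),
                                interpolation=cv2.INTER_LINEAR)
    rgb = cv2.cvtColor(work, cv2.COLOR_YCrCb2BGR)
    print(f'{name}: PSNR = {psnr(img, rgb):.2f} dB')
```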

1247 Optimal Image Representation for Linear Canonical Transform Multiplexing

Authors: Navdeep Goel, Salvador Gabarda

Abstract:

Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth; image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted over a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients have subsequently been encoded and handled to generate chirps, at a target rate of about two chirps per 4 × 4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.

Keywords: Chirp signals, Image multiplexing, Image transformation, Linear canonical transform, Polynomial approximation.
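
One of the block approximations the abstract compares is the SVD; a minimal sketch of truncating each 4 × 4 block to its leading singular component (NumPy, synthetic stand-in image) gives a feel for the coefficient savings:

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.arange(64, dtype=np.float64)
img = np.add.outer(x, x) + rng.normal(0, 2, (64, 64))   # smooth stand-in image

def approx_block(block, k=1):
    """Keep the k leading singular triplets of a 4x4 block:
    k*(4+4) numbers (scaled U column + V row) instead of 16 pixels."""
    U, s, Vt = np.linalg.svd(block)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

out = np.empty_like(img)
for i in range(0, 64, 4):
    for j in range(0, 64, 4):
        out[i:i+4, j:j+4] = approx_block(img[i:i+4, j:j+4])

mse = np.mean((img - out) ** 2)
print(f'rank-1 blocks, PSNR = {10 * np.log10(255.0 ** 2 / mse):.2f} dB')
```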

1246 An Optimal Unsupervised Satellite image Segmentation Approach Based on Pearson System and k-Means Clustering Algorithm Initialization

Authors: Ahmed Rekik, Mourad Zribi, Ahmed Ben Hamida, Mohamed Benjelloun

Abstract:

This paper presents an optimal, unsupervised satellite image segmentation approach based on the Pearson system and k-means clustering algorithm initialization. The method can be considered original in that it utilises the k-means clustering algorithm for an optimal initialisation of the number of image classes on the one hand, and exploits the Pearson system for an optimal assignment of a statistical distribution to each considered class on the other. Satellite image exploitation requires the use of different approaches, especially those founded on the principle of unsupervised statistical segmentation. Such approaches necessitate the definition of several parameters, such as the number of image classes, the estimation of class variables, and generalised mixture distributions. The use of statistical image attributes gives convincing and promising results, on the condition of having an optimal initialisation step with appropriate assignment of statistical distributions. The Pearson system, associated with a k-means clustering algorithm and the Stochastic Expectation-Maximization (SEM) algorithm, can be adapted to this problem. For each image class, the Pearson system assigns one distribution type according to different parameters, especially the skewness β1 and the kurtosis β2. The adapted algorithms (the k-means clustering algorithm, the SEM algorithm and the Pearson system algorithm) are then applied to the satellite image segmentation problem. The efficiency of these combined algorithms was validated firstly with the Mean Quadratic Error (MQE) and secondly by visual inspection, along with several comparisons of the resulting unsupervised image segmentations.

Keywords: Unsupervised classification, Pearson system, Satellite image, Segmentation.

1245 Investigation of Improved Chaotic Signal Tracking by Echo State Neural Networks and Multilayer Perceptron via Training of Extended Kalman Filter Approach

Authors: Farhad Asadi, S. Hossein Sadati

Abstract:

This paper presents the prediction performance of a feedforward Multilayer Perceptron (MLP) and Echo State Networks (ESN) trained with an extended Kalman filter. Feedforward neural networks and ESNs are powerful neural networks that can track and predict nonlinear signals. However, their tracking performance depends on the specific signals or data sets, with a risk of instability accompanied by large errors. In this study, we explore this process by applying different network sizes and leaking rates for the prediction of nonlinear or chaotic signals in MLP neural networks. Major problems of ESN training, such as initialization of the network and improvement of the prediction performance, are tackled. The influence of the coefficient of the activation function in the hidden layer and of other key parameters is investigated through simulation results. An extended Kalman filter is employed in order to improve the sequential and regulation learning rate of the feedforward neural networks. This training approach has vital features for training a network whose signals have a chaotic or non-stationary sequential pattern. Examination of the results showed minimization of the variance in each step of the computation, and hence smoothing of the tracking, indicating satisfactory tracking characteristics under certain conditions. In addition, the simulation results confirmed satisfactory performance of both neural networks, with modified parameterization, in tracking the nonlinear signals.

Keywords: Feedforward neural networks, nonlinear signal prediction, echo state neural networks approach, leaking rates, capacity of neural networks.
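
The abstract's key ESN ingredients (random reservoir, leaking rate) are standard; a minimal leaky-ESN state update on a toy chaotic series is sketched below. NumPy only; the hyperparameters are illustrative, and the paper's EKF-based training is replaced here by the common ridge-regression readout, so this is a sketch of the architecture rather than the authors' training scheme.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy chaotic signal: the logistic map
u = np.empty(2000); u[0] = 0.4
for t in range(1999):
    u[t + 1] = 3.9 * u[t] * (1 - u[t])

N, a = 100, 0.3                       # reservoir size, leaking rate
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

# Collect reservoir states with the leaky update:
# x(t+1) = (1-a) x(t) + a tanh(W_in u(t) + W x(t))
X = np.zeros((2000, N)); x = np.zeros(N)
for t in range(1999):
    x = (1 - a) * x + a * np.tanh(W_in[:, 0] * u[t] + W @ x)
    X[t + 1] = x

# Ridge-regression readout: X[t] was driven by u[t-1], so predicting
# u[t] from X[t] is one-step-ahead prediction.
train = slice(100, 1500)
A, y = X[train], u[train]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y)

pred = X[1500:] @ W_out
print('test RMSE:', np.sqrt(np.mean((pred - u[1500:]) ** 2)))
```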

1244 Spatial Query Localization Method in Limited Reference Point Environment

Authors: Victor Krebss

Abstract:

The task of object localization is one of the major challenges in creating intelligent transportation. Unfortunately, in densely built-up urban areas, localization based on GPS alone produces large errors, or simply becomes impossible. New opportunities for localization arise from the rapidly emerging concept of wireless ad-hoc networks. Such a network allows the potential distances between objects to be estimated by measuring received signal levels, and a distance graph to be constructed in which the nodes are the objects to be localized and the edges are estimates of the distances between pairs of nodes. Given the known coordinates of individual nodes (anchors), it is possible to determine the location of all (or part) of the remaining nodes of the graph. Moreover, a road map available in digital format can provide the localization routines with valuable additional information to narrow the node location search. However, despite the abundance of well-known algorithms for solving the localization problem and significant research effort, there are still many issues that are currently addressed only partially. In this paper, we propose a localization approach based on mapping the graph distances onto digital road map data. In effect, the problem is reduced to embedding the distance graph into a graph representing the geo-location data of the area. This makes it possible to localize objects, in some cases even if only one reference point is available. We propose a simple embedding algorithm and a sample implementation as spatial queries over sensor network data stored in a spatial database, allowing effective use of spatial indexing, optimized spatial search routines and geometry functions.

Keywords: Intelligent Transportation System, Sensor Network, Localization, Spatial Query, GIS, Graph Embedding.
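
The distance-graph idea can be illustrated in miniature: given estimated distances to a few nodes with known coordinates (anchors), a node's position follows from nonlinear least squares. A hedged NumPy/SciPy sketch of this building block (the anchor layout and noise model are invented; the paper's map-embedding step, which can work with fewer anchors, is not reproduced here):

```python
import numpy as np
from scipy.optimize import least_squares

# Anchor nodes with known coordinates (e.g. from GPS or the road map)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])
true_pos = np.array([40.0, 30.0])

rng = np.random.default_rng(7)
# Distances estimated from received signal level, with noise
dists = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 1, 3)

def residuals(p):
    return np.linalg.norm(anchors - p, axis=1) - dists

est = least_squares(residuals, x0=np.array([50.0, 40.0])).x
print('estimated position:', np.round(est, 1))
```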
