Search results for: imputation techniques
1867 Efficient Microspore Isolation Methods for High Yield Embryoids and Regeneration in Rice (Oryza sativa L.)
Authors: S. M. Shahinul Islam, Israt Ara, Narendra Tuteja, Sreeramanan Subramaniam
Abstract:
Through anther and microspore culture methods, complete homozygous plants can be produced within a year, as compared to the long inbreeding method. Isolated microspore culture is one of the most important techniques for the rapid development of haploid plants. The efficiency of this method is influenced by several factors such as culture conditions, growth regulators, plant media, pretreatments, physical and growth conditions of the donor plants, pollen isolation procedure, etc. The main purpose of this study was to improve the isolated microspore culture protocol in order to increase the efficiency of embryoid formation and regeneration and to reduce albinism. In this study we tested three different microspore isolation procedures, by glass rod, by homogenizer and by blending, and evaluated their efficiency for gametic embryogenesis. Three types of media were used: washing, pre-culture and induction. The induction medium was AMC (modified MS) supplemented with 2,4-D (2.5 mg/l), kinetin (0.5 mg/l), a higher amount of D-mannitol (90 g/l) instead of sucrose, and two types of amino acids (L-glutamine and L-serine). Of the three microspore isolation procedures, homogenizer isolation (P4) showed the best performance in ELS induction (177%) and green plantlets (104%) compared with the other techniques. Albinism occurred in all cases, but microspore isolation from excised anthers by glass rod and by homogenizer yielded fewer albino plants, which was also one of the important findings of this study.
Keywords: Androgenesis, pretreatment, microspore culture, regeneration, albino plants, Oryza sativa.
1866 Topographical Image Transference Compatibility Generated Through Moiré Technique Applying Parametrical Softwares of Computer Assisted Design
Authors: M. V. G. Silva, J. Gazzola, I. M. Dal Fabbro, A. C. L. Lino
Abstract:
Computer aided design counts on the support of parametric software in the design of machine components as well as of any other pieces of interest. The complexity of the element under study sometimes poses certain difficulties to computer design, or might even generate mistakes in the final body conception. Reverse engineering techniques are based on the transformation of images of an already conceived body into a matrix of points which can be visualized by the design software. The literature exhibits several techniques to obtain the dimensional fields of machine components, such as contact instruments (coordinate measuring machines, MMC), calipers and optical methods such as laser scanning, holography and moiré methods. The objective of this research work was to analyze the moiré technique as an instrument of reverse engineering, applied to bodies of non-complex geometry such as simple solid figures, creating matrices of points. These matrices were forwarded to a parametric software named SolidWorks to generate the virtual object. Volume data obtained by mechanical means, i.e., by caliper, the volume obtained through the moiré method and the volume generated by the SolidWorks software were compared and found to be in close agreement. This research work suggests the application of phase shifting moiré methods as an instrument of reverse engineering, serving also to support farm machinery element designs.
Keywords: Reverse engineering, Moiré technique, three dimensional image generation.
1865 Detection of Action Potentials in the Presence of Noise Using Phase-Space Techniques
Authors: Christopher Paterson, Richard Curry, Alan Purvis, Simon Johnson
Abstract:
Emerging bio-engineering fields such as brain computer interfaces, neuroprosthesis devices and the modeling and simulation of neural networks have led to increased research activity in algorithms for the detection, isolation and classification of action potentials (AP) from noisy data trains. Current techniques in the field of 'unsupervised no-prior knowledge' biosignal processing include energy operators, wavelet detection and adaptive thresholding. These tend to bias towards larger AP waveforms, APs may be missed due to deviations in spike shape and frequency, and correlated noise spectra can cause false detection. Such algorithms also tend to suffer from large computational expense. A new signal detection technique based upon the ideas of phase-space diagrams and trajectories is proposed, based upon the use of a delayed copy of the AP to highlight discontinuities relative to background noise. This idea has been used to create algorithms that are computationally inexpensive and address the above problems. Distinct APs have been picked out and manually classified from real physiological data recorded from a cockroach. To facilitate testing of the new technique, an Auto Regressive Moving Average (ARMA) noise model has been constructed based upon the background noise of the recordings. Along with the AP classification, this model enables the generation of realistic neuronal data sets at arbitrary signal to noise ratio (SNR).
Keywords: Action potential detection, Low SNR, Phase space diagrams/trajectories, Unsupervised/no-prior knowledge.
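The delayed-copy idea in this abstract is essentially a two-dimensional delay embedding: plotting the signal against a lagged copy of itself makes fast transients such as spikes leave the compact cloud traced out by background noise. A minimal sketch of that construction follows; the lag of 3 samples and the radius-based threshold are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def delay_embed(x, lag=3):
    """Return the 2-D phase-space trajectory (x[n], x[n-lag])."""
    return np.column_stack((x[lag:], x[:-lag]))

def detect_spikes(x, lag=3, k=4.0):
    """Flag samples whose phase-space radius lies k robust standard
    deviations beyond the median radius of the background cloud."""
    traj = delay_embed(x, lag)
    r = np.linalg.norm(traj - traj.mean(axis=0), axis=1)
    sigma = np.median(np.abs(r - np.median(r))) / 0.6745  # robust spread
    spike_rows = np.where(r > np.median(r) + k * sigma)[0]
    return spike_rows + lag  # shift back to indices in the raw signal

# toy usage: Gaussian noise with two embedded "action potentials"
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 2000)
x[500:505] += [0, 6, 9, 5, 2]
x[1500:1505] += [0, 5, 8, 4, 1]
print(detect_spikes(x)[:10])   # indices cluster near 500 and 1500
```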
1864 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique
Authors: Satyasen Panda, Urmila Bhanja
Abstract:
In this paper, we have presented and analyzed three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with the NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3D-OCDMA system is based on the measurement of signal to noise ratio (SNR), BER and eye diagrams for different numbers of simultaneous users. In the analysis, various types of noise and multiple access interference (MAI) effects were also considered. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with minimum BER as compared to the OR and AND subtraction techniques. The received optical power is also measured at various levels of BER to analyze the effect of attenuation.
Keywords: Cross correlation, three-dimensional optical code division multiple access, spectral amplitude coding optical code division multiple access, multiple access interference, phase induced intensity noise, three-dimensional modified quadratic congruence/modified prime code.
1863 Cyber Fraud Schemes: Modus Operandi, Tools and Techniques, and the Role of European Legislation as a Defense Strategy
Authors: Papathanasiou Anastasios, Liontos George, Liagkou Vasiliki, Glavas Euripides
Abstract:
The purpose of this paper is to describe the growing problem of various cyber fraud schemes that exist on the internet and are currently among the most prevalent. The main focus of this paper is to provide a detailed description of the modus operandi, tools, and techniques utilized in four basic typologies of cyber frauds: Business Email Compromise (BEC) attacks, investment fraud, romance scams, and online sales fraud. The paper aims to shed light on the methods employed by cybercriminals in perpetrating these types of fraud, as well as the strategies they use to deceive and victimize individuals and businesses on the internet. Furthermore, this study outlines defense strategies intended to tackle the issue head-on, with a particular emphasis on the crucial role played by European legislation. European legislation has proactively adapted to the evolving landscape of cyber fraud, striving to enhance cybersecurity awareness, bolster user education, and implement advanced technical controls to mitigate associated risks. The paper evaluates the advantages and innovations brought about by the European legislation while also acknowledging potential flaws that cybercriminals might exploit. As a result, recommendations for refining the legislation are offered in this study in order to better address this pressing issue.
Keywords: Business email compromise, cybercrime, European legislation, investment fraud, Network and Information Security, online sales fraud, romance scams.
1862 Participation in IAEA Proficiency Test to Analyse Cobalt, Strontium and Caesium in Seawater Using Direct Counting and Radiochemical Techniques
Authors: S. Visetpotjanakit, C. Khrautongkieo
Abstract:
Radiation monitoring in the environment and foodstuffs is one of the main responsibilities of the Office of Atoms for Peace (OAP) as the nuclear regulatory body of Thailand. The main goal of the OAP is to assure the safety of the Thai people and environment from any radiological incidents. Various radioanalytical methods have been developed to monitor radiation and radionuclides in environmental and foodstuff samples. To validate our analytical performance, several proficiency test exercises from the International Atomic Energy Agency (IAEA) have been performed. Here, the results of a proficiency test exercise referred to as the Proficiency Test for Tritium, Cobalt, Strontium and Caesium Isotopes in Seawater 2017 (IAEA-RML-2017-01) are presented. All radionuclides except ³H were analysed using various radioanalytical methods, i.e. direct gamma-ray counting for determining ⁶⁰Co, ¹³⁴Cs and ¹³⁷Cs, and developed radiochemical techniques for analysing ¹³⁴Cs and ¹³⁷Cs using an AMP pre-concentration technique and ⁹⁰Sr using a di-(2-ethylhexyl) phosphoric acid (HDEHP) liquid extraction technique. The analysis results were submitted to the IAEA. All results passed the IAEA criteria, i.e. accuracy, precision and trueness, and obtained ‘Accepted’ status. These results confirm the quality of the data produced by the OAP environmental radiation laboratory for monitoring radiation in the environment.
Keywords: International atomic energy agency, proficiency test, radiation monitoring, seawater.
1861 Surface Topography Assessment Techniques based on an In-process Monitoring Approach of Tool Wear and Cutting Force Signature
Authors: A. M. Alaskari, S. E. Oraby
Abstract:
The quality of a machined surface is becoming more and more important to justify the increasing demands of sophisticated component performance, longevity, and reliability. Usually, any machining operation leaves its own characteristic evidence on the machined surface in the form of finely spaced micro irregularities (surface roughness) left by the associated indeterministic characteristics of the different elements of the system: tool, machine, workpart and cutting parameters. However, one of the most influential sources affecting surface roughness in machining is the instantaneous state of the tool edge. The main objective of the current work is to relate the in-process immeasurable cutting edge deformation and surface roughness to more reliable, easy-to-measure force signals, using robust non-linear time-dependent regression modeling techniques. Time-dependent modeling is beneficial when modern machining systems, such as adaptive control techniques, are considered, where the state of the machined surface and the health of the cutting edge are monitored, assessed and controlled online using real-time information provided by the variability encountered in the measured force signals. Correlation between wear propagation and roughness variation is developed throughout the different edge lifetimes. The surface roughness is further evaluated in the light of the variation in both the static and the dynamic force signals. Consistent correlation is found between surface roughness variation and tool wear progress within its initial and constant regions. In the first few seconds of cutting, the expected and well known trend of the effect of the cutting parameters is observed. Surface roughness is positively influenced by the level of the feed rate and negatively by the cutting speed. As cutting continues, roughness is affected, to different extents, by the rather localized wear modes either on the tool nose or on its flank areas. Moreover, it seems that roughness varies as the wear attitude transfers from one mode to another and, in general, it is shown that roughness improves as wear increases, but with possible corresponding workpart dimensional inaccuracy. The dynamic force signals are found reasonably sensitive for simulating either the progressive or the random modes of tool edge deformation. While the frictional force components, feeding and radial, are found informative regarding progressive wear modes, the vertical (power) component is found to be a more representative carrier of system instability resulting from the edge's random deformation.
Keywords: Dynamic force signals, surface roughness (finish), tool wear and deformation, tool wear modes (nose, flank)
1860 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work we use the Discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical data obtained from finite element simulations. The outcomes of this work will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, in both large scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest in the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. In using numerical computational techniques, it is not necessary to over-simplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space. These numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time Proper Orthogonal Decomposition transform is a powerful tool for processing such databases of the dynamics. It will be used to study the coupled dynamics of thin-walled basic structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.
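For readers unfamiliar with the method, POD modes of a simulation database are commonly extracted as the left singular vectors of a snapshot matrix whose columns are the field sampled at successive time steps. A minimal sketch under that snapshot-matrix convention (the array shapes, mesh and test signal are illustrative, not from the paper):

```python
import numpy as np

def pod_modes(snapshots, n_modes=4):
    """snapshots: (n_dof, n_time) array, one column per time step.
    Returns POD modes, modal energy fractions and time coefficients."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                       # fluctuations about the mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)               # fraction of energy per mode
    coeffs = np.diag(s) @ Vt                    # temporal coefficients
    return U[:, :n_modes], energy[:n_modes], coeffs[:n_modes]

# toy usage: two coupled standing waves on a beam-like 1-D mesh
x = np.linspace(0, 1, 200)[:, None]
t = np.linspace(0, 10, 500)[None, :]
data = np.sin(2*np.pi*x) * np.cos(3*t) + 0.3*np.sin(4*np.pi*x) * np.cos(7*t)
modes, energy, a = pod_modes(data)
print(energy)   # the first two modes capture nearly all the energy
```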
1859 Modeling and Simulation of Ship Structures Using Finite Element Method
Authors: Javid Iqbal, Zhu Shifan
Abstract:
The development in the construction of unconventional ships and the implementation of lightweight materials have given a large impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques for ship structures using the FE method under complex boundary conditions which are difficult to analyze by existing Ship Classification Societies' rules. During operation, all ships experience complex loading conditions. These loads are generally categorized into thermal loads, linear static loads, and dynamic and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in obtaining the dynamic stability of the ship. The FE method has developed better techniques for the calculation of natural frequencies and the different mode shapes of a ship structure, to avoid resonance both globally and locally. There has been a lot of development towards the ideal design in the ship industry over the past few years, solving complex engineering problems by employing the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general application. Historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
Keywords: Dynamic analysis, finite element methods, ship structure, vibration analysis.
1858 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies
Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey
Abstract:
Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the waters of the Earth become increasingly difficult to determine because of additional uncertainty related to anthropogenic emissions. The worldwide observed changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although the effect of change in climate on hydrology provides a general picture of possible hydrological global change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for using statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as an input source to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
Keywords: Climate Change, Downscaling, GCM, RCM.
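Statistical downscaling as defined in this abstract, a fitted mapping from GCM-scale predictors to a station-scale predictand, can be sketched in a few lines with an ordinary regression. The scikit-learn call and the synthetic predictor/predictand arrays below are illustrative assumptions, since the review does not prescribe a specific regression model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# illustrative GCM-scale predictors for 40 years of one season,
# e.g. mean sea-level pressure, 850 hPa humidity, 2 m temperature
predictors = rng.normal(size=(40, 3))

# station-scale predictand (e.g. seasonal rainfall anomaly), synthetic here
predictand = 3.0*predictors[:, 0] - 1.5*predictors[:, 1] + rng.normal(0, 0.5, 40)

# calibrate the transfer function on the historical period
model = LinearRegression().fit(predictors[:30], predictand[:30])

# validate on held-out years, then drive it with future GCM output
print("validation R^2:", model.score(predictors[30:], predictand[30:]))
future_predictors = rng.normal(size=(10, 3))   # from a climate scenario run
print("downscaled projection:", model.predict(future_predictors)[:3])
```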
1857 Image Magnification Using Adaptive Interpolation by Pixel Level Data-Dependent Geometrical Shapes
Authors: Muhammad Sajjad, Naveed Khattak, Noman Jafri
Abstract:
The world has entered the 21st century. The technology of computer graphics and digital cameras is prevalent. High resolution displays and printers are available. Therefore high resolution images are needed in order to produce high quality display images and high quality prints. However, since high resolution images are not usually provided, there is a need to magnify the original images. One common difficulty in previous magnification techniques is that of preserving details, i.e. edges, while at the same time smoothing the data so as not to introduce spurious artefacts. A definitive solution to this is still an open issue. In this paper an image magnification technique using adaptive interpolation by pixel level data-dependent geometrical shapes is proposed that tries to take into account information about the edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation region in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside the interpolation region while preserving the sharp luminance variations and smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method completely outperforms nearest neighbour (NN), bilinear (BL) and bicubic (BC) interpolation. The quantitative results are competitive and consistent with NN, BL, BC and the others.
Keywords: Adaptive, digital image processing, image magnification, interpolation, geometrical shapes, qualitative & quantitative analysis.
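As a point of reference for the baselines named above, the bilinear (BL) upscaling they are compared against can be written directly: each output pixel is a distance-weighted average of its four nearest source pixels. This is a generic sketch of that baseline, not the authors' adaptive shape-based method:

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Magnify a grayscale image by `factor` using bilinear interpolation."""
    h, w = img.shape
    H, W = h * factor, w * factor
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # weighted sum of the four neighbouring source pixels
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx * img[np.ix_(y0, x1)]
            + wy * (1 - wx) * img[np.ix_(y1, x0)]
            + wy * wx * img[np.ix_(y1, x1)])

tile = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_upscale(tile, 2).shape)   # (8, 8)
```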
1856 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation
Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar
Abstract:
The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given better detailed results as compared to experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. Erosion wear behaviour due to a solid–liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict the solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with standard k-ε turbulence and a discrete phase model, and evaluates the erosion wear rate with the velocity varying from 2-4 m/s. The results show that the velocity of the solid-liquid mixture is a highly dominating parameter as compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and gravitational effect on the solid particulates, which leads to high erosion at the bottom side of the pipeline.
Keywords: Computational fluid dynamics, erosion, slurry transportation, k-ε model.
1855 Teacher Training Course: Conflict Resolution through Mediation
Authors: Csilla M. Szabó
Abstract:
In Hungary, society has changed a lot in the past 25 years, and these changes can be detected in educational situations as well. The number and the intensity of conflicts have increased in most fields of life, as well as at schools. Teachers have difficulties handling school conflicts. What is more, the new net generation, generation Z, has values and behavioural patterns different from those of the previous one, which might generate more serious conflicts at school, especially with teachers who were mainly socialised in a traditional teacher-student relationship. In Hungary, bill CCIV of 2011 declared the foundation of Institutes of Teacher Training in higher education institutions. One of the tasks of the Institutes is to survey the competences and needs of teachers working in public education and to provide further trainings and services for them according to their needs and requirements. This job is supported by the Social Renewal Operative Program 4.1.2.B. The professors of a college composed a questionnaire and surveyed the needs and the requirements of teachers working in the region. Based on the results, the professors of the Institute of Teacher Training decided to meet the requirements of teachers and to launch short teacher further training courses in spring 2015. One of the courses is going to focus on school conflict management through mediation. The aim of the pilot course is to provide conflict management techniques for teachers and to present different mediation techniques to them. The theoretical part of the course (5 hours) will enable participants to understand the main points and the advantages of mediation, while the practical part (10 hours) will involve teachers in role plays to learn how to cope with conflict situations by applying mediation. We hope that if conflicts can be reduced, it will influence the school atmosphere in a positive way and the teaching-learning process can be more successful and effective.
Keywords: Conflict resolution, generation Z, mediation, teacher training.
1854 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry
Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich
Abstract:
The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. In order to assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, using only the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a beam of light from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and the full width at half maximum (FWHM) was measured, using the Amcap and ImageJ software. The FWHM values were organized into three groups. It was observed that the average obtained from the FWHMs of group A shows an almost linear behavior; therefore it is probable that the exposure time is not relevant when the oil is kept at constant temperature. Group B exhibits a slight exponential trend when the temperature rises between 373 K and 393 K. Results of the Student's t-test show a probability of 95% (0.05) of the existence of variation in the molecular composition of both samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
Keywords: Food industry, interferometric, oils, quality control.
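The FWHM statistic used throughout this abstract is simple to compute from a sampled intensity profile: find the peak, then the two crossings at half that height. A generic sketch follows (a Gaussian test profile stands in for a real fringe profile; linear interpolation at the crossings is an implementation choice, not taken from the paper):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile y(x)."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # linear interpolation at the two half-height crossings
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left

# test on a Gaussian profile: analytic FWHM = 2*sqrt(2*ln 2)*sigma
x = np.linspace(-5, 5, 2001)
sigma = 0.8
y = np.exp(-x**2 / (2 * sigma**2))
print(fwhm(x, y), 2 * np.sqrt(2 * np.log(2)) * sigma)   # both ~1.884
```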
1853 Critical Approach to Define the Architectural Structure of a Health Prototype in a Rural Area of Brazil
Authors: Domenico Chizzoniti, Monica Moscatelli, Letizia Cattani, Luca Preis
Abstract:
A primary healthcare facility in developing countries should be a multifunctional space able to respond to different requirements: flexibility, modularity, aggregation and reversibility. These basic features could be better satisfied if applied to an architectural artifact that complies with the typological, figurative and constructive aspects of the context in which it is located. Therefore, the purpose of this paper is to identify a procedure that can define the figurative aspects of the architectural structure of the health prototype for the marginal areas of developing countries through a critical approach. The application context is the rural areas of the Northeast of Bahia in Brazil. The prototype should be located in the rural district of Quingoma, in the municipality of Lauro de Freitas, a particular place where there is still a cultural fusion of black and indigenous populations. Based on the historical analysis of settlement strategies and architectural structures in spaces of public interest or collective use, this paper aims to provide a procedure able to identify the categories and rules underlying the typological and figurative aspects, in order to detect significant and generalizable elements, as well as materials and constructive techniques typically adopted in the rural areas of Brazil. The object of this work is therefore not only the recovery of certain constructive approaches but also the development of a procedure that integrates the requirements of the primary healthcare prototype with its surrounding economic, social, cultural, settlement and figurative conditions.
1852 Methods and Algorithms of Ensuring Data Privacy in AI-Based Healthcare Systems and Technologies
Authors: Omar Farshad Jeelani, Makaire Njie, Viktoriia M. Korzhuk
Abstract:
Recently, the application of AI-powered algorithms in healthcare has continued to flourish. In particular, access to healthcare information, including patient health history, diagnostic data, and PII (Personally Identifiable Information), is paramount in the delivery of efficient patient outcomes. However, as the exchange of healthcare information between patients and healthcare providers through AI-powered solutions increases, protecting a person’s information and privacy has become even more important. Arguably, the increased adoption of healthcare AI has resulted in a significant concentration on the security risks and protection measures for the security and privacy of healthcare data, leading to escalated analyses and enforcement. Since these challenges are brought about by the use of AI-based healthcare solutions to manage healthcare data, AI-based data protection measures are used to resolve the underlying problems. Consequently, such projects propose AI-powered safeguards and policies/laws to protect the privacy of healthcare data. This project presents the best-in-class techniques used to preserve the data privacy of AI-powered healthcare applications. Popular privacy-protecting methods like federated learning, cryptography techniques, differential privacy methods, and hybrid methods are discussed, together with potential cyber threats, data security concerns, and prospects. The project also discusses some of the relevant data security acts/laws that govern the collection, storage, and processing of healthcare data to guarantee that owners’ privacy is preserved. This inquiry discusses various gaps and uncertainties associated with healthcare AI data collection procedures, and identifies potential correction/mitigation measures.
Keywords: Data privacy, artificial intelligence, healthcare AI, data sharing, healthcare organizations.
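Of the privacy-preserving families listed in this abstract, differential privacy is the easiest to illustrate concretely: a calibrated amount of Laplace noise is added to an aggregate query so that any single patient's presence changes the output distribution only slightly. The sketch below shows the standard Laplace mechanism for a counting query; the epsilon value and the toy cohort are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def laplace_count(values, predicate, epsilon):
    """Differentially private count: a counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon gives epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# toy cohort: ages of 1000 synthetic patients (no real data involved)
ages = rng.integers(18, 95, size=1000)

# how many patients are over 65? (released three times at epsilon = 0.5)
for _ in range(3):
    print(round(laplace_count(ages, lambda a: a > 65, epsilon=0.5), 1))
```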
1851 Distributed Cost-Based Scheduling in Cloud Computing Environment
Authors: Rupali, Anil Kumar Jaiswal
Abstract:
Cloud computing can be defined as one of the prominent technologies that lets a user change, configure and access services online. It can be said that this is a prototype of computing that helps in saving the cost and time of a user. Practically, the use of cloud computing can be found in various fields like education, health, banking, etc. Cloud computing is an internet-dependent technology; thus it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling in a cloud computing environment plays a vital role, as to achieve maximum utilization and user satisfaction cloud providers need to schedule resources effectively. Job scheduling for cloud computing is analyzed in the following work. To implement and recreate the task calculation and distributed scheduling methods, CloudSim 3.0.3 is utilized. This research work discusses job scheduling for a distributed processing environment, and by exploring this issue we find that it works with minimum time and less cost. In this work two load balancing techniques have been employed, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.
Keywords: Physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model.
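The Round Robin policy used here as the comparison baseline is easy to state in code: incoming jobs are handed to virtual machines in a fixed cyclic order, regardless of current load. A minimal sketch of that baseline follows (the VM list, MIPS ratings and job cost fields are illustrative, not CloudSim's actual API):

```python
from itertools import cycle

# toy data centre: two fast VMs and one slow one (million instructions/s)
MIPS = {"vm0": 1000, "vm1": 1000, "vm2": 250}

def round_robin_schedule(jobs, vms):
    """Assign each job to the next VM in cyclic order and
    accumulate a naive completion time per VM."""
    assignment, busy_until = {}, {vm: 0.0 for vm in vms}
    ring = cycle(vms)
    for job_id, length_mi in jobs:          # job length in million instructions
        vm = next(ring)
        busy_until[vm] += length_mi / MIPS[vm]
        assignment[job_id] = vm
    return assignment, busy_until

jobs = [(f"job{i}", 500 + 100 * i) for i in range(6)]
plan, load = round_robin_schedule(jobs, list(MIPS))
print(plan)
print(load)   # the slow VM becomes the bottleneck under round robin
```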
1850 An Approach to Image Extraction and Accurate Skin Detection from Web Pages
Authors: Moheb R. Girgis, Tarek M. Mahmoud, Tarek Abd-El-Hafeez
Abstract:
This paper proposes a system to extract images from web pages and then detect the skin color regions of these images. As part of the proposed system, using the BandObject control, we built a toolbar named 'Filter Tool Bar (FTB)' by modifying the Pavel Zolnikov implementation. The Yahoo! team provides the Yahoo! SDK API, which also supports image search and is really useful. In the proposed system, we introduced three new methods for extracting images from web pages (after loading the web page by using the proposed FTB, before loading the web page physically from the localhost, and before loading the web page from any server). These methods overcome the drawback of the regular expressions method for extracting images suggested by Ilan Assayag. The second part of the proposed system is concerned with the detection of the skin color regions of the extracted images. We studied two well-known skin color detection techniques: the first is based on the RGB color space and the second on the YUV and YIQ color spaces. We modified the second technique to overcome its failure in detecting skin in images with complex backgrounds by using the saturation parameter, in order to obtain accurate skin detection results. The performance evaluation of the efficiency of the proposed system in extracting images before and after loading the web page from the localhost or any server, in terms of the number of extracted images, is presented. Finally, the results of comparing the two skin detection techniques in terms of the number of pixels detected are presented.
Keywords: Browser Helper Object, Color spaces, Image and URL extraction, Skin detection, Web Browser events.
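RGB-space skin classifiers of the kind evaluated here are typically a handful of per-pixel inequalities. The sketch below implements one widely cited explicit RGB rule (Peer et al.) as a stand-in; the exact thresholds used by the paper's first technique are not given in the abstract, so these constants are assumptions:

```python
import numpy as np

def skin_mask_rgb(img):
    """Per-pixel skin test on an (H, W, 3) uint8 RGB image using the
    classic explicit daylight thresholds of Peer et al."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    spread = img.max(axis=-1).astype(int) - img.min(axis=-1)
    return ((r > 95) & (g > 40) & (b > 20)
            & (spread > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

# toy 1x2 image: one skin-like pixel, one blue pixel
img = np.array([[[200, 120, 90], [30, 60, 200]]], dtype=np.uint8)
print(skin_mask_rgb(img))   # [[ True False]]
```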
1849 Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques
Authors: Mrinmoy Dhara, Vivek K. Sengar, Shovan L. Chattoraj, Soumiya Bhattacharjee
Abstract:
Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Space-borne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level 1R) datasets have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER-derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members, and classified mineral maps have been produced using the Spectral Angle Mapper (SAM) method. The results were validated with the geological map of the area, which shows positive agreement with the image processing outputs. Thus, this study concludes that band ratios and image processing in combination play a significant role in the demarcation of alteration zones, which may provide pathfinders for mineral prospecting studies.
Keywords: Advanced space-borne thermal emission and reflection radiometer, ASTER, Hyperion, Band ratios, Alteration zones, spectral angle mapper.
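The Spectral Angle Mapper used for the classified mineral maps treats each pixel spectrum and each end-member as vectors and compares them by angle, which makes the match insensitive to illumination scaling. A generic sketch of the SAM rule (the three-band toy cube and the angle threshold are illustrative assumptions):

```python
import numpy as np

def spectral_angle(pixel, endmember):
    """Angle in radians between a pixel spectrum and an end-member."""
    cosang = pixel @ endmember / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def sam_classify(cube, endmembers, max_angle=0.10):
    """Label each pixel with the nearest end-member (by angle),
    or -1 if no end-member lies within max_angle radians."""
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    angles = np.array([[spectral_angle(p, e) for e in endmembers] for p in pixels])
    labels = angles.argmin(axis=1)
    labels[angles.min(axis=1) > max_angle] = -1
    return labels.reshape(h, w)

# toy 2x2 "cube" with 3 bands and two end-member spectra
cube = np.array([[[0.2, 0.5, 0.9], [0.4, 1.0, 1.8]],
                 [[0.9, 0.5, 0.2], [0.1, 0.1, 0.1]]])
ems = np.array([[0.2, 0.5, 0.9], [0.9, 0.5, 0.2]])
print(sam_classify(cube, ems))   # scaled pixel still matches; flat pixel -> -1
```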
1848 Real-time Haptic Modeling and Simulation for Prosthetic Insertion
Authors: Catherine A. Todd, Fazel Naghdy
Abstract:
In this work a surgical simulator is produced which enables a trainee otologist to conduct a virtual, real-time prosthetic insertion. The simulator provides the Ear, Nose and Throat surgeon with real-time visual and haptic responses during virtual cochlear implantation into a 3D model of the human Scala Tympani (ST). The parametric model is derived from measured data as published in the literature and accounts for human morphological variance, such as differences in cochlear shape, enabling patient-specific pre-operative assessment. Haptic modeling techniques use real physical data and insertion force measurements to develop a force model which mimics the physical behavior of an implant as it collides with the ST walls during an insertion. Output force profiles are acquired from the insertion studies conducted in this work to validate the haptic model. The simulator provides the user with real-time, quantitative insertion force information and the associated electrode position as the user inserts the virtual implant into the ST model. The information provided by this study may also be of use to implant manufacturers for design enhancements, as well as for training specialists in optimal force administration using the simulator. The paper reports on the methods for anatomical modeling and haptic algorithm development, with focus on simulator design, development, optimization and validation. The techniques may be transferable to other medical applications that involve prosthetic device insertions where the user's vision is obstructed.
Keywords: Haptic modeling, medical device insertion, real-time visualization of prosthetic implantation, surgical simulation.
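Haptic rendering of a collision of this kind is most often done with a penalty-based force model: force proportional to penetration depth plus a damping term. The abstract derives its force model from measured insertion data, so the linear spring-damper below is only a generic stand-in with assumed stiffness and damping constants:

```python
def contact_force(penetration, velocity, k=400.0, c=2.5):
    """Penalty-based contact force (N) for a probe penetrating a wall.
    penetration: depth into the wall in metres (0 if no contact)
    velocity: penetration rate in m/s; k, c: assumed stiffness/damping."""
    if penetration <= 0.0:
        return 0.0
    f = k * penetration + c * velocity
    return max(f, 0.0)   # the wall can only push back, never pull

# 1 mm penetration while advancing at 5 mm/s
print(contact_force(0.001, 0.005))   # ~0.41 N resisting the insertion
```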
1847 Interest of the Sequences Pseudo Noises Codes of Different Lengths for the Reduction from the Interference between Users of CDMA Network
Authors: Nerguè Kassahan Kone, Souleymane Oumtanaga
Abstract:
The third generation (3G) of cellular systems adopted spread spectrum as the solution for data transmission in the physical layer. Contrary to IS-95 or CDMAOne systems (spread spectrum systems of the preceding generation), the new standard, called the Universal Mobile Telecommunications System (UMTS), uses long codes in the downlink. The system is conceived for vocal communication and data transmission. In particular, the downlink is very important because of the asymmetrical demand for data, i.e., more downloading towards the mobiles than towards the base station. Moreover, the UMTS uses for the downlink an orthogonal spreading with a variable spreading factor (OVSF, Orthogonal Variable Spreading Factor). This characteristic makes it possible to increase the data rate of one or more users by reducing their spreading factor without changing the spreading factor of other users. In the current UMTS standard, two techniques to increase the performance of the downlink were proposed: transmit antenna diversity and space-time codes. These two techniques only fight fading. The receiver proposed for the mobile station is the RAKE, but one can imagine a more sophisticated receiver, able to reduce the interference between users and the impact of coloured noise and narrow-band interference. In this context, where the users have synchronized long codes with variable spreading factors and the mobile is ignorant of the other active codes/users, the use of pseudo-noise code sequences of different lengths is presented as one of the most appropriate solutions.
Keywords: DS-CDMA, multiple access interference, signal / (interference + noise) ratio.
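Pseudo-noise sequences of the kind discussed here are classically produced by linear feedback shift registers: an m-sequence of length 2^n - 1 comes from an n-stage register with primitive feedback taps, so registers of different sizes directly give codes of different lengths. A generic LFSR sketch (the tap choices below are standard primitive polynomials, not values from the paper):

```python
def lfsr_sequence(taps, n_stages, seed=1):
    """Generate one period (2**n_stages - 1 chips) of an m-sequence
    from a Fibonacci LFSR with the given feedback taps (1-indexed)."""
    state = [(seed >> i) & 1 for i in range(n_stages)]
    out = []
    for _ in range(2**n_stages - 1):
        out.append(state[-1])           # output the last stage
        fb = 0
        for t in taps:                  # XOR of the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]       # shift the feedback bit in
    return out

# two PN codes of different lengths: 7 chips (x^3+x^2+1) and 31 chips (x^5+x^3+1)
print(lfsr_sequence(taps=(3, 2), n_stages=3))        # period-7 m-sequence
print(len(lfsr_sequence(taps=(5, 3), n_stages=5)))   # 31
```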
1846 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)
Authors: Ahmad Kayvani Fard, Yehia Manawi
Abstract:
Qatar’s primary source of fresh water is seawater desalination. Among the major processes that are commercially available on the market, the most common large scale techniques are Multi-Stage Flash distillation (MSF), Multi-Effect Distillation (MED), and Reverse Osmosis (RO). Although commonly used, these three processes are highly expensive due to high energy input requirements and high operating costs, allied with the maintenance and stress induced on the systems in harsh alkaline media. Besides the cost, the environmental footprint of these desalination techniques is significant, from damage to the marine eco-system, to huge land use, to the discharge of tons of greenhouse gases and a huge carbon footprint. A less energy-consuming technique based on membrane separation that is being sought to reduce both the carbon footprint and operating costs is membrane distillation (MD). Having emerged in the 1960s, MD is an alternative technology for water desalination that has attracted more attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD compared to other commercially available technologies (MSF and MED) and especially RO are: reduction of membrane and module stress due to the absence of trans-membrane pressure; less impact of contaminant fouling on the distillate due to the transfer of only water vapor; utilization of low grade or waste heat from the oil and gas industries to heat the feed up to the required temperature difference across the membrane; superior water quality; and relatively lower capital and operating cost. To achieve the objective of this study, a state of the art flat-sheet cross-flow DCMD bench scale unit was designed, commissioned, and tested. The objective of this study is to analyze the characteristics and morphology of the membrane suitable for DCMD, through SEM imaging and contact angle measurement, and to study the water quality of the distillate produced by the DCMD bench scale unit. Comparison with available literature data is undertaken where appropriate, and the laboratory data are used to compare the DCMD distillate quality with that of other desalination techniques and standards. Membrane SEM analysis showed that the PTFE membrane used for the study has a contact angle of 127º, with a highly porous surface supported by a less porous, larger pore size PP membrane. The study of the effect of feed solution (salinity) and temperature on the water quality of the distillate, based on ICP and IC analysis, showed that at any salinity and different feed temperatures (up to 70ºC) the electric conductivity of the distillate is less than 5 μS/cm with 99.99% salt rejection; the process proved to be feasible and effective, capable of consistently producing high quality distillate from very high salinity feed solutions (i.e. 100,000 mg/L TDS), even with a substantial quality difference compared to other desalination methods such as RO and MSF.
Keywords: Membrane Distillation, Waste Heat, Seawater Desalination, Membrane, Freshwater, Direct Contact Membrane Distillation
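The 99.99% salt rejection quoted above follows directly from the standard rejection definition, R = (1 - C_distillate / C_feed) x 100. A small sketch of that arithmetic under the abstract's own extreme feed case (the distillate TDS value is back-calculated for illustration, not reported in the abstract):

```python
def salt_rejection(c_feed_mgL, c_distillate_mgL):
    """Percent salt rejection from feed and distillate concentrations."""
    return (1.0 - c_distillate_mgL / c_feed_mgL) * 100.0

c_feed = 100_000.0   # mg/L TDS, the abstract's highest-salinity feed
c_dist = 10.0        # assumed distillate TDS consistent with ~99.99%
print(f"rejection = {salt_rejection(c_feed, c_dist):.2f}%")   # 99.99%
```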
1845 Evaluation of Efficient CSI Based Channel Feedback Techniques for Adaptive MIMO-OFDM Systems
Authors: Muhammad Rehan Khalid, Muhammad Haroon Siddiqui, Danish Ilyas
Abstract:
This paper explores the implementation of adaptive coding and modulation schemes for Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) feedback systems. Adaptive coding and modulation enables robust and spectrally-efficient transmission over time-varying channels. The basic premise is to estimate the channel at the receiver and feed this estimate back to the transmitter, so that the transmission scheme can be adapted relative to the channel characteristics. Two types of codebook-based channel feedback techniques are used in this work. The long-term and short-term CSI at the transmitter is used for efficient channel utilization. OFDM is a powerful technique employed in communication systems suffering from frequency selectivity. Combined with multiple antennas at the transmitter and receiver, OFDM proves to be robust against delay spread. Moreover, it leads to significant data rates with improved bit error performance relative to links having only a single antenna at both the transmitter and receiver. The coded modulation increases the effective transmit power relative to uncoded variable-rate variable-power MQAM performance for the MIMO-OFDM feedback system. Hence the proposed arrangement becomes an attractive approach for achieving enhanced spectral efficiency and improved error rate performance for next generation high speed wireless communication systems.
Keywords: Adaptive Coded Modulation, MQAM, MIMO, OFDM, Codebooks, Feedback.
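Adaptive coding and modulation of the variable-rate variable-power MQAM type reduces, per feedback update, to a table lookup: pick the densest constellation whose SNR requirement the reported channel still meets. The sketch below uses illustrative SNR switching thresholds; the actual thresholds depend on the code and target error rate and are not taken from the paper:

```python
# illustrative switching thresholds (dB) for MQAM at some fixed target BER
MODES = [      # (min SNR dB, constellation, bits per symbol)
    (22.0, "64-QAM", 6),
    (16.0, "16-QAM", 4),
    (10.0, "QPSK",   2),
    ( 4.0, "BPSK",   1),
]

def select_mode(snr_db):
    """Return the highest-rate mode supported by the fed-back SNR."""
    for threshold, name, bits in MODES:
        if snr_db >= threshold:
            return name, bits
    return "no transmission", 0

for snr in (25.0, 14.5, 7.0, 1.0):
    print(snr, "->", select_mode(snr))
```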
1844 The Role of Vibro-Stone Column for Enhancing the Soft Soil Properties
Authors: Mohsen Ramezan Shirazi, Orod Zarrin, Komeil Valipourian
Abstract:
This study investigated the behavior of soft soils improved through the vibro replacement technique, considering their settlements and consolidation rates, the applicability of this technique to various types of soils, and settlement and bearing capacity calculations.
Keywords: Bearing capacity, expansive clay, stone columns, vibro techniques.
1843 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform
Authors: Omaima N. Ahmad AL-Allaf
Abstract:
Over communication networks, images can be easily copied and distributed in an illegal way, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to authority problems. Digital image watermarking techniques are used to hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and to maintain data quality. We discuss in this paper two approaches to image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with the two approaches separately in the embedding process for the cover image transformation. Both PSO and GA are based on the correlation coefficient to detect the high-energy coefficients of the original image in which to hide the watermark bits. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. From the experiments, the PSO approach got better results, with PSNR equal to 53 and MSE equal to 0.0039, whereas the GA approach got PSNR equal to 50.5 and MSE equal to 0.0048 when using a population size of 100, 150 iterations and 3×3 blocks. According to the results, we can note that a small block size can affect the quality of PSO/GA-based image watermarking because a small block size can increase the search area of the watermarked image. Better PSO results were obtained when using a swarm size of 100.
Keywords: Image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform.
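The two fidelity metrics reported above are standard: MSE is the mean squared pixel difference between the original and watermarked image, and PSNR = 10 log10(MAX² / MSE). A minimal sketch of both computations for 8-bit images (the toy arrays are illustrative; the paper's own PSNR/MSE pairs will also depend on its normalization conventions):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val**2 / m)

# toy check: original vs. a slightly perturbed "watermarked" copy
rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = np.clip(original.astype(int) + rng.integers(-2, 3, (64, 64)), 0, 255)
print(f"MSE = {mse(original, marked):.4f}, PSNR = {psnr(original, marked):.2f} dB")
```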
1842 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis
Authors: A.K. Tangirala, S. Babji
Abstract:
In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used in determining the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure is prescribed based on the notion of a sparseness index to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis.
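In the NMF variant of this formulation, the magnitude spectra of the measurements are stacked into a non-negative matrix and factored so that one factor holds candidate source spectra and the other the per-measurement mixing weights. A minimal sketch with scikit-learn (the two synthetic oscillation frequencies and the rank choice are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
n_samples, fs = 4096, 1.0
t = np.arange(n_samples) / fs

# six measurements mixing two plant-wide oscillations (plus noise)
s1 = np.sin(2 * np.pi * 0.02 * t)            # source 1: 0.02 Hz
s2 = np.sin(2 * np.pi * 0.07 * t)            # source 2: 0.07 Hz
mix = rng.uniform(0, 1, size=(6, 2))
signals = mix @ np.vstack([s1, s2]) + 0.05 * rng.normal(size=(6, n_samples))

# non-negative data matrix: one magnitude spectrum per row
spectra = np.abs(np.fft.rfft(signals, axis=1))

model = NMF(n_components=2, init="nndsvda", max_iter=500)
W = model.fit_transform(spectra)     # mixing weights (6 x 2)
H = model.components_                # candidate source spectra (2 x n_freq)
freqs = np.fft.rfftfreq(n_samples, d=1/fs)
print(freqs[H.argmax(axis=1)])       # peaks near 0.02 and 0.07 Hz
```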
1841 Performance Analysis of MC-SS for the Indoor BPLC Systems
Authors: Justinian Anatory
Abstract:
Power-line networks are a promising infrastructure for broadband service provision to end users. However, the network performance is affected by stochastic channel changes due to load impedances, the number of branches and branched line lengths. It has been proposed that multi-carrier modulation techniques such as orthogonal frequency division multiplexing (OFDM), Multi-Carrier Spread Spectrum (MC-SS) and wavelet OFDM can be used in such an environment. This paper investigates the performance of different indoor power-line network topologies that use the MC-SS modulation scheme. It is observed that when a branch is added in the link between the sending and receiving ends of an indoor channel, an average power loss of 2.5 dB is found. In addition, when the branch is added at a node, an average power loss of 1 dB is found. Additionally, when the terminal impedances of the branch change from the line characteristic impedance to either higher or lower values, the channel performance improved tremendously. For example, changing the terminal load from the characteristic impedance (85 Ω) to 5 Ω decreased the signal to noise ratio (SNR) required to attain the same performance from 37 dB to 24 dB. Also, changing the terminal load from the channel characteristic impedance (85 Ω) to a very high impedance (1600 Ω) decreased the SNR required to maintain the same performance from 37 dB to 23 dB. The results lead to the conclusion that MC-SS performs better than OFDM techniques in all aspects, especially when the channel is terminated in either higher or lower impedances.
Keywords: Communication channel model, broadband power-line communication, branched network, OFDM, delay spread, MC-SS, impulsive noise, load impedance.
1840 A Data Hiding Model with High Security Features Combining Finite State Machines and PMM method
Authors: Souvik Bhattacharyya, Gautam Sanyal
Abstract:
Recent years have witnessed the rapid development of the Internet and telecommunication techniques. Information security is becoming more and more important. Applications such as covert communication, copyright protection, etc., stimulate the research of information hiding techniques. Traditionally, encryption is used to realize communication security. However, important information is not protected once decoded. Steganography is the art and science of communicating in a way which hides the existence of the communication. Important information is first hidden in host data, such as digital images, video or audio, and then transmitted secretly to the receiver. In this paper a data hiding model with high security features, combining cryptography using a finite state sequential machine with an image based steganography technique, is proposed for communicating information more securely between two locations. The authors incorporated the idea of a secret key for authentication at both ends in order to achieve a high level of security. Before the embedding operation, the secret information is encrypted with the help of a finite-state sequential machine and segmented into different parts. The cover image is also segmented into different objects through normalized cut. Each part of the encoded secret information is embedded with the help of a novel image steganographic method (PMM) on different cuts of the cover image to form different stego objects. Finally the stego image is formed by combining the different stego objects and transmitted to the receiver side. At the receiving end the opposite processes run to recover the original secret message.
Keywords: Cover Image, Finite state sequential machine, Mealy machine, Pixel Mapping Method (PMM), Stego Image, NCUT.
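The cryptographic half of this model, a finite-state sequential (Mealy) machine, can be illustrated with a tiny keyed machine whose output bit depends on the current state and the input bit; the receiver holding the same machine and key inverts the stream. The state and output tables below are an invented toy example, not the machine from the paper:

```python
# toy keyed Mealy machine over bits: next_state[state][input_bit]
NEXT = {0: (1, 2), 1: (3, 0), 2: (0, 3), 3: (2, 1)}
OUT  = {0: 0, 1: 1, 2: 1, 3: 0}     # state-dependent key bit

def mealy_encrypt(bits, key_state):
    """Output = input XOR state key bit; transitions driven by plaintext."""
    state, cipher = key_state, []
    for b in bits:
        cipher.append(b ^ OUT[state])
        state = NEXT[state][b]
    return cipher

def mealy_decrypt(cipher, key_state):
    """Recover each plaintext bit first, then replay the same transition."""
    state, plain = key_state, []
    for c in cipher:
        b = c ^ OUT[state]
        plain.append(b)
        state = NEXT[state][b]
    return plain

msg = [1, 0, 1, 1, 0, 0, 1]
enc = mealy_encrypt(msg, key_state=2)   # key = shared initial state
assert mealy_decrypt(enc, key_state=2) == msg
print(enc)
```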
1839 Implementation of the Quality Management System and Development of Organizational Learning: Case of Three Small and Medium-Sized Enterprises in Morocco
Authors: Abdelghani Boudiaf
Abstract:
The profusion of studies relating to the concept of organizational learning shows the importance that has been given to this concept in the management sciences. A few years ago, companies leaned towards ISO 9001 certification; this requires the implementation of a quality management system (QMS). For this objective to be achieved, companies must have a set of skills, which pushes them to develop learning through continuous training. The results of empirical research have shown that implementation of the QMS in a company promotes the development of learning. It should also be noted that several types of learning are developed in this way. Given that the nature of skills development is normative in the context of the quality approach, companies are obliged to qualify and improve the skills of their human resources. Continuous training is the keystone for developing the necessary learning. To carry out continuous training, companies need to be able to identify their real needs by developing training plans based on well-defined engineering. The training process obviously goes through several stages. Initially, training has a general aspect, that is to say, it focuses on topics and actions of a general nature. Subsequently, it is carried out in a more targeted and precise way to accompany the evolution of the QMS and also to implement the changes decided on each time (change of working method, change of practices, change of objectives, change of mentality, etc.). To answer our research question we opted for the method of qualitative research. It should be noted that the case study method crosses several data collection techniques to explain and understand a phenomenon. Three company cases were studied as part of this research work using different data collection techniques related to this method.
Keywords: Changing mentalities, continuous training, organizational learning, quality management system, skills development.
1838 Evaluation of Urban Development Proposals: An ANP Approach
Authors: T. Gómez-Navarro, M. García-Melón, D. Díaz-Martín, S. Acuna-Dutra
Abstract:
In this paper a new approach to prioritizing urban planning projects in an efficient and reliable way is presented. It is based on environmental pressure indices and multicriteria decision methods. The paper introduces a rigorous method, of acceptable complexity, for rank ordering urban development proposals according to their environmental pressure. The technique combines the use of Environmental Pressure Indicators, the aggregation of the indicators into an Environmental Pressure Index by means of the Analytic Network Process (ANP) method, and the interpretation of the information obtained from the experts during the decision-making process. The ANP method allows the aggregation of the experts' judgments on each of the indicators into one Environmental Pressure Index. In addition, ANP is based on utility ratio functions, which are the most appropriate for the analysis of uncertain data, like experts' estimations. Finally, unlike other multicriteria techniques, ANP allows the decision problem to be modelled using the relationships among dependent criteria. The method has been applied to the proposal for the urban development of La Carlota airport in Caracas (Venezuela). The Venezuelan Government would like to see a recreational project developed on the abandoned area that would mean a significant improvement for the capital. There are currently three options on the table which are under evaluation: a Health Club, a Residential area and a Theme Park. The participating experts agreed that the method proposed in this paper is useful and an improvement over traditional techniques such as environmental impact studies, life-cycle analysis, etc. They find the results obtained coherent, the process sufficiently rigorous and precise, and the use of resources significantly less than in other methods.
Keywords: Environmental pressure indicators, multicriteria decision analysis, analytic network process.
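At the core of ANP (as of AHP) is the derivation of a priority vector from a pairwise comparison matrix, conventionally as its principal right eigenvector; in an Environmental Pressure Index these priorities weight the indicator scores of each proposal. A minimal sketch of that aggregation step (the 3x3 comparison matrix and the indicator scores are invented for illustration; a full ANP supermatrix with dependence among criteria is not shown):

```python
import numpy as np

def priority_vector(pairwise):
    """Principal eigenvector of a positive reciprocal comparison matrix,
    normalized to sum to 1 (the AHP/ANP local priority step)."""
    vals, vecs = np.linalg.eig(pairwise)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# experts' pairwise comparisons of three environmental pressure indicators
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = priority_vector(A)

# indicator scores (0-1, higher = more pressure) for the three proposals
scores = {"Health Club": [0.2, 0.5, 0.3],
          "Residential": [0.7, 0.6, 0.4],
          "Theme Park":  [0.5, 0.8, 0.9]}
for name, s in scores.items():
    print(name, round(float(w @ np.array(s)), 3))   # Environmental Pressure Index
```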