Search results for: continuous time domain estimation
7369 Estimation of Human Absorbed Dose Using Compartmental Model
Authors: M. Mousavi-Daramoroudi, H. Yousefnia, F. Abbasi-Davani, S. Zolghadri
Abstract:
Dosimetry is an indispensable and valuable factor in patient treatment planning to minimize the absorbed dose in vital tissues. In this study, a compartmental model was used to estimate the human absorbed dose of 177Lu-DOTATOC from biodistribution data in wild-type rats. For this purpose, 177Lu-DOTATOC was prepared under optimized conditions and its biodistribution was studied in male Syrian rats for up to 168 h. The compartmental model was applied to mathematically describe the drug behaviour in the tissues at different times. Dosimetric estimation of the complex was performed using the radiation absorbed dose assessment resource (RADAR). The biodistribution data showed high accumulation in the adrenal glands and pancreas, the major expression sites for the somatostatin receptor (SSTR). While the kidneys, as the major route of excretion, receive 0.037 mSv/MBq, the pancreas and adrenal glands receive 0.039 and 0.028 mSv/MBq, respectively. By using this method, the number of accumulated-activity data points was increased and further information on tissue uptake was obtained, which leads to improved precision in the dosimetric calculations.
Keywords: Compartmental modeling, human absorbed dose, 177Lu-DOTATOC, Syrian rats.
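As a rough illustration of the compartmental approach described above (not the authors' actual model or data), a two-compartment system can be fitted to organ time-activity measurements with SciPy; the rate constants, sampling times and activity values below are hypothetical.

```python
# Illustrative sketch only: fit a simple two-compartment model to time-activity data.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def two_compartment(y, t, k12, k21, k10):
    """dA/dt for central (a1) and peripheral (a2) compartments."""
    a1, a2 = y
    return [-(k12 + k10) * a1 + k21 * a2, k12 * a1 - k21 * a2]

def peripheral_activity(t, k12, k21, k10):
    """Peripheral-compartment activity for a unit bolus in the central compartment at t = 0."""
    t_full = np.concatenate(([0.0], t))
    sol = odeint(two_compartment, [1.0, 0.0], t_full, args=(k12, k21, k10))
    return sol[1:, 1]

# Hypothetical organ uptake values at the sampling times (h) used in such studies.
t_obs = np.array([0.5, 1, 2, 4, 24, 48, 168], dtype=float)
a_obs = np.array([0.02, 0.05, 0.08, 0.09, 0.06, 0.04, 0.01])

params, _ = curve_fit(peripheral_activity, t_obs, a_obs,
                      p0=[0.1, 0.05, 0.01], bounds=(0, np.inf))
print("k12, k21, k10 =", params)
# The fitted continuous curve can then be integrated to obtain residence times
# as input to RADAR/OLINDA-style dose calculations.
```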
7368 Classification of Extreme Ground-Level Ozone Based on Generalized Extreme Value Model for Air Monitoring Station
Authors: Siti Aisyah Zakaria, Nor Azrita Mohd Amin, Noor Fadhilah Ahmad Radi, Nasrul Hamidin
Abstract:
Higher ground-level ozone (GLO) concentrations adversely affect human health, vegetation and activities in the ecosystem. In Malaysia, most analyses of GLO concentration are carried out using the average value, which refers to the centre of the distribution, to make a prediction or estimation. However, analysis that focuses on the higher or extreme values of GLO concentration is rarely explored. Hence, the objective of this study is to classify the tail behaviour of GLO using the generalized extreme value (GEV) distribution and to estimate the return level using the corresponding member (Gumbel, Weibull, or Frechet) of the GEV family. The results show that the Weibull distribution, also known as a short-tailed distribution and considered to have less extreme behaviour, is the best-fitted distribution for four selected air monitoring stations in Peninsular Malaysia, namely Larkin, Pelabuhan Kelang, Shah Alam, and Tanjung Malim, while the Gumbel distribution, considered a medium-tailed distribution, is the best-fitted distribution for the Nilai station. The return level of GLO concentration at the Shah Alam station is comparatively higher than at the other stations. Overall, return levels increase with increasing return periods, but the increment depends on the type of tail of the GEV distribution. We conduct this study using the maximum likelihood estimation (MLE) method to estimate the parameters at the selected stations in Peninsular Malaysia. Next, the fit of the block maxima series to the GEV distribution is validated using a probability plot, a quantile plot and the likelihood ratio test. A profile likelihood confidence interval is used to verify the type of GEV distribution. These results are important as a guide for early notification of future extreme ozone events.
Keywords: Extreme value theory, generalized extreme value distribution, ground-level ozone, return level.
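A minimal sketch of this GEV workflow with SciPy (the block maxima below are made-up placeholders, and SciPy's shape parameter c corresponds to -ξ in the usual GEV notation):

```python
# Fit a GEV distribution to block maxima by MLE and compute return levels.
import numpy as np
from scipy.stats import genextreme

annual_maxima = np.array([0.081, 0.095, 0.102, 0.088, 0.110, 0.097,
                          0.092, 0.105, 0.099, 0.087, 0.093, 0.108])  # ppm, hypothetical

c, loc, scale = genextreme.fit(annual_maxima)            # MLE estimates of the GEV parameters
print("shape (scipy c) = %.3f, location = %.3f, scale = %.3f" % (c, loc, scale))

# Return level for a T-year return period: the (1 - 1/T) quantile of the fitted GEV.
for T in (10, 50, 100):
    print("%3d-year return level: %.3f ppm" % (T, genextreme.ppf(1 - 1 / T, c, loc, scale)))
```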
7367 Local Spectrum Feature Extraction for Face Recognition
Authors: Muhammad Imran Ahmad, Ruzelita Ngadiran, Mohd Nazrin Md Isa, Nor Ashidi Mat Isa, Mohd Zaizu Ilyas, Raja Abdullah Raja Ahmad, Said Amirul Anwar Ab Hamid, Muzammil Jusoh
Abstract:
This paper presents two techniques, local feature extraction using the image spectrum and low-frequency spectrum modelling using Gaussian mixture models (GMMs), to capture the underlying statistical information and improve the performance of a face recognition system. Local spectrum features are extracted using overlapping sub-block windows mapped onto the face image. For each block, the spatial domain is transformed to the frequency domain using the DFT. The low-frequency coefficients are preserved by discarding the high-frequency coefficients through a rectangular mask applied to the spectrum of the facial image. The low-frequency information is non-Gaussian in the feature space, and by combining several Gaussian components with different statistical properties, the best feature representation can be modelled as a probability density function. The recognition process is performed using the maximum likelihood value computed from the pre-calculated GMM components. The method is tested on the FERET datasets and achieves a 92% recognition rate.
Keywords: Local features modelling, face recognition system, Gaussian mixture models.
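The sketch below outlines this pipeline (block DFT features under a rectangular low-frequency mask, one GMM per subject, classification by maximum likelihood). It uses scikit-learn and random placeholder images rather than FERET data; block sizes and component counts are assumptions.

```python
# Hedged sketch of local-spectrum features + GMM classification.
import numpy as np
from sklearn.mixture import GaussianMixture

def block_lowfreq_features(img, block=16, step=8, keep=4):
    """Magnitudes of the lowest (keep x keep) DFT coefficients of overlapping blocks."""
    feats = []
    for r in range(0, img.shape[0] - block + 1, step):
        for c in range(0, img.shape[1] - block + 1, step):
            spec = np.fft.fft2(img[r:r + block, c:c + block])
            feats.append(np.abs(spec[:keep, :keep]).ravel())   # rectangular low-pass mask
    return np.array(feats)

rng = np.random.default_rng(0)
train = {sid: block_lowfreq_features(rng.random((64, 64))) for sid in range(3)}  # fake subjects
models = {sid: GaussianMixture(n_components=4, random_state=0).fit(X)
          for sid, X in train.items()}

probe = block_lowfreq_features(rng.random((64, 64)))
scores = {sid: gmm.score(probe) for sid, gmm in models.items()}  # mean log-likelihood per model
print("identified subject:", max(scores, key=scores.get))
```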
7366 M-band Wavelet and Cosine Transform Based Watermark Algorithm Using Randomization and Principal Component Analysis
Authors: Tong Liu, Xuan Xu, Xiaodi Wang
Abstract:
Computational techniques derived from digital image processing are playing a significant role in the security and digital copyright protection of multimedia and visual arts, and operate entirely within the digital domain. This research presents a watermarking algorithm based on the discrete M-band wavelet transform (MWT) and the discrete cosine transform (DCT), incorporating principal component analysis (PCA). The proposed algorithm is expected to achieve higher perceptual transparency. Specifically, the developed watermarking scheme can successfully resist common signal processing attacks, such as geometric distortions and Gaussian noise. In addition, the proposed algorithm can be parameterized, resulting in greater security. To meet these requirements, the image is transformed by a combination of MWT and DCT. To improve the security further, we randomize the watermark image to create three code books. During watermark embedding, PCA is applied to the coefficients in the approximation sub-band. Finally, the first few principal components represent an excellent domain for inserting the watermark.
Keywords: Discrete M-band wavelet transform, randomized watermark, principal component analysis.
7365 Flood Predicting in Karkheh River Basin Using Stochastic ARIMA Model
Authors: Karim Hamidi Machekposhti, Hossein Sedghi, Abdolrasoul Telvari, Hossein Babazadeh
Abstract:
Floods have huge environmental and economic impacts; therefore, flood prediction receives a lot of attention. This study analysed the annual maximum streamflow (discharge) (AMS or AMD) of the Karkheh River in the Karkheh River Basin for flood prediction using an ARIMA model. For this purpose, we use the Box-Jenkins approach, which comprises a four-stage method: model identification, parameter estimation, diagnostic checking and forecasting (prediction). The main tools used for ARIMA modelling were the SAS and SPSS software packages. Model identification was done by visual inspection of the ACF and PACF. The SAS software computed the model parameters using the ML, CLS and ULS methods. The diagnostic checking tests, the AIC criterion and the residual ACF (RACF) and residual PACF (RPACF) graphs, were used to verify the selected model. In this study, the best ARIMA model for the annual maximum discharge (AMD) time series was ARIMA(4,1,1), with an AIC value of 88.87. The RACF and RPACF showed the residuals' independence. The model was then used to forecast AMD for 10 future years, demonstrating its ability to predict floods of the river under study in the Karkheh River Basin. Model accuracy was checked by comparing the predicted and observed series using the coefficient of determination (R²).
Keywords: Time series modelling, stochastic processes, ARIMA model, Karkheh River.
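For orientation, the same Box-Jenkins steps can be sketched in Python with statsmodels instead of SAS/SPSS; the annual-maximum series below is synthetic, and only the order (4,1,1) is taken from the abstract.

```python
# Fit an ARIMA(4,1,1) model to a synthetic annual maximum discharge series and forecast 10 years.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
amd = 500 + np.cumsum(rng.normal(0, 60, size=40))   # synthetic AMD series (m^3/s)

model = ARIMA(amd, order=(4, 1, 1))                 # the order selected in the paper
result = model.fit()
print("AIC:", result.aic)                           # compare candidate orders by AIC
print(result.forecast(steps=10))                    # forecast AMD for 10 future years
```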
7364 A Mathematical Model Approach Regarding the Children’s Height Development with Fractional Calculus
Authors: Nisa Özge Önal, Kamil Karaçuha, Göksu Hazar Erdinç, Banu Bahar Karaçuha, Ertuğrul Karaçuha
Abstract:
The study aims to use a mathematical approach based on fractional calculus, developed to continuously analyze the factors related to children’s height development. Tracking the development of a child is becoming increasingly important and meaningful. Knowing and determining the factors related to the physical development of a child at any desired time would provide better, more reliable and more accurate results for childcare. In this frame, seven height percentile curves (3rd, 10th, 25th, 50th, 75th, 90th, and 97th) for Turkey are used. By using discrete height data of children aged 0-18 years and the least squares method, a continuous curve valid for any time interval is developed. By doing so, at any desired instant, it is possible to find the percentile and location of the child in the percentile chart. Here, with the help of fractional calculus theory, a mathematical model is developed. The outcomes of the proposed approach are quite promising compared to the linear and polynomial methods. The approach also makes it possible to predict the expected height of a child.
Keywords: Children growth percentile, children physical development, fractional calculus, linear and polynomial model.
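As a point of reference for the polynomial baseline mentioned above (the fractional-order model itself is not reproduced), a continuous least-squares curve through discrete percentile data can be obtained as follows; the ages and heights are illustrative values only.

```python
# Sketch of a polynomial least-squares baseline: fit a continuous curve to discrete height data.
import numpy as np

age = np.array([0, 1, 2, 4, 6, 9, 12, 15, 18], dtype=float)        # years
height_p50 = np.array([50, 75, 87, 103, 116, 133, 149, 166, 176.0])  # cm, hypothetical percentile data

coeffs = np.polyfit(age, height_p50, deg=3)     # continuous cubic fit to the discrete points
curve = np.poly1d(coeffs)
print("height at 7.5 years ~ %.1f cm" % curve(7.5))  # evaluate at any desired instant
```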
7363 Modelling of Soil Erosion by Non Conventional Methods
Authors: Ganesh D. Kale, Sheela N. Vadsola
Abstract:
Soil erosion is one of the most serious problems faced at the global and local levels, so the planning of soil conservation measures has become a prominent item on the agenda of water basin managers. To plan soil conservation measures, information on soil erosion is essential. The Universal Soil Loss Equation (USLE), the Revised Universal Soil Loss Equation (RUSLE1 or RUSLE), the Modified Universal Soil Loss Equation (MUSLE), RUSLE 1.06, RUSLE 1.06c and RUSLE2 are the most widely used conventional erosion estimation methods. The essential drawback of the USLE and RUSLE1 equations is that they are based on average annual values of their parameters, so their applicability at small temporal scales is questionable. These equations also do not estimate runoff-generated soil erosion, so their applicability to estimating runoff-generated soil erosion is questionable. The data used in developing the USLE and RUSLE1 equations were plot data, so their application at larger spatial scales requires scale correction factors to be introduced. On the other hand, MUSLE is unsuitable for predicting the sediment yield of small and large events. Although the newer revised forms of USLE, such as RUSLE 1.06, RUSLE 1.06c and RUSLE2, are land-use independent and have removed almost all the drawbacks of the earlier versions such as USLE and RUSLE1, they are based on regional data from specific areas, and their applicability to other areas with different climate, soil and land use is questionable. These conventional equations are applicable to sheet and rill erosion and are unable to predict gully erosion and the spatial pattern of rills. Research has therefore focused on the development of non-conventional (other than conventional) methods of soil erosion estimation. When these non-conventional methods are combined with GIS and remote sensing (RS), they give the spatial distribution of soil erosion. In the present paper, a review of the literature on non-conventional methods of soil erosion estimation supported by GIS and RS is presented.
Keywords: Conventional methods, GIS, non-conventional methods, remote sensing, soil erosion modeling.
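For context, the USLE/RUSLE family referred to above estimates average annual soil loss as a product of factors; a standard statement of the equation is:

```latex
% Standard form of the Universal Soil Loss Equation (USLE/RUSLE)
A = R \cdot K \cdot L \cdot S \cdot C \cdot P
% A: average annual soil loss, R: rainfall erosivity, K: soil erodibility,
% L, S: slope length and steepness factors, C: cover-management factor, P: support practice factor
```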
7362 The Current Situation and Perspectives of Electricity Demand and Estimation of Carbon Dioxide Emissions and Efficiency
Abstract:
This article presents the current and future energy situation in Libya. The electric power efficiency and operating hours of the power plants are evaluated from 2005 to 2010, and the carbon dioxide emissions of most of the power plants are estimated. In 2005, the efficiency of the steam power plants was in the range of 20% to 28%, while the efficiency of the gas turbine power plants ranged between 9% and 25%, which can be considered low. However, efficiency improvements were clearly observed in some power plants from 2008 to 2010, especially in the North Benghazi and West Tripoli power plants; in fact, these power plants have been converted to combined cycle. The efficiency of the North Benghazi power plant increased from 25% to 46.6%, while in Tripoli it increased from 22% to 34%. On the other hand, no efficiency improvement was observed in the gas turbine power plants. Compared to the quantity of fuel used, the carbon dioxide emissions resulting from the electricity generation plants were very high. Finally, the energy demand was estimated in terms of the maximum load and the annual load factor (i.e., the ratio between the output power and the installed power).
Keywords: Power plant, Efficiency improvement, Carbon dioxide Emissions.
7361 Optimization of Real Time Measured Data Transmission, Given the Amount of Data Transmitted
Authors: Michal Kopcek, Tomas Skulavik, Michal Kebisek, Gabriela Krizanova
Abstract:
The operation of nuclear power plants involves continuous monitoring of the environment in their area. This monitoring is performed using a complex data acquisition system, which collects status information about the system itself and the values of many important physical variables, e.g., temperature, humidity, and dose rate. This paper describes a proposal for, and optimization of, the communication that takes place in a teledosimetric system between the central control server, responsible for data processing and storage, and the decentralized measuring stations, which measure the physical variables. Analyses of the ongoing communication were performed, and consequently the system architecture and communication were optimized.
Keywords: Communication protocol, transmission optimization, data acquisition.
7360 Low Power and Less Area Architecture for Integer Motion Estimation
Authors: C Hisham, K Komal, Amit K Mishra
Abstract:
The full-search block matching algorithm is widely used for hardware implementation of motion estimators in video compression algorithms. In this paper, we propose a new architecture consisting of a 2D parallel processing unit and a 1D unit, both working in parallel. The proposed architecture reduces both data-access power and computational power, which are the main causes of power consumption in integer motion estimation. It also completes the operations in nearly the same number of clock cycles as a 2D systolic array architecture. In this work, the sum of absolute differences (SAD), the most repeated operation in block matching, is calculated in two steps. The first step is to calculate the SAD of alternate rows using the 2D parallel unit. If the SAD calculated by the parallel unit is less than the stored minimum SAD, the SAD of the remaining rows is calculated by the 1D unit. Early termination, which stops avoidable computations, is achieved with the help of the alternate-rows method proposed in this paper and by finding a low initial SAD value based on motion vector prediction. Data reuse is applied to the reference blocks in the same search area, which significantly reduces memory accesses.
Keywords: Sum of absolute difference, high speed DSP.
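A small software model of this two-step SAD with early termination (alternate rows first, remaining rows only if still below the current best) might look as follows; block size and the initial best SAD are arbitrary.

```python
# Illustrative software model of two-step SAD with early termination.
import numpy as np

def sad_two_step(current, reference, best_sad):
    """Return the SAD of two equal-sized blocks, or None if it is rejected early."""
    partial = int(np.abs(current[::2].astype(int) - reference[::2].astype(int)).sum())
    if partial >= best_sad:                     # early termination after the alternate rows
        return None
    partial += int(np.abs(current[1::2].astype(int) - reference[1::2].astype(int)).sum())
    return partial if partial < best_sad else None

rng = np.random.default_rng(0)
cur = rng.integers(0, 256, (16, 16), dtype=np.uint8)
ref = rng.integers(0, 256, (16, 16), dtype=np.uint8)
best = 10_000                                   # e.g. SAD at the predicted motion vector
result = sad_two_step(cur, ref, best)
print("candidate SAD:", result if result is not None else "rejected early")
```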
7359 Numerical Modeling and Computer Simulation of Ground Movement above Underground Mine
Authors: A. Nuric, S. Nuric, L. Kricak, I. Lapandic, R. Husagic
Abstract:
This paper addresses the computer simulation of ground movement above an underground mine. The simulation was made with the software package ADINA for nonlinear elastic-plastic analysis using the finite element method. One of the representative profiles from the 'Stara Jama' mine in Zenica has been investigated. A collection and selection of both geo-mechanical data and geometric parameters of the mine was necessary for performing these simulations. The estimated results were compared with measured values (vertical displacement of the surface), and the simulation was then performed with the assumed dynamics and dimensions of the excavation over a period of time. Results are presented with bitmaps and charts.
Keywords: Computer, finite element method, mine, nonlinear analysis, numerical modeling, simulation, subsidence.
7358 CFD Simulations for Studying Flow Behaviors in Dipping Tank in Continuous Latex Gloves Production Lines
Authors: W. Koranuntachai, T. Chantrasmi, U. Nontakaew
Abstract:
Medical latex gloves are made from the latex compound in production lines. Latex dipping is considered one of the most important processes that directly affect the final product quality. In a continuous production line, a chain conveyor carries the formers through the process and partially submerges them into an open channel flow in a latex dipping tank. In general, the conveyor speed is determined by the desired production capacity, and the latex-dipping tank can then be designed accordingly. It is important to understand the flow behavior in the dipping tank in order to achieve high quality in the process. In this work, Computational Fluid Dynamics (CFD) was used to simulate the flow past an array of formers in a simplified latex dipping process. The computational results showed both the flow structure and the vortex generation between two formers. The maximum shear stress over the surface of the formers was used as the quality metric of the latex-dipping process when adjusting operation parameters.
Keywords: medical latex gloves, latex dipping, dipping tank, computational fluid dynamics
7357 Protein Graph Partitioning by Mutually Maximization of cycle-distributions
Authors: Frank Emmert Streib
Abstract:
The classification of protein structure is commonly performed not for the whole protein but for structural domains, i.e., compact functional units preserved during evolution. Hence, a first step towards protein structure classification is the separation of the protein into its domains. We approach the problem of protein domain identification by proposing a novel graph-theoretical algorithm. We represent the protein structure as an undirected, unweighted and unlabeled graph whose nodes correspond to the secondary structure elements of the protein. This graph is called the protein graph. The domains are then identified as partitions of the graph, corresponding to vertex sets obtained by maximizing an objective function that mutually maximizes the cycle distributions found in the partitions of the graph. Our algorithm does not utilize any other kind of information besides the cycle distribution to find the partitions. If a partition is found, the algorithm is iteratively applied to each of the resulting subgraphs. As a stopping criterion, we numerically calculate a significance level that indicates the stability of the predicted partition against a random rewiring of the protein graph. Hence, our algorithm terminates its iterative application automatically. We present results for one- and two-domain proteins, compare them with the domains manually assigned in the SCOP database, and discuss the differences.
Keywords: Graph partitioning, unweighted graph, protein domains.
7356 Quantitative Estimation of Periodicities in Lyari River Flow Routing
Authors: Rana Khalid Naeem, Asif Mansoor
Abstract:
Hydrologic time series data display a periodic structure, and the periodic autoregressive (PAR) process receives considerable attention in the modelling of such series. In this communication, a long-term record of the monthly waste flow of the Lyari River is quantified using the PAR modelling technique. The parameters of the model are estimated using the Frances and Paap methodology. This study shows that a periodic autoregressive model of order 2 is the most parsimonious model for assessing periodicity in the waste flow of the river. A careful statistical analysis of the residuals of the PAR(2) model is used to establish goodness of fit. Forecasts produced with the proposed model confirm its significance and effectiveness.
Keywords: Diagnostic checks, Lyari River, model selection, monthly waste flow, periodicity, periodic autoregressive model.
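A rough sketch of a PAR(2) fit is given below: a separate AR(2) regression is estimated for each calendar month by least squares (a simplification of the cited estimation methodology). The monthly flow series is synthetic.

```python
# Month-wise least-squares estimation of a PAR(2) model on a synthetic monthly series.
import numpy as np

rng = np.random.default_rng(2)
n_years, s = 30, 12
flow = 100 + 20 * np.sin(2 * np.pi * np.arange(n_years * s) / s) + rng.normal(0, 5, n_years * s)

params = np.zeros((s, 3))                      # (intercept, phi1, phi2) per calendar month
for m in range(s):
    t_idx = np.arange(m, n_years * s, s)       # all time indices falling in month m
    t_idx = t_idx[t_idx >= 2]                  # need two lags available
    X = np.column_stack([np.ones(t_idx.size), flow[t_idx - 1], flow[t_idx - 2]])
    params[m], *_ = np.linalg.lstsq(X, flow[t_idx], rcond=None)
print(params)                                  # month-dependent AR coefficients
```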
7355 Motion Detection Method for Clutter Rejection in the Bio-Radar Signal Processing
Authors: Carolina Gouveia, José Vieira, Pedro Pinho
Abstract:
Cardiopulmonary signal monitoring without the use of contact electrodes or any type of in-body sensor has several applications, such as sleep monitoring and the continuous monitoring of vital signs in bedridden patients. The system also has applications in the vehicular environment, monitoring the driver in order to avoid a possible accident in case of cardiac failure. Thus, the bio-radar system proposed in this paper can measure vital signs accurately by using the Doppler effect principle, which relates the received signal properties to the change in distance between the radar antennas and the person's chest wall. Since the bio-radar aims to monitor subjects in real time and over long periods, it is impossible to guarantee patient immobilization; hence, their random motion will interfere with the acquired signals. In this paper, a mathematical model of the bio-radar is presented, as well as its simulation in MATLAB. The algorithm used for breathing rate extraction is explained, and a method for DC offset removal based on a motion detection system is proposed. Furthermore, experimental tests were conducted to prove that the unavoidable random motion can be used to estimate the DC offsets accurately and thus remove them successfully.
Keywords: Bio-signals, DC component, Doppler effect, ellipse fitting, radar, SDR.
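One common way to estimate DC offsets from the IQ data, when motion sweeps a sufficiently large arc, is to fit a circle and take its centre as the offset; the sketch below uses the algebraic Kasa fit (a circular simplification of the ellipse fitting named in the keywords) on synthetic data, and is not the authors' method.

```python
# Kasa circle fit applied to synthetic IQ data: the fitted centre estimates the DC offsets.
import numpy as np

rng = np.random.default_rng(3)
theta = rng.uniform(0.0, 2.0, 400)                 # arc swept by chest-wall and body motion
dc = np.array([0.30, -0.20])                       # "true" offsets (unknown in practice)
i = dc[0] + np.cos(theta) + rng.normal(0, 0.01, theta.size)
q = dc[1] + np.sin(theta) + rng.normal(0, 0.01, theta.size)

# Solve 2*a*i + 2*b*q + c = i^2 + q^2 in the least-squares sense; (a, b) is the circle centre.
A = np.column_stack([2 * i, 2 * q, np.ones_like(i)])
b = i ** 2 + q ** 2
(a_hat, b_hat, c_hat), *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated DC offsets:", a_hat, b_hat)       # close to 0.30, -0.20
i_corr, q_corr = i - a_hat, q - b_hat              # offset-corrected IQ signal
```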
7354 The Role of Contextual Ontologies in Enterprise Modeling
Authors: Ahmed Arara
Abstract:
Information sharing and exchange, rather than information processing, is what characterizes information technology in the 21st century. Ontologies, as a shared common understanding, gain increasing attention, as they appear to be the most promising solution to enable information sharing both at a semantic level and in a machine-processable way. Domain-ontology-based modeling has been exploited to provide shareability and information exchange among the diverse, heterogeneous applications of enterprises. Contextual ontologies are 'an explicit specification of a contextual conceptualization'; that is, the ontology is characterized by concepts that have multiple representations and may exist in several contexts. Hence, contextual ontologies are a set of concepts and relationships that are seen from different perspectives. Contextualization allows ontologies to be partitioned according to their contexts. The need for contextual ontologies in enterprise modeling has become crucial due to the nature of today's competitive market. Information resources in an enterprise are distributed and diversified and need to be shared and communicated locally through the intranet and globally through the internet. This paper discusses the roles that ontologies play in enterprise modeling and how ontologies assist in building a conceptual model in order to provide communicative and interoperable information systems. The issue of enterprise modeling based on contextual domain ontologies is also investigated, and a framework is proposed for an enterprise model that consists of various applications.
Keywords: Contextual ontologies, enterprise model, domain ontology.
7353 Second Order Sliding Mode Observer Using MRAS Theory for Sensorless Control of Multiphase Induction Machine
Authors: Mohammad Jafarifar
Abstract:
This paper presents a speed estimation scheme based on the second-order sliding-mode super-twisting algorithm (STA) and Model Reference Adaptive System (MRAS) estimation theory for sensorless control of a multiphase induction machine. A stator current observer is designed based on the STA, which takes the place of the reference voltage model of the standard MRAS algorithm. The observer is insensitive to variation of the rotor resistance and magnetizing inductance once the states reach the sliding mode. Derivatives of the rotor flux are obtained and used as the states of the MRAS, thus eliminating the integration. Compared with a first-order sliding-mode speed estimator, the proposed scheme makes full use of the auxiliary sliding-mode surface, thus alleviating the chattering behavior without increasing the complexity. Simulation results show the robustness and effectiveness of the proposed scheme.
Keywords: Multiphase induction machine, field oriented control, sliding mode, super twisting algorithm, MRAS algorithm.
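For readers unfamiliar with the super-twisting algorithm itself, a generic scalar sketch is given below; it only illustrates the STA law on a toy first-order plant with a bounded disturbance, not the paper's observer, and the gains are arbitrary.

```python
# Generic scalar super-twisting algorithm (STA) on the toy plant x_dot = u + d.
import numpy as np

k1, k2, dt = 1.5, 1.1, 1e-3
x, v = 1.0, 0.0                                       # sliding variable and integral term
for step in range(5000):
    d = 0.3 * np.sin(2 * np.pi * 0.2 * step * dt)     # bounded matched disturbance
    s = x                                             # sliding variable
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v        # continuous STA control
    v += -k2 * np.sign(s) * dt                        # integral (discontinuous) term
    x += (u + d) * dt
print("final |s| =", abs(x))                          # driven near zero despite the disturbance
```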
7352 Stabilization of Rotational Motion of Spacecrafts Using Quantized Two Torque Inputs Based on Random Dither
Authors: Yusuke Kuramitsu, Tomoaki Hashimoto, Hirokazu Tahara
Abstract:
The control problem of underactuated spacecraft has attracted a considerable amount of interest. A control method for a spacecraft equipped with fewer than three control torques is useful when one of the three control torques has failed. On the other hand, the quantized control of systems is one of the important research topics of recent years. The random dither quantization method, which transforms a given continuous signal into a discrete signal by adding artificial random noise to the continuous signal before quantization, has also attracted considerable interest. The objective of this study is to develop a control method based on random dither quantization for stabilizing the rotational motion of a rigid spacecraft with two control inputs. In this paper, the effectiveness of the random dither quantization control method for stabilizing the rotational motion of spacecraft with two torque inputs is verified by numerical simulations.
Keywords: Spacecraft control, quantized control, nonlinear control, random dither method.
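A minimal sketch of the random dither quantization idea (uniform noise added before a uniform quantizer); the signal and step size below are arbitrary, not taken from the paper.

```python
# Random dither quantization: add uniform noise before rounding to the quantizer grid.
import numpy as np

def dithered_quantize(x, delta, rng):
    noise = rng.uniform(-delta / 2, delta / 2, size=np.shape(x))  # artificial random dither
    return delta * np.round((x + noise) / delta)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
u = 0.05 * np.sin(2 * np.pi * 3 * t)                 # continuous-valued torque command
uq = dithered_quantize(u, delta=0.02, rng=rng)
print("mean quantization error:", np.mean(uq - u))   # close to zero on average
```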
7351 Mathematical Modelling of Different Types of Body Support Surface for Pressure Ulcer Prevention
Authors: Mahbub C. Mishu, Venketesh N. Dubey, Tamas Hickish, Jonathan Cole
Abstract:
Pressure ulcers are a common problem for today’s healthcare industry. They occur due to external loads applied to the skin. When a subject is immobile for a long period of time and a continuous load is applied to a particular area of the body, blood flow is reduced and, as a result, a pressure ulcer develops. The body support surface has a significant role in preventing ulceration, so it is important to know the characteristics of the support surface under loading conditions. In this paper, we present mathematical models of different types of viscoelastic materials and show the validation of our simulation results against experiments.
Keywords: Pressure ulcer, viscoelastic material, mathematical model, experimental validation.
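As one common example of a viscoelastic constitutive model of the kind referred to above (the paper's specific models are not reproduced here), the Kelvin-Voigt element relates stress to strain and strain rate:

```latex
% Kelvin-Voigt viscoelastic element: a spring (stiffness E) in parallel with a dashpot (viscosity eta)
\sigma(t) = E\,\varepsilon(t) + \eta\,\frac{d\varepsilon(t)}{dt}
```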
7350 Transcriptional Evidence for the Involvement of MyD88 in Flagellin Recognition: Genomic Identification of Rock Bream MyD88 and Comparative Analysis
Authors: N. Umasuthan, S. D. N. K. Bathige, W. S. Thulasitha, I. Whang, J. Lee
Abstract:
MyD88 is an evolutionarily conserved, host-expressed adaptor protein that is essential for proper TLR/IL-1R immune-response signaling. A previously identified complete cDNA (1626 bp) of OfMyD88 comprised an ORF of 867 bp encoding a protein of 288 amino acids (32.9 kDa). The gDNA (3761 bp) of OfMyD88 revealed a quinquepartite genome organization composed of 5 exons (with sizes of 310, 132, 178, 92 and 155 bp) separated by 4 introns. All the introns displayed splice signals consistent with the consensus GT/AG rule. A bipartite domain structure with two domains, namely a death domain (24-103) coded by the 1st exon and a TIR domain (151-288) coded by the last 3 exons, was identified through in silico analysis. Moreover, homology modeling of these two domains revealed a similar quaternary folding nature between the human and rock bream homologs. A comprehensive comparison of vertebrate MyD88 genes showed that they possess a 5-exonic structure. In this structure, the last three exons were strongly conserved, suggesting that a rigid structure has been maintained during vertebrate evolution. A cluster of TATA box-like sequences was found 0.25 kb upstream of the cDNA start position. In addition, the putative 5'-flanking region of OfMyD88 was predicted to have TFBS implicated in TLR signaling, including copies of NFkB1, APRF/STAT3, Sp1, IRF1 and 2, and Stat1/2. Using the qPCR technique, ubiquitous mRNA expression was detected, including in liver and blood. Furthermore, significantly up-regulated transcriptional expression of OfMyD88 was detected in head kidney (12-24 h; >2-fold), spleen (6 h; 1.5-fold), liver (3 h; 1.9-fold) and intestine (24 h; ~2-fold) post-flagellin (Fla) challenge. These data suggest a crucial role for MyD88 in the antibacterial immunity of teleosts.
Keywords: MyD88, Innate immunity, Flagellin, Genomic analysis.
7349 Low-Cost Inertial Sensors Modeling Using Allan Variance
Authors: A. A. Hussen, I. N. Jleta
Abstract:
Micro-electromechanical system (MEMS) accelerometers and gyroscopes are suitable for the inertial navigation systems (INS) of many applications due to their low price, small dimensions and light weight. Their main disadvantage in comparison with classic sensors is worse long-term stability. The estimation accuracy is mostly affected by the time-dependent growth of the inertial sensor errors, especially the stochastic errors. In order to eliminate the negative effects of these random errors, they must be accurately modeled. In this paper, the Allan variance technique is used to model the stochastic errors of the inertial sensors. By performing a simple operation on the entire length of data, a characteristic curve is obtained whose inspection provides a systematic characterization of the various random errors contained in the inertial-sensor output data.
Keywords: Allan variance, accelerometer, gyroscope, stochastic errors.
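A minimal non-overlapping Allan variance sketch on a synthetic static record is shown below; inspecting the slopes of the resulting log-log curve is what identifies the random error terms (e.g. angle/velocity random walk at slope -1/2, bias instability at the flat region). The sampling rate and noise levels are assumed.

```python
# Non-overlapping Allan variance of a synthetic gyro record.
import numpy as np

def allan_variance(y, fs, taus):
    """Allan variance of rate samples y (sampling frequency fs in Hz) at averaging times taus (s)."""
    out = []
    for tau in taus:
        m = int(round(tau * fs))                     # samples per cluster
        n_clusters = len(y) // m
        if n_clusters < 2:
            break
        means = y[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)   # cluster averages
        out.append((tau, 0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)

rng = np.random.default_rng(0)
fs = 100.0
gyro = 0.02 * rng.standard_normal(200_000) + 0.001   # white noise + constant bias (deg/s)
for tau, avar in allan_variance(gyro, fs, taus=np.logspace(-1, 2, 10)):
    print("tau = %7.2f s   Allan deviation = %.5f" % (tau, np.sqrt(avar)))
```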
7348 Wind Load Characteristics in Libya
Authors: Mohammed B. Abohedma, Milad M. Alshebani
Abstract:
Recent trends in building construction in Libya are increasingly toward tall (high-rise) building projects. As a consequence, better estimation of the lateral loading in the design process is becoming the focus of a safe and cost-effective building industry. By and large, Libya is not considered a potential earthquake-prone zone, making wind the dominant design lateral load. Current design practice in the country estimates wind speeds on an essentially arbitrary basis, applying a certain factor of safety to the chosen wind speed. Therefore, the need for a more accurate estimation of wind speeds in Libya was the motivation behind this study. Records of wind speed data were collected from 22 meteorological stations in Libya and statistically analysed. The analysis of more than four decades of wind speed records suggests that the country can be divided into four zones of distinct wind speeds. A computer 'survey' program was used to draw a design wind speed contour map for the state of Libya. The paper presents the statistical analysis of Libya's recorded wind speed data and proposes design wind speed values for a 50-year return period covering the entire country.
Keywords: Contour map, return period, wind speed, zone.
7347 Tidal Data Analysis using ANN
Authors: Ritu Vijay, Rekha Govil
Abstract:
The design of a complete expansion that allows for compact representation of certain relevant classes of signals is a central problem in signal processing applications. Achieving such a representation means knowing the signal features for the purposes of denoising, classification, interpolation and forecasting. Multilayer neural networks are a relatively new class of techniques that are mathematically proven to approximate any continuous function arbitrarily well. Radial basis function (RBF) networks, which make use of Gaussian activation functions, have also been shown to be universal approximators. In this age of ever-increasing digitization in the storage, processing, analysis and communication of information, there are numerous applications where one needs to construct a continuously defined function or numerical algorithm to approximate, represent and reconstruct the given discrete data of a signal. Many times one wishes to manipulate the data in a way that requires information not included explicitly in the data, which is done through interpolation and/or extrapolation. Tidal data are a perfect example of a time series, and many statistical techniques have been applied to tidal data analysis and representation; ANNs are a recent addition to such techniques. In the present paper, we describe the time-series representation capabilities of a special type of ANN, the radial basis function network, and present the results of tidal data representation using RBF networks. Tidal data analysis and representation is one of the important requirements in marine science for forecasting.
Keywords: ANN, RBF, tidal data.
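A hedged sketch of the idea: a Gaussian RBF expansion with fixed centres is fitted to a synthetic tide record by regularized least squares, giving a continuously defined representation that can be evaluated at any time. The centres, width and tidal constituents are assumptions, not the paper's network.

```python
# Gaussian RBF representation of a synthetic tidal series via ridge-regularized least squares.
import numpy as np

t_obs = np.linspace(0.0, 48.0, 97)                      # observation times (h)
tide = 1.2 * np.sin(2 * np.pi * t_obs / 12.42) + 0.3 * np.sin(2 * np.pi * t_obs / 24.0)

centers = np.linspace(0.0, 48.0, 25)                    # fixed RBF centres over the record
width = 3.0                                             # Gaussian width (h)

def design_matrix(t):
    return np.exp(-((t[:, None] - centers[None, :]) / width) ** 2)

G = design_matrix(t_obs)
w = np.linalg.solve(G.T @ G + 1e-6 * np.eye(len(centers)), G.T @ tide)  # output-layer weights

t_fine = np.linspace(0.0, 48.0, 1000)
tide_hat = design_matrix(t_fine) @ w                    # continuous representation of the tide
print("max fit error at observations: %.4f m" % np.abs(G @ w - tide).max())
```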
7346 Cursor Position Estimation Model for Virtual Touch Screen Using Camera
Authors: Somkiat Wangsiripitak
Abstract:
A virtual touch screen using a camera is an ordinary screen that uses a camera to imitate a touch screen by taking a picture of an indicator, e.g., a finger, laid on the screen, converting the indicator tip position in the picture to a position on the screen, and moving the cursor on the screen to that position. In fact, the indicator is not laid on the screen directly; it is separated from it by the screen cover at some distance. In spite of this gap, if the eye-indicator-camera angle is not large, the mapping from the indicator tip positions in the image to the corresponding cursor positions on the screen is not difficult and can be done with little error. However, the larger the angle, the bigger the mapping error. This paper proposes a cursor position estimation model for a camera-based virtual touch screen that can eliminate this kind of error. The proposed model (i) moves the on-screen pilot cursor to the screen position that lies just behind the indicator tip when the indicator tip is viewed from the camera position, and then (ii) converts that pilot cursor position to the desired cursor position (the position on the screen when viewed from the user's eye through the indicator tip) by using the bilinear transformation. Simulation results show the correctness of the cursor position estimated with the proposed model.
Keywords: Bilinear transformation, cursor position, pilot cursor, virtual touch screen.
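The bilinear step can be illustrated as follows: four assumed calibration correspondences between pilot-cursor positions and desired cursor positions determine the eight coefficients of x' = a0 + a1·x + a2·y + a3·x·y and y' = b0 + b1·x + b2·y + b3·x·y. The calibration points below are made up.

```python
# Bilinear mapping from pilot-cursor positions to desired cursor positions.
import numpy as np

pilot = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], dtype=float)      # observed
desired = np.array([[12, 9], [1902, 14], [1898, 1066], [8, 1071]], dtype=float)  # calibrated

def bilinear_coeffs(src, dst):
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1], src[:, 0] * src[:, 1]])
    cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return cx, cy

def apply_bilinear(p, cx, cy):
    basis = np.array([1.0, p[0], p[1], p[0] * p[1]])
    return basis @ cx, basis @ cy

cx, cy = bilinear_coeffs(pilot, desired)
print(apply_bilinear((960.0, 540.0), cx, cy))   # estimated cursor position for a pilot point
```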
7345 New Wavelet-Based Superresolution Algorithm for Speckle Reduction in SAR Images
Authors: Mario Mastriani
Abstract:
This paper describes a novel projection algorithm, the Projection Onto Span Algorithm (POSA), for wavelet-based superresolution and for removing speckle of unknown variance (in the wavelet domain) from synthetic aperture radar (SAR) images. Although POSA works well as a new superresolution algorithm for image enhancement, image metrology and biometric identification, here it is used as a despeckling tool, which is the first time a superresolution algorithm has been used for despeckling SAR images. Specifically, the speckled SAR image is decomposed into wavelet subbands, POSA is applied to the high subbands, and the SAR image is reconstructed from the modified detail coefficients. Experimental results demonstrate that the new method compares favorably to several other despeckling methods on test SAR images.
Keywords: Projection, speckle, superresolution, synthetic aperture radar, thresholding, wavelets.
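The subband pipeline (decompose, modify detail coefficients, reconstruct) can be sketched generically with PyWavelets; the example below uses soft thresholding on a synthetic speckled image and does not implement POSA itself. Wavelet, level and noise model are assumptions.

```python
# Generic wavelet-domain despeckling sketch: decompose, soft-threshold details, reconstruct.
import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.2, 1.0, 128), (128, 1))                   # stand-in for a SAR scene
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)   # multiplicative speckle

coeffs = pywt.wavedec2(speckled, wavelet="db4", level=2)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745            # noise estimate from finest diagonal band
thr = sigma * np.sqrt(2 * np.log(speckled.size))              # universal threshold
coeffs = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode="soft") for c in level)
                        for level in coeffs[1:]]
despeckled = pywt.waverec2(coeffs, wavelet="db4")
print("residual std before/after: %.3f / %.3f"
      % (np.std(speckled - clean), np.std(despeckled[:128, :128] - clean)))
```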
7344 Formulation of Extended-Release Gliclazide Tablet Using a Mathematical Model for Estimation of Hypromellose
Authors: Farzad Khajavi, Farzaneh Jalilfar, Faranak Jafari, Leila Shokrani
Abstract:
Gliclazide was formulated as extended-release tablets in 30 and 60 mg dosage forms using hypromellose (HPMC K4M) as a retarding agent. Drug-release profiles were investigated in comparison with the reference Diamicron MR 30 and 60 mg tablets. The effects of powder particle size, the amount of hypromellose in the formulation, tablet hardness, and halving the tablets on the drug-release profile were investigated. A mathematical model describing hypromellose behaviour at the initial times of drug release was proposed for estimating the hypromellose content in the modified-release gliclazide 60 mg tablet. The model is based on the erosion of hypromellose in the dissolution medium and is applicable to describing the release profiles of insoluble drugs. Therefore, using the model and the amount of drug dissolved at the initial times of dissolution, the amount of hypromellose in the formulation can be predicted. The model was used to predict the HPMC K4M content in modified-release gliclazide 30 mg and extended-release quetiapine 200 mg tablets.
Keywords: Hypromellose, gliclazide, drug release, modified-release tablet, mathematical model.
7343 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare
Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl
Abstract:
Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
Keywords: Average run length, Bernoulli CUSUM chart, beta binomial posterior predictive distribution, clinical indicator, health care organization, highest posterior density interval.
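A sketch of how beta-binomial posterior predictive limits of this kind can be computed with SciPy is given below; the posterior parameters, the size of the next monitoring sample and the credibility level are assumed for illustration and are not the study's values.

```python
# Beta-binomial posterior predictive distribution and an HPD-style control region.
import numpy as np
from scipy.stats import betabinom

a, b = 14.0, 186.0          # e.g. Beta posterior after 13 events in 198 cases with a Beta(1,1) prior
n_next = 50                 # number of cases in the next monitoring sample
k = np.arange(n_next + 1)
pmf = betabinom.pmf(k, n_next, a, b)      # posterior predictive pmf of the event count

# HPD region: accumulate outcomes from most to least probable until the mass reaches 1 - alpha.
alpha = 0.01
order = np.argsort(pmf)[::-1]
cum = np.cumsum(pmf[order])
n_keep = np.searchsorted(cum, 1 - alpha) + 1
hpd = np.sort(order[:n_keep])
print("HPD control region for the next %d cases: %d to %d events" % (n_next, hpd[0], hpd[-1]))
# A plotted count outside this region would signal a possible shift in the underlying CI rate.
```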
7342 Existence and Uniqueness of Periodic Solution for a Discrete-time SIR Epidemic Model with Time Delays and Impulses
Abstract:
In this paper, a discrete-time SIR epidemic model with a nonlinear incidence rate, time delays and impulses is investigated. Sufficient conditions for the existence and uniqueness of periodic solutions are obtained by using the contraction mapping theorem and inequality techniques. An example is employed to illustrate our results.
Keywords: Discrete-time SIR epidemic model, time delay, nonlinear incidence rate, impulse.
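To make the model class concrete, here is a toy discrete-time SIR iteration with a one-step infection delay and a periodic pulse-vaccination impulse; the parameter values and incidence form are illustrative and not taken from the paper.

```python
# Toy discrete-time SIR model with a delayed incidence term and impulsive vaccination.
import numpy as np

beta, gamma, tau, pulse_every, pulse_frac = 0.4, 0.1, 1, 30, 0.1
N, T = 1.0, 400
S, I, R = [0.99], [0.01], [0.0]
for t in range(T):
    I_lag = I[t - tau] if t >= tau else I[0]             # delayed infective term
    new_inf = beta * S[t] * I_lag / N
    new_rec = gamma * I[t]
    s, i, r = S[t] - new_inf, I[t] + new_inf - new_rec, R[t] + new_rec
    if (t + 1) % pulse_every == 0:                       # impulsive vaccination of susceptibles
        s, r = s * (1 - pulse_frac), r + s * pulse_frac
    S.append(s); I.append(i); R.append(r)
print("final (S, I, R) = (%.3f, %.3f, %.3f)" % (S[-1], I[-1], R[-1]))
```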
7341 Wear Measuring and Wear Modelling Based On Archard, ASTM, and Neural Network Models
Authors: A. Shebani, C. Pislaru
Abstract:
Wear measurement and wear modelling are fundamental issues in industry, mainly related to economy and safety; therefore, there is a need to study wear measurement and wear estimation. The pin-on-disc test is the most common test used to study wear behaviour. In this paper, a pin-on-disc rig (AEROTECH UNIDEX 11) is used to investigate the effects of normal load and material hardness on wear under dry sliding conditions. In the pin-on-disc rig, two specimens were used: a steel pin with a tip, positioned perpendicular to the disc, and a disc made of aluminium. The pin wear and disc wear were measured using the following instruments: the Talysurf instrument, a digital microscope, and the Alicona instrument. The Talysurf profilometer was used to measure the pin/disc wear scar depth, the digital microscope was used to measure the diameter and width of the wear scar, and the Alicona was used to measure the pin wear and disc wear. After that, the Archard model, the American Society for Testing and Materials (ASTM) model, and a neural network model were used for pin/disc wear modelling. The simulations were implemented using the MATLAB program. This paper focuses on how the Alicona can be used for wear measurement and how the neural network can be used for wear estimation.
Keywords: Wear measuring, Wear modelling, Neural Network, Alicona.
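For reference, the Archard model mentioned above relates the worn volume to the applied load, sliding distance and hardness; in its standard form:

```latex
% Archard wear equation: worn volume V for normal load W, sliding distance L,
% hardness H of the softer surface, and dimensionless wear coefficient K
V = \frac{K \, W \, L}{H}
```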
7340 Optimization of Solar Tracking Systems
Authors: A. Zaher, A. Traore, F. Thiéry, T. Talbert, B. Shaer
Abstract:
In this paper, an intelligent approach is proposed to optimize the orientation of continuous solar tracking systems on cloudy days. In the case of a clear sky, direct sunlight is more important than diffuse radiation, so the panel is always pointed towards the sun. In the case of an overcast sky, the solar beam is close to zero, and the panel is placed horizontally to receive the maximum diffuse radiation. Under partly cloudy conditions, the panel must be pointed towards the source that emits the maximum solar energy, which may be anywhere in the sky dome. Thus, the idea of our approach is to analyze images captured by a ground-based sky camera system in order to detect the zone of the sky dome that is the optimal source of energy under cloudy conditions. The proposed approach is implemented using an experimental setup developed at the PROMES-CNRS laboratory in Perpignan (France). Under overcast conditions, the results were very satisfactory, and the intelligent approach provided efficiency gains of up to 9% relative to conventional continuous sun tracking systems.
Keywords: Clouds detection, fuzzy inference systems, images processing, sun trackers.