Search results for: Half step
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1275

345 Faster FPGA Routing Solution using DNA Computing

Authors: Manpreet Singh, Parvinder Singh Sandhu, Manjinder Singh Kahlon

Abstract:

There are many classical algorithms for finding routings in FPGAs, but with DNA computing the routes can be solved efficiently and quickly. The run-time complexity of DNA algorithms is much lower than that of the classical algorithms used for FPGA routing. Research in DNA computing is still at an early stage. The high information density of DNA molecules and the massive parallelism involved in DNA reactions make DNA computing a powerful tool. Many research accomplishments have shown that any procedure that can be programmed on a silicon computer can be realized as a DNA computing procedure. In this paper we propose a two-tier approach to the FPGA routing solution. First, the geometric FPGA detailed routing task is solved by transforming it into a Boolean satisfiability equation with the property that any assignment of input variables satisfying the equation specifies a valid routing; the absence of a satisfying assignment implies that the layout is un-routable. In the second step, a DNA search algorithm is applied to this Boolean equation to solve for routing alternatives, exploiting the properties of DNA computation. The simulated results are satisfactory and indicate the applicability of DNA computing to the FPGA routing problem.
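
As an illustration of the first tier, the sketch below encodes a toy two-net, two-track routing instance as a Boolean constraint and exhaustively tests every assignment; the exhaustive enumeration only stands in, conceptually, for the massive parallelism of the DNA search step. The instance, variable layout, and constraints are invented for illustration and are not from the paper.

from itertools import product

# Hypothetical toy instance: two nets, two routing tracks.
# Variable x_ij = True means net i is routed on track j.
def valid(assign):
    x00, x01, x10, x11 = assign
    each_net_routed = (x00 or x01) and (x10 or x11)             # every net gets a track
    no_shared_track = not (x00 and x10) and not (x01 and x11)   # tracks are exclusive
    one_track_per_net = not (x00 and x01) and not (x10 and x11)
    return each_net_routed and no_shared_track and one_track_per_net

# Testing all assignments at once mirrors, in spirit, the DNA step's
# massive parallelism: every candidate strand is "evaluated" together.
solutions = [a for a in product([False, True], repeat=4) if valid(a)]
print(solutions)   # a non-empty list means the layout is routable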

Keywords: FPGA, Routing, DNA Computing.

344 Molecular Dynamic Simulation and Receptor-based Pharmacophore Modeling on Human Renin for Discovery of Novel Inhibitors

Authors: Chanin Park, Sundarapandian Thangapandian, Yuno Lee, Minky Son, Shalini John, Young-sik Sohn, Keun Woo Lee

Abstract:

Hypertension is characterized by stress on the heart and blood vessels, increasing the risk of heart attack and renal diseases. The renin-angiotensin system (RAS) plays a major role in blood pressure control, and renin is the enzyme that controls the RAS at its rate-limiting step. Our aim is to develop new drug-like leads which can inhibit renin and thereby emerge as therapeutics for hypertension. To achieve this, molecular dynamics (MD) simulation and receptor-based pharmacophore modeling were implemented, and three renin-inhibitor complex structures were selected based on IC50 values and the scaffolds of the inhibitors. Three pharmacophore models were generated considering the conformations induced by each inhibitor. The compounds mapped to these models were selected and subjected to drug-likeness screening. The identified hits were docked into the active site of renin. Finally, hit1, which satisfied the binding mode and interaction energy criteria, was selected as a possible lead candidate for developing novel renin inhibitors.

Keywords: Renin inhibitor, Molecular dynamics simulation, Structure-based pharmacophore modeling

343 A Numerical Description of a Fibre Reinforced Concrete Using a Genetic Algorithm

Authors: Henrik L. Funke, Lars Ulke-Winter, Sandra Gelbrich, Lothar Kroll

Abstract:

This work reports an approach for the automatic adaptation of concrete formulations based on genetic algorithms (GAs) to optimize a wide range of different fit functions. To achieve this goal, a method was developed which provides a numerical description of a fibre reinforced concrete (FRC) mixture with regard to the production technology and the property spectrum of the concrete. In the first step, the FRC mixture with seven fixed components was characterized by varying the amounts of the components. For that purpose, ten concrete mixtures were prepared and tested. The testing procedure comprised flow spread, compressive strength, and bending tensile strength. The analysis and approximation of the determined data were carried out by GAs. The aim was to obtain a closed mathematical expression which best describes the given seven-point cloud of FRC by applying a Gene Expression Programming with Free Coefficients (GEP-FC) strategy. The seven-parameter FRC-mixture model generated by this method correlated well with the measured data. The developed procedure can be used to find closed mathematical expressions for concrete mixtures based on measured data.
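
The paper's GEP-FC strategy is not reproduced here, but the following minimal sketch shows the underlying idea of evolving the free coefficients of a fixed expression skeleton against measured points with a simple genetic algorithm; the data, expression form, and GA settings are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a concrete property y measured for component amount x.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
y = np.array([2.1, 3.9, 5.2, 5.9, 6.1, 5.8, 5.2])

def fitness(c):                                   # c = (a, b, d): free coefficients
    pred = c[0] * x * np.exp(-c[1] * x) + c[2]    # fixed expression skeleton
    return -np.mean((pred - y) ** 2)              # higher is better

pop = rng.normal(0, 1, size=(50, 3))              # random initial population
for gen in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)][-25:]       # keep the fitter half
    children = parents + rng.normal(0, 0.1, parents.shape)  # mutate offspring
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("fitted coefficients:", best)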

Keywords: Concrete design, fibre reinforced concrete, genetic algorithms, GEP-FC.

342 A Hybrid Differential Transform Approach for Laser Heating of a Double-Layered Thin Film

Authors: Cheng-Ying Lo

Abstract:

This paper adopts the hybrid differential transform approach for studying heat transfer in a gold/chromium thin film with an ultra-short-pulsed laser beam projected onto the gold side. The physical system, formulated based on the hyperbolic two-step heat transfer model, covers three characteristics: (i) coupling effects between the electron/lattice systems, (ii) thermal wave propagation in metals, and (iii) radiation effects along the interface. The differential transform method is used to transform the governing equations in the time domain into the spectrum equations, which are further discretized in the space domain by the finite difference method. The results, obtained through a recursive process, show that the electron temperature in the gold film can rise to several thousand degrees before the electron/lattice systems reach equilibrium at only several hundred degrees. The electron and lattice temperatures in the chromium film are much lower than those in the gold film.
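
For intuition about the electron/lattice coupling, here is a deliberately simplified explicit finite-difference sketch of a parabolic two-temperature model; the paper's hyperbolic formulation, interface radiation, and differential-transform step are omitted, and every constant below is only an order-of-magnitude placeholder, not a fitted material property.

import numpy as np

Ce, Cl = 2.1e4, 2.5e6        # electron / lattice heat capacities (J m^-3 K^-1)
ke, G  = 315.0, 2.6e16       # electron conductivity, electron-phonon coupling
dx, dt = 1e-9, 1e-17         # grid spacing (m), time step (s)
n = 100
Te = np.full(n, 300.0)       # electron temperature (K)
Tl = np.full(n, 300.0)       # lattice temperature (K)

def source(t):               # hypothetical ultra-short pulse heating the surface
    s = np.zeros(n)
    s[0] = 1e24 * np.exp(-((t - 5e-16) / 2e-16) ** 2)
    return s

t = 0.0
for _ in range(1000):
    lap = np.zeros(n)
    lap[1:-1] = (Te[2:] - 2 * Te[1:-1] + Te[:-2]) / dx**2
    lap[0] = (Te[1] - Te[0]) / dx**2          # insulated surface, conducts inward
    exch = G * (Te - Tl)                      # electron-lattice energy exchange
    Te += dt / Ce * (ke * lap - exch + source(t))
    Tl += dt / Cl * exch
    t += dt
print(Te.max(), Tl.max())    # electrons run far hotter than the lattice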

Keywords: Differential transform, hyperbolic heat transfer, thin film, ultrashort-pulsed laser.

341 Determining Threshold Levels of Burst-by-Burst AQAM/CDMA in Slow Rayleigh Fading Environments

Authors: F. Nejadebrahimi, M. ArdebiliPour

Abstract:

In this paper, we determine the threshold levels of adaptive modulation in a burst-by-burst CDMA system using a suboptimum method that attempts to increase the average bits-per-symbol (BPS) rate of the transceiver by switching between different modulation modes under varying channel conditions. In this method, we choose the minimum values of the average bit error rate (BER) and the maximum values of the average BPS over different values of the average channel signal-to-noise ratio (SNR), and then calculate the corresponding threshold levels, so that when the instantaneous SNR increases a higher-order modulation is employed to increase throughput and, vice versa, when the instantaneous SNR decreases a lower-order modulation is employed to improve the BER. In the transmission step, this adaptive modulation method compares the estimates obtained from pilot symbols against the set of suboptimum threshold levels, and the system chooses one of the states no transmission, BPSK, 4QAM, or square 16QAM for modulating the data. The channel considered in this paper is a slow Rayleigh fading channel.
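
A minimal sketch of the burst-by-burst switching logic is given below; the threshold values are invented placeholders, not the suboptimum levels derived in the paper.

# Illustrative thresholds only: (minimum estimated SNR in dB, mode, bits/symbol).
THRESHOLDS = [
    (18.0, "16QAM", 4),
    (10.0, "4QAM", 2),
    (4.0, "BPSK", 1),
]

def select_mode(estimated_snr_db):
    """Pick the highest-order mode whose threshold the pilot-based SNR
    estimate exceeds; otherwise suspend transmission for this burst."""
    for threshold, mode, bps in THRESHOLDS:
        if estimated_snr_db >= threshold:
            return mode, bps
    return "NO_TX", 0

for snr in (2.0, 7.5, 14.0, 25.0):
    print(snr, "dB ->", select_mode(snr))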

Keywords: AQAM, burst, BER, BPS, CDMA, threshold.

340 A Two Level Load Balancing Approach for Cloud Environment

Authors: Anurag Jain, Rajneesh Kumar

Abstract:

Cloud computing is the outcome of the rapid growth of the internet. Due to the elastic nature of cloud computing and the unpredictable behavior of users, load balancing is a major issue in the cloud computing paradigm. An efficient load balancing technique can improve performance in terms of efficient resource utilization and higher customer satisfaction. Load balancing can be implemented through task scheduling, resource allocation, and task migration. Parameters used to analyze the performance of a load balancing approach include response time, cost, data processing time, and throughput. This paper demonstrates a two-level load balancer approach that combines the join-idle-queue and join-shortest-queue approaches. The authors used the Cloud Analyst simulator to test the proposed two-level load balancer. The results are analyzed and compared with existing algorithms; as observed, the proposed work is one step ahead of existing techniques.
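
A minimal sketch of the two-level idea, dispatching to an idle VM when one exists (join-idle-queue) and otherwise to the shortest queue (join-shortest-queue), might look as follows; the class and its methods are hypothetical, and the Cloud Analyst simulation itself is not reproduced.

class Balancer:
    def __init__(self, n_vms):
        self.queues = [0] * n_vms       # outstanding tasks per VM
        self.idle = set(range(n_vms))   # level 1: registry of idle VMs

    def dispatch(self):
        if self.idle:                   # level 1: join idle queue
            vm = self.idle.pop()
        else:                           # level 2: join shortest queue
            vm = min(range(len(self.queues)), key=self.queues.__getitem__)
        self.queues[vm] += 1
        return vm

    def complete(self, vm):             # a VM finished one task
        self.queues[vm] -= 1
        if self.queues[vm] == 0:
            self.idle.add(vm)

lb = Balancer(4)
print([lb.dispatch() for _ in range(10)])   # placements across the 4 VMs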

Keywords: Cloud Analyst, Cloud Computing, Join Idle Queue, Join Shortest Queue, Load balancing, Task Scheduling.

339 Copper Price Prediction Model for Various Economic Situations

Authors: Haidy S. Ghali, Engy Serag, A. Samer Ezeldin

Abstract:

Copper is an essential raw material used in the construction industry. During 2021 and the first half of 2022, the global market suffered a significant fluctuation in copper raw material prices due to the aftermath of both the COVID-19 pandemic and the Russia-Ukraine war, which exposed consumers to an unexpected financial risk. To that end, this paper aims to develop two hybrid price prediction models using artificial neural networks and long short-term memory (ANN-LSTM), implemented in Python, that can forecast the average monthly copper prices traded on the London Metal Exchange; the first model is a multivariate model that forecasts the copper price of the next month, and the second is a univariate model that predicts the copper prices of the upcoming three months. Historical data of average monthly London Metal Exchange copper prices were collected from January 2009 to July 2022, and potential external factors were identified and employed in the multivariate model. These factors fall under three main categories, among them energy prices and economic indicators of the three major copper-exporting countries, depending on data availability. Before developing the LSTM models, the collected external parameters were analyzed against the copper prices using correlation and multicollinearity tests in R software; the parameters were then further screened to select those that influence the copper prices. The two LSTM models were then developed, and the dataset was divided into training, validation, and testing sets. The results show that the performance of the 3-month prediction model is better than that of the 1-month prediction model, but both models can still act as prediction tools for diverse economic situations.
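
A univariate sketch of the 3-month model in Keras might look as follows, assuming a 12-month lookback window (a choice not stated in the paper) and a synthetic series standing in for the LME price data.

import numpy as np
import tensorflow as tf

# Synthetic stand-in for 163 monthly prices (Jan 2009 - Jul 2022).
prices = np.cumsum(np.random.randn(163)) + 100
lookback, horizon = 12, 3        # assumed window; predict next 3 months

X, y = [], []
for i in range(len(prices) - lookback - horizon + 1):
    X.append(prices[i:i + lookback])
    y.append(prices[i + lookback:i + lookback + horizon])
X = np.array(X)[..., None]       # shape (samples, 12, 1)
y = np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(lookback, 1)),
    tf.keras.layers.Dense(horizon),          # 3-month-ahead output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, validation_split=0.2, verbose=0)
print(model.predict(X[-1:]))     # forecast for the next three months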

Keywords: Copper prices, prediction model, neural network, time series forecasting.

338 Effect of Curing Profile on Eliminating Void/Black Dot Formation in Underfill Epoxy for Hi-CTE Flip Chip Packaging

Authors: Zainudin Kornain, Azman Jalar, Rozaidi Rasid, Fong Chee Seng

Abstract:

Void formation in underfill is considered a failure in the flip chip manufacturing process. Voids may be caused by several factors, such as poor soldering and flux residue during the die attach process, void entrapment due to moisture contamination, the dispense pattern, and the setup of the curing process. This paper presents a comparison of single-step and two-step curing profiles with respect to void and black dot formation in underfill for the Hi-CTE Flip Chip Ceramic Ball Grid Array Package (FC-CBGA). Statistical analysis was conducted to examine how factors such as wafer lot, sawing technique, underfill fillet height, and curing profile recipe affected the formation of voids and black dots. C-Mode Scanning Acoustic Microscopy (C-SAM) was used to count the voids and black dots. It was shown that the two-step curing profile eliminated voids and black dots in the underfill after the curing process.

Keywords: Black dot formation, curing profile, FC-CBGA, underfill, void formation.

337 Low-Power and Low-Area Architecture for Integer Motion Estimation

Authors: C. Hisham, K. Komal, Amit K. Mishra

Abstract:

The full search block matching algorithm is widely used for hardware implementation of motion estimators in video compression algorithms. In this paper we propose a new architecture, which consists of a 2D parallel processing unit and a 1D unit, both working in parallel. The proposed architecture reduces both the data access power and the computational power, which are the main causes of power consumption in integer motion estimation. It also completes the operations in nearly the same number of clock cycles as a 2D systolic array architecture. In this work the sum of absolute differences (SAD), the most repeated operation in block matching, is calculated in two steps. The first step is to calculate the SAD for alternate rows using the 2D parallel unit. If the SAD calculated by the parallel unit is less than the stored minimum SAD, the SAD of the remaining rows is calculated by the 1D unit. Early termination, which stops avoidable computations, is achieved with the help of the alternate-rows method proposed in this paper and by finding a low initial SAD value based on motion vector prediction. Data reuse is applied to reference blocks in the same search area, which significantly reduces memory accesses.
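
The two-step SAD computation with early termination could be sketched as follows; the function is a hypothetical software analogue of the paper's 2D/1D hardware units, not a description of the circuit.

import numpy as np

def sad_alternate_rows(cur, ref, best_so_far):
    """Two-step SAD: score alternate rows first (the '2D unit'), and only
    finish the remaining rows (the '1D unit') if the partial sum is still
    below the current minimum."""
    partial = np.abs(cur[::2].astype(int) - ref[::2].astype(int)).sum()
    if partial >= best_so_far:          # early termination
        return None
    rest = np.abs(cur[1::2].astype(int) - ref[1::2].astype(int)).sum()
    return partial + rest

cur = np.random.randint(0, 256, (16, 16))
ref = np.random.randint(0, 256, (16, 16))
print(sad_alternate_rows(cur, ref, best_so_far=10**9))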

Keywords: Sum of absolute difference, high speed DSP.

336 Inadequate Requirements Engineering Process: A Key Factor for Poor Software Development in Developing Nations: A Case Study

Authors: K. Adu Michael, K. Alese Boniface

Abstract:

Developing reliable and sustainable software products is today a big challenge among up-coming software developers in Nigeria. The ability to develop the comprehensive problem statement needed to execute a proper requirements engineering process is missing. The need to describe the ‘what’ of a system in one document, written in a natural language, is a major step in the overall process of Software Engineering. Requirements Engineering is a process used to discover, analyze, and validate system requirements, and it is needed to reduce software errors at the early stages of software development. The importance of each step in Requirements Engineering is explained in the context of using a detailed problem statement from the client/customer to get an overview of the existing system along with expectations from the new system. This paper identifies inadequate Requirements Engineering practice as the major cause of poor software development in developing nations, using a case study of final-year computer science students at a tertiary-education institution in Nigeria.

Keywords: Client/Customer, Problem Statement, Requirements Engineering, Software Developers.

335 Risk Assessment in Durations and Costs for Construction of Industrial Facilities in Egypt Using Equations and Computer

Authors: M. Kamal Elbokl, Negadi Kheira

Abstract:

Risk evaluation is an important step in protecting your workers and your business, as well as in complying with the law. It helps you focus on the risks that really matter in your workplace, the ones with the potential to cause real harm. In this paper we introduce the basics of risk assessment and then describe some computer-based approaches to risk evaluation, especially Monte Carlo simulation and Microsoft Project.

We use the Program Evaluation and Review Technique (PERT) to evaluate and assess risks in industrial facilities. Using the PERT technique in Microsoft Project through the PERT toolbar, and using the PERTMASTER program together with the Primavera program, we evaluate many hazards and carry out the corresponding calculations with mathematical equations in order to make the right decisions. We define and calculate the risk factor and risk severity to rank each type of risk, and then deal with it in several ways, such as probability computations, curves, and tables. By introducing variables into the equations in computer programs, we calculate the risk in time and cost for the general case and then present some examples from the industrial facilities field.
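
As a worked example of the PERT expected-duration formula te = (a + 4m + b) / 6 together with a Monte Carlo check, consider the sketch below; the activities and their (optimistic, most likely, pessimistic) estimates are invented, and a triangular distribution is assumed for the sampling.

import numpy as np

activities = {  # hypothetical (a, m, b) duration estimates in days
    "excavation": (10, 14, 22),
    "structure": (30, 40, 60),
    "fit-out": (15, 20, 35),
}

# PERT expected duration per activity: te = (a + 4m + b) / 6.
te = {k: (a + 4 * m + b) / 6 for k, (a, m, b) in activities.items()}
print("PERT expected durations:", te, "total:", sum(te.values()))

# Monte Carlo: sample each activity and estimate the probability of
# finishing the whole chain within a target duration.
rng = np.random.default_rng(1)
runs = 100_000
total = sum(rng.triangular(a, m, b, runs) for a, m, b in activities.values())
print("P(total <= 80 days) =", (total <= 80).mean())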

Keywords: Risk, Industrial Facilities, PERT, Monte Carlo Simulation.

334 Preparation and Characterization of Recycled PET/PP Blends from Automotive Textile Waste for Use in the Furniture Edge Banding Sector

Authors: Merve Ozer, Tolga Gokkurt, Yasemen Gokkurt, Ezgi Bozbey

Abstract:

In this study, research has been conducted on the recovery of automotive textile waste, which sees heavy use in the automotive sector and consists of PET/PP, through post-product and post-consumer upcycling. The aim is to investigate formulation and production methods that enable the substitution of the original PP raw materials used in the production of plastic edge bands with PP/PET alloys. The lamination structure of this waste makes it impossible to separate the incompatible PP and PET phases, thereby hindering the production of high-quality raw materials or products through recycling. In this study, a two-step production process using different types of block and maleic-grafted copolymers was examined in detail to achieve compatibility between these two incompatible phases. The obtained plastic raw materials, referred to as PP/PET blends, were examined with a focus on their mechanical, thermal, and morphological properties in order to discuss their substitutability for the original raw materials.

Keywords: Twin screw extruders, mechanical recycling, melt blending, plastic blends, polyethylene, polypropylene, recycling of plastics.

333 Bioethanol: Indonesian Macro-Algae as a Renewable Feedstock for Liquid Fuel

Authors: T. Poespowati, E. Marsyahyo, R. Kartika-Dewi

Abstract:

This experimental study addresses the conversion of macro-algae into bioethanol through several procedural steps: preparation, pre-treatment, fermentation, and distillation. The main objective of this work was to investigate the roles of the buffer type, as a stabiliser of pH level, and of the fermentation time on the ethanol yield. For this purpose, experiments were carried out on macro-algae biomass to de-couple the pre-treatment and fermentation processes from those associated with the distillation process. β-glucosidase was used as a cellulose decomposer during the hydrolysis step, and yeast was used during the fermentation process. The macro-alga utilised as energy feedstock was Ulva lactuca, harvested from the southern coast of Central Java Island, Indonesia. Experiments were conducted in a simple fermenter over two different buffers, citrate buffer and acetate buffer, and over a range of fermentation times between 5 and 20 days. The ethanol production was found to be significantly affected by both variables. The optimum fermentation time was 10 days with citrate buffer, resulting in 0.88458% ethanol; the ethanol content after the distillation process was 0.985015%.

Keywords: Fermentation, ulva-lactuca, buffer, β-glucosidase, bioethanol.

332 Performance Evaluation of a Minimum Mean Square Error-Based Physical Sidelink Shared Channel Receiver under Fading Channel

Authors: Yang Fu, Jaime Rodrigo Navarro, Jose F. Monserrat, Faiza Bouchmal, Oscar Carrasco Quilis

Abstract:

Cellular Vehicle to Everything (C-V2X) is considered a promising solution for future autonomous driving. From Release 16 to Release 17, the Third Generation Partnership Project (3GPP) has introduced the definitions and services for 5G New Radio (NR) V2X. Since establishing a simulator for C-V2X communications is an essential preliminary step toward achieving reliable and stable communication links, this paper proposes a complete framework for a link-level simulator based on the 3GPP specifications for the Physical Sidelink Shared Channel (PSSCH) of the 5G NR Physical Layer (PHY). In this framework, several receiver-side algorithms, i.e., sliding-window channel estimation and Minimum Mean Square Error (MMSE)-based equalization, are developed. Finally, the performance of the developed PSSCH receiver is validated through extensive simulations under different assumptions.
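
The MMSE equalization step, per subcarrier, reduces to scaling the matched-filter output by |H|^2 plus the noise variance; a minimal sketch over a synthetic flat-fading QPSK burst is shown below (the channel, noise level, and frame layout are illustrative, not the 3GPP PSSCH configuration).

import numpy as np

def mmse_equalize(y, h_est, noise_var):
    """Per-subcarrier MMSE equalizer: like zero-forcing, but regularized
    by the noise variance to avoid noise amplification on deep fades."""
    return np.conj(h_est) * y / (np.abs(h_est) ** 2 + noise_var)

rng = np.random.default_rng(0)
n = 128
x = (2 * rng.integers(0, 2, n) - 1) + 1j * (2 * rng.integers(0, 2, n) - 1)  # QPSK
h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)             # Rayleigh fading
noise_var = 0.1
y = h * x + np.sqrt(noise_var / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))

x_hat = mmse_equalize(y, h, noise_var)           # perfect CSI assumed here
ser = np.mean(np.sign(x_hat.real) != np.sign(x.real))
print("symbol error rate (real branch):", ser)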

Keywords: C-V2X, 5G NR, PSSCH, link-level simulator, MMSE equalization.

331 Advanced Neural Network Learning Applied to Pulping Modeling

Authors: Z. Zainuddin, W. D. Wan Rosli, R. Lanouette, S. Sathasivam

Abstract:

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the pulping application. Three-layer feed-forward neural networks, trained using Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the eigenvalue clustering. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods which originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
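
A minimal numpy sketch of the preconditioned conjugate gradient iteration, here with a simple Jacobi preconditioner on a test matrix rather than a network-training objective, is given below; the beta update shown is the standard Fletcher-Reeves-style form.

import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for Ax = b, equivalent to plain
    CG on the better-conditioned system M^-1 A x = M^-1 b."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r                 # preconditioning step
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)   # Fletcher-Reeves-style update
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

n = 50
A = np.diag(np.linspace(1, 100, n))        # ill-conditioned SPD test matrix
b = np.ones(n)
M_inv = np.diag(1 / np.diag(A))            # Jacobi preconditioner
print(np.allclose(A @ pcg(A, b, M_inv), b))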

Keywords: Convergence, pulping modeling, neural networks, preconditioned conjugate gradient.

330 Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices

Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues

Abstract:

This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm, which is applied to high-frequency coefficients for compression/encoding. The algorithm starts by converting every three coefficients into a single value; this is accomplished using three different keys. The decoding/decompression uses a search method called the Quick Sequential Search (QSS) Decoding Algorithm, presented in this research, which is based on sequential search to recover the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to save all possible decoded coefficients; this is because another algorithm, such as conventional sequential search, could retrieve the encoded/compressed data independently of the proposed algorithm. The experimental results showed that the proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
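
The encode/decode round trip could be sketched as follows, assuming illustrative integer keys chosen so that every coefficient triple maps to a distinct value (the paper's own key-generation and compression pipeline are not reproduced).

from itertools import product

# Illustrative keys: 1, 21, 441 act as a base-21 positional code for
# coefficients in [-10, 10], so every triple encodes to a distinct value.
K1, K2, K3 = 1, 21, 441
LIMIT = 10

def encode(a, b, c):
    """Fold three coefficients into one value using the three keys."""
    return K1 * a + K2 * b + K3 * c

def qss_decode(value):
    """Sequential search for coefficient triples that reproduce the
    encoded value; hits are collected in an auxiliary list."""
    aux = []
    for a, b, c in product(range(-LIMIT, LIMIT + 1), repeat=3):
        if encode(a, b, c) == value:
            aux.append((a, b, c))
    return aux

print(qss_decode(encode(3, -7, 5)))   # -> [(3, -7, 5)]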

Keywords: Matrix Minimization Algorithm, Decoding Sequential Search Algorithm, image compression, Discrete Cosine Transform, Discrete Wavelet Transform.

329 Optimal Economic Restructuring Aimed at an Increase in GDP Constrained by a Decrease in Energy Consumption and CO2 Emissions

Authors: Alexander Y. Vaninsky

Abstract:

The objective of this paper is to find a way of economic restructuring, that is, a change in the shares of sectoral gross outputs, resulting in the maximum possible increase in the gross domestic product (GDP) combined with decreases in energy consumption and CO2 emissions. It uses an input-output model for the GDP and factorial models for the energy consumption and CO2 emissions to determine the projections of the gradient of GDP and the antigradients of energy consumption and CO2 emissions, respectively, onto a subspace formed by the structure-related variables. Since the gradient (antigradient) provides the direction of the steepest increase (decrease) of the objective function, and the projections retain this property for the functions' restriction to the subspace, each of the three directional vectors solves a particular problem of optimal structural change. In the next step, a type of factor analysis is applied to find a convex combination of the projected gradient and antigradients having the maximum possible positive correlation with each of the three. This convex combination provides the desired direction of structural change. The national economy of the United States is used as an example application.
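
Since sectoral shares must keep summing to one, feasible directions sum to zero, and projecting a gradient onto that subspace amounts to subtracting its mean; the sketch below illustrates this with made-up sectoral gradients and placeholder combination weights standing in for the paper's correlation-maximizing factor-analysis step.

import numpy as np

def project(v):
    """Project onto the subspace of zero-sum directions (shares sum to one)."""
    return v - v.mean()

g_gdp = project(np.array([0.8, 0.3, -0.2, 0.5]))      # gradient of GDP
g_energy = project(-np.array([0.6, 0.1, 0.4, 0.2]))   # antigradient of energy use
g_co2 = project(-np.array([0.7, 0.2, 0.3, 0.1]))      # antigradient of CO2

# Convex combination of the three directions; these weights are placeholders.
w = np.array([0.5, 0.25, 0.25])
direction = w[0] * g_gdp + w[1] * g_energy + w[2] * g_co2
print("structural change direction:", direction, "sum:", direction.sum())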

Keywords: Economic restructuring, Input-Output analysis, Divisia index, Factorial decomposition, E3 models.

328 Khilafat from Khilafat-e-Rashida: The Only Form of Governance to Unite Muslim Countries

Authors: Zoaib Mirza

Abstract:

Half of the Muslim countries in the world have declared Islam the state religion in their constitutions. Yet, none of these countries have implemented authentic Islamic laws in line with the Quran (Holy Book), practices of Prophet Mohammad (P.B.U.H) called the Sunnah, and his four successors known as the Rightly Guided - Khalifa. Since their independence, these countries have adopted different government systems like Democracy, Dictatorship, Republic, Communism, and Monarchy. Instead of benefiting the people, these government systems have put these countries into political, social, and economic crises. These Islamic countries do not have equal representation and membership in worldwide political forums. Western countries lead these forums. Therefore, it is now imperative for the Muslim leaders of all these countries to collaborate, reset, and implement the original Islamic form of government, which led to the prosperity and success of people, including non-Muslims, 1400 years ago. They should unite as one nation under Khalifat, which means establishing the authority of Allah (SWT) and following the divine commandments related to the social, political, and economic systems. As they have declared Islam in their constitution, they should work together to apply the divine framework of the governance revealed by Allah (SWT) and implemented by Prophet Mohammad (P.B.U.H) and his four successors called Khalifas. This paper provides an overview of the downfall and the end of the Khalifat system by 1924, the ways in which the West caused political, social, and economic crises in the Muslim countries, and finally, a summary of the social, political, and economic systems implemented by the Prophet Mohammad (P.B.U.H) and his successors, Khalifas, called the Rightly Guided – Hazrat Abu Bakr (RA), Hazrat Omar (RA), Hazrat Usman (RA), and Hazrat Ali (RA).

Keywords: Khalifat, Khilafat-e-Rashida, The Rightly Guided, colonization, capitalism, neocolonization, government systems.

327 Rainfall–Runoff Simulation Using WetSpa Model in Golestan Dam Basin, Iran

Authors: M. R. Dahmardeh Ghaleno, M. Nohtani, S. Khaledi

Abstract:

Flood simulation and prediction is one of the most active research areas in surface water management. WetSpa is a distributed, continuous, physically based model with a daily or hourly time step that describes precipitation, runoff, and evapotranspiration processes for both simple and complex contexts. The model uses a modified rational method for runoff calculation, and runoff is routed along the flow path using the diffusion-wave equation, which depends on the slope, velocity, and flow route characteristics. Golestan Dam Basin is located in Golestan province in Iran, between coordinates 55° 16´ 50" to 56° 4´ 25" E and 37° 19´ 39" to 37° 49´ 28" N. The area of the catchment is about 224 km2, and elevations in the catchment range from 414 m at the outlet to 2856 m, with an average slope of 29.78%. Results of the simulations show good agreement between calculated and measured hydrographs at the outlet of the basin. Based on the Nash-Sutcliffe model efficiency coefficient for the calibration period, the model estimated daily hydrographs and maximum flow rate with accuracies of up to 59% and 80.18%, respectively.
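
The Nash-Sutcliffe efficiency used for this evaluation is straightforward to compute; a minimal sketch with illustrative discharge values (not the Golestan Dam record) follows.

import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is
    no better than predicting the mean of the observations."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = [2.1, 3.4, 8.9, 15.2, 9.8, 5.5, 3.0]   # illustrative daily discharges (m^3/s)
sim = [2.4, 3.1, 7.5, 13.8, 10.9, 6.1, 3.3]
print("NSE =", round(nash_sutcliffe(obs, sim), 3))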

Keywords: Watershed simulation, WetSpa, stream flow, flood prediction.

326 Adaptive Motion Estimator Based on Variable Block Size Scheme

Authors: S. Dhahri, A. Zitouni, H. Chaouch, R. Tourki

Abstract:

This paper presents an adaptive motion estimator that can be dynamically reconfigured with the best algorithm depending on variations in the nature of the video during the runtime of an application. The Four Step Search (4SS) and the Gradient Search (GS) algorithms are integrated in the estimator to be used for rapid and slow video sequences, respectively. The Full Search Block Matching (FSBM) algorithm has also been integrated to be used for video sequences that are not real-time oriented. In order to efficiently reduce the computational cost while achieving better visual quality at low power, the proposed motion estimator is based on a Variable Block Size (VBS) scheme that uses only the 16x16, 16x8, 8x16, and 8x8 modes. Experimental results show that the adaptive motion estimator achieves better results in terms of Peak Signal to Noise Ratio (PSNR), computational cost, occupied FPGA area, and dissipated power relative to the most popular variable block size schemes presented in the literature.

Keywords: H264, configurable motion estimator, variable block size, PSNR, dissipated power.

325 Survey on Awareness, Knowledge and Practices: Managing Osteoporosis among Practitioners in a Tertiary Hospital, Malaysia

Authors: P. H. Tee, S. M. Zamri, K. M. Kasim, S. K. Tiew

Abstract:

This study evaluates the management of osteoporosis in a tertiary care government hospital in Malaysia. As the number of admitted patients with osteoporotic fractures is on the rise, osteoporotic medications are an increasing financial burden to government hospitals because they account for half of the orthopedic budget and expenditure. Comprehensive knowledge among practitioners is important for detecting this preventable disease early and avoiding its serious complications. The purpose of this study is to evaluate the awareness, knowledge, and practices in managing osteoporosis among practitioners in Hospital Tengku Ampuan Rahimah (HTAR), Klang. A questionnaire from an overseas study on managing osteoporosis among primary care physicians, adapted to Malaysia’s Clinical Practice Guideline on Osteoporosis 2012 (revised 2015) and international guidelines, was distributed to all orthopedic practitioners in HTAR Klang (including surgeons and orthopedic medical officers), endocrinologists, rheumatologists, and geriatricians. The participants were evaluated on their expertise in the diagnosis, prevention, treatment decisions, and medications for osteoporosis. Collected data were analyzed with descriptive and statistical analyses as appropriate. All 45 participants responded to the questionnaire. Participants scored highest on expertise in prevention, followed by diagnosis, treatment decisions, and lastly medication. Most practitioners stated that self-initiated continuing professional education from articles and books was the most effective way to update their knowledge, followed by attendance at conferences on osteoporosis. This study confirms the importance of comprehensive training and education regarding osteoporosis among tertiary care physicians and surgeons, predominantly in pharmacotherapy, to deliver wholesome care for osteoporotic patients.

Keywords: Awareness, knowledge, osteoporosis, practices.

324 Model Reduction of Linear Systems by Conventional and Evolutionary Techniques

Authors: S. Panda, S. K. Tomar, R. Prasad, C. Ardil

Abstract:

The reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM), using a conventional and an evolutionary technique, is presented in this paper. In the conventional technique, the combined advantages of the Mihailov stability criterion and the continued fraction expansions (CFE) technique are employed: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of continued fraction expansions. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher-order model. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model pertaining to a unit step input. Both methods are illustrated through a numerical example.
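
A compact sketch of the evolutionary branch, using PSO to fit a second-order reduced model to a hypothetical fourth-order plant by minimizing the ISE between unit step responses, is given below; the plant, swarm settings, and parameterization are illustrative assumptions.

import numpy as np
from scipy import signal

# Hypothetical 4th-order plant; reduced model: G_r(s) = b0 / (s^2 + a1 s + a0).
t = np.linspace(0, 10, 500)
_, y_full = signal.step(([100.0], [1.0, 10.0, 35.0, 50.0, 24.0]), T=t)

def ise(p):
    b0, a1, a0 = p
    _, y_red = signal.step(([b0], [1.0, a1, a0]), T=t)
    return np.trapz((y_full - y_red) ** 2, t)     # integral squared error

rng = np.random.default_rng(0)
pos = rng.uniform(0.1, 10.0, (30, 3))             # particle positions
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([ise(p) for p in pos])
for _ in range(100):
    gbest = pbest[pbest_f.argmin()]
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, 20.0)          # keep coefficients positive/stable
    f = np.array([ise(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
print("reduced model params:", pbest[pbest_f.argmin()], "ISE:", pbest_f.min())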

Keywords: Reduced Order Modeling, Stability, Continued Fraction Expansions, Mihailov Stability Criterion, Particle Swarm Optimization, Integral Squared Error.

323 Characterization and Evaluation of the Activity of Dipeptidyl Peptidase IV from the Black-Bellied Hornet Vespa basalis

Authors: Feng Chia Hsieh, Sheng Kuo Hsieh, Tzyy Rong Jinn

Abstract:

This work characterizes and evaluates the activity of Vespa basalis DPP-IV expressed in Spodoptera frugiperda 21 (Sf21) cells. The expression of rDPP-IV was confirmed by SDS–PAGE, Western blot analyses, LC-MS/MS, and measurement of its peptidase specificity. One-step purification was performed by Ni-NTA affinity chromatography, and the total amount of rDPP-IV recovered was approximately 6.4 mg per liter of infected culture medium; an equivalent amount would be produced by 1x10⁹ infected Sf21 insect cells. The affinity purification yielded a highly stable rDPP-IV enzyme with significant peptidase activity. The rDPP-IV exhibited classical Michaelis–Menten kinetics, with kcat/Km in the range of 10-500 mM⁻¹s⁻¹ for the five synthetic substrates, the optimum substrate being Ala-Pro-pNA. As expected, in the inhibition assay the enzymatic activity of rDPP-IV was significantly reduced, by 80% or 60%, in the presence of sitagliptin (a DPP-IV inhibitor) or PMSF (a serine protease inhibitor), respectively, but was not appreciably affected by iodoacetamide (a cysteine protease inhibitor).

Keywords: Dipeptidyl-peptidase IV, phenylmethylsulfonyl fluoride, serine protease, sitagliptin, Vespa basalis.

322 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models

Authors: Yoonsuh Jung

Abstract:

As DNA microarray data contain a relatively small sample size compared to the number of genes, high-dimensional models are often employed. In high-dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection; it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the ‘optimal’ value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidates for the tuning parameter first, and then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. We show, via real and simulated data sets, that the value selected by the suggested methods often leads to stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation.
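
The candidate-averaging idea can be sketched as follows on synthetic stand-in data, using a lasso path as the high-dimensional model; the number of candidates kept and the softmax-style weighting are illustrative choices, not the paper's estimator.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 40, 200                       # small n, large p, as in microarray data
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 2.0   # a few truly active variables
y = X @ beta + rng.normal(size=n)

lambdas = np.logspace(-2, 1, 20)
scores = np.array([
    cross_val_score(Lasso(alpha=l, max_iter=5000), X, y,
                    cv=5, scoring="neg_mean_squared_error").mean()
    for l in lambdas
])

top = np.argsort(scores)[-5:]        # keep several good candidates, not one
weights = np.exp(scores[top] - scores[top].max())
weights /= weights.sum()             # performance-based weights
lambda_avg = float(np.sum(weights * lambdas[top]))
print("single-best lambda:", lambdas[scores.argmax()], "averaged:", lambda_avg)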

Keywords: Cross Validation, Parameter Averaging, Parameter Selection, Regularization Parameter Search.

321 An Ontology Based Question Answering System on Software Test Document Domain

Authors: Meltem Serhatli, Ferda N. Alpaslan

Abstract:

Processing data by computers and performing reasoning tasks is an important aim in Computer Science, and the Semantic Web is one step towards it. The use of ontologies to semantically enrich information is the current trend. Huge amounts of domain-specific, unstructured on-line data need to be expressed in a machine-understandable and semantically searchable format. Currently, users are often forced to search manually through the results returned by keyword-based search services, and they also want to use their native languages to express what they are searching for. In this paper, an ontology-based automated question answering system for the software test documents domain is presented. The system allows users to enter a question about the domain in natural language and returns the exact answer to the question. Converting the natural language question into an ontology-based query is the challenging part of the system. To achieve this, a new algorithm for converting free text into an ontology-based search engine query is proposed. The algorithm is based on identifying the suitable question type and parsing the words of the question sentence.

Keywords: Description Logics, ontology, question answering, reasoning.

320 Feature Extraction from Aerial Photos

Authors: Mesut Gündüz, Ferruh Yildiz, Ayşe Onat

Abstract:

In Geographic Information Systems, one of the sources of needed geographic data is the digitization of analog maps and the evaluation of aerial and satellite photos. In this study, a method is discussed that can be used to extract vector features and create vectorized drawing files from aerial photos, and a software tool developed for this purpose is presented. Converting from raster to vector, also known as vectorization, is the most important step in creating vectorized drawing files. In the developed algorithm, preprocessing is first applied to the aerial photo: converting to grayscale if necessary, reducing noise, applying some filters, determining the edges of the objects, etc. After these steps, every pixel of the photo is traced from upper left to lower right by examining its neighborhood relationships, and one-pixel-wide lines or polylines are obtained. The traced lines have to be erased to prevent confusion as vectorization continues, because if they are not erased they can be perceived as new lines; however, erasing them can cause discontinuities in the vector drawing. Therefore, the image is converted from 2-bit to 8-bit, and the detected pixels are expressed with a different bit value. In conclusion, the aerial photo can be converted into a vector form consisting of lines and polylines and can be opened in any CAD application.

Keywords: Vectorization, aerial photos, vectorized drawing file.

319 Artificial Intelligence Techniques Applications for Power Disturbances Classification

Authors: K. Manimala, K. Selvi, R. Ahila

Abstract:

Artificial Intelligence (AI) methods are increasingly being used for problem solving. This paper concerns using AI-type learning machines for the power quality problem, a problem of general interest to power systems, which must provide quality power to all appliances. Electrical power of good quality is essential for the proper operation of electronic equipment such as computers and PLCs. Malfunction of such equipment may lead to loss of production or disruption of critical services, resulting in huge financial and other losses. It is therefore necessary that critical loads be supplied with electricity of acceptable quality. Recognizing the presence of a disturbance and classifying an existing disturbance into a particular type is the first step in combating the problem. In this work, two classes of AI methods for power quality data mining are studied: Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs). We show that SVMs are superior to ANNs in two critical respects: SVMs train and run an order of magnitude faster, and SVMs give higher classification accuracy.

Keywords: Back propagation network, power quality, probabilistic neural network, radial basis function, support vector machine.

318 Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features

Authors: Nadia Masood Khan, Muhammad Salman Khan, Gul Muhammad Khan

Abstract:

Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need to develop computer-aided detection/diagnosis (CAD) systems to assist cardiologists in interpreting heart sounds and providing second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. Support vector machines (SVM), artificial neural networks (ANN), and cartesian genetic programming evolved artificial neural networks (CGPANN), without the application of any segmentation algorithm, are explored in this study. The signals are first pre-processed to remove any unwanted frequencies. Both time- and frequency-domain features are then extracted for training the different models. The different algorithms are tested in multiple scenarios, and their strengths and weaknesses are discussed. Results indicate that SVM outperforms the rest with an accuracy of 73.64%.

Keywords: Pattern recognition, machine learning, computer aided diagnosis, heart sound classification, feature extraction.

317 An Investigation on Overstrength Factor (Ω) of Reinforced Concrete Buildings in Turkish Earthquake Draft Code (TEC-2016)

Authors: M. Hakan Arslan, I. Hakkı Erkan

Abstract:

The overstrength factor is an important parameter of the load reduction factor. In this research, the overstrength factor (Ω) of reinforced concrete (RC) buildings and the parameters affecting Ω in the TEC-2016 draft version have been explored. To this end, 48 RC buildings were modeled according to the current seismic code TEC-2007 and the Turkish Building Code-500-2000 criteria. After the modeling step, nonlinear static pushover analyses were applied to these buildings using TEC-2007 Section 7. After the nonlinear pushover analyses, capacity curves (lateral load versus lateral top displacement) were plotted for the 48 RC buildings. Using the capacity curves, overstrength factors (Ω) were derived for each building. The obtained overstrength factor (Ω) values were compared with the TEC-2016 values for the related building types, and the results were interpreted. According to the values obtained in the study, the overstrength factor (Ω) given in the TEC-2016 draft code is found to be quite suitable.
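
The overstrength factor itself is just the ratio of the maximum base shear sustained in the pushover analysis to the design base shear; a minimal sketch with made-up capacity-curve values follows.

import numpy as np

# Illustrative capacity curve, not data from the study.
top_displacement = np.array([0.00, 0.02, 0.05, 0.10, 0.20, 0.30])    # m
base_shear = np.array([0.0, 900.0, 1600.0, 2100.0, 2350.0, 2300.0])  # kN
V_design = 1000.0                                                    # design base shear (kN)

omega = base_shear.max() / V_design   # overstrength factor Ω = Vmax / Vd
print("overstrength factor =", omega)  # 2.35 for these made-up values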

Keywords: Reinforced concrete buildings, overstrength factor, earthquake, static pushover analysis.

316 An Adaptive Virtual Desktop Service in Cloud Computing Platform

Authors: Shuen-Tai Wang, Hsi-Ya Chang

Abstract:

Cloud computing has become more and more mature over the last few years, and consequently the demand for better cloud services is increasing rapidly. One of the research topics for improving cloud services is desktop computing in a virtualized environment. This paper aims at the development of an adaptive virtual desktop service on a cloud computing platform, based on our previous research on virtualization technology. We implement cloud virtual desktop and application software streaming technology that make it possible to provide Virtual Desktop as a Service (VDaaS). Given the development of remote desktop virtualization, the user’s desktop can be shifted from the traditional PC environment to the cloud-enabled environment, where it is stored on a remote virtual machine rather than locally. This proposed effort has the potential to provide an efficient, resilient, and elastic environment for online cloud services. Users no longer need to bear the burden of platform maintenance, which drastically reduces the overall cost of hardware and software licenses. Moreover, this flexible remote desktop service represents the next significant step towards the mobile workplace, letting users access their desktop environments from virtually anywhere.

Keywords: Cloud Computing, Virtualization, Virtual Desktop, VDaaS.
