Search results for: Gaussian function
4322 The Co-Simulation Interface SystemC/Matlab Applied in JPEG and SDR Application
Authors: Walid Hassairi, Moncef Bousselmi, Mohamed Abid
Abstract:
Functional verification is a major part of today's system design task. Several approaches are available for verification at a high abstraction level, where designs are often modeled using MATLAB/Simulink. However, the diversity of approaches is a barrier to a unified verification flow. In this paper, we propose a co-simulation interface between SystemC and MATLAB/Simulink to enable functional verification of multi-abstraction-level designs. The resulting verification flow is tested on the JPEG compression algorithm. The required synchronization of both simulation environments, as well as data type conversion, is solved by the proposed co-simulation flow. We divided the JPEG encoder into two parts. The first, the DCT, is implemented in SystemC and represents the HW part. The second, consisting of quantization and entropy encoding, is implemented in MATLAB and represents the SW part. For communication and synchronization between these two parts, we use the S-Function and the MATLAB engine in Simulink. With this research premise, this study introduces a new SystemC hardware implementation of the DCT. We compare the result of our simulation to a pure SW/SW implementation. We observe a simulation time reduction of 88.15% in JPEG, and the design efficiency is 90% in SDR.
Keywords: hardware/software, co-design, co-simulation, SystemC, MATLAB, S-Function, communication, synchronization
Procedia PDF Downloads 405

4321 Recognizing an Individual, Their Topic of Conversation and Cultural Background from 3D Body Movement
Authors: Gheida J. Shahrour, Martin J. Russell
Abstract:
The 3D body movement signals captured during human-human conversation include clues not only to the content of people's communication but also to their culture and personality. This paper is concerned with the automatic extraction of this information from body movement signals. For the purpose of this research, we collected a novel corpus from 27 subjects, arranged into groups according to their culture. We arranged each group into pairs, and each pair communicated about different topics. A state-of-the-art recognition system is applied to the problems of person, culture, and topic recognition, borrowing modeling, classification, and normalization techniques from speech recognition. We used Gaussian Mixture Modeling (GMM) as the main technique for building our three systems, obtaining 77.78%, 55.47%, and 39.06% accuracy for person, culture, and topic recognition, respectively. In addition, we combined the above GMM systems with Support Vector Machines (SVM) to obtain 85.42%, 62.50%, and 40.63% accuracy for person, culture, and topic recognition, respectively. Although direct comparison among these three recognition systems is difficult, our person recognition system performs best for both GMM and GMM-SVM, suggesting that inter-subject differences (i.e., subjects' personality traits) are a major source of variation. When removing these traits from the culture and topic recognition systems using the Nuisance Attribute Projection (NAP) and Intersession Variability Compensation (ISVC) techniques, we obtained 73.44% and 46.09% accuracy for culture and topic recognition, respectively.
Keywords: person recognition, topic recognition, culture recognition, 3D body movement signals, variability compensation
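For orientation, below is a minimal sketch of the core GMM step (one mixture model per class, classification by maximum average log-likelihood) using scikit-learn; the synthetic features, dimensionality, and mixture settings are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch (not the authors' code): one GMM per class,
# classify a test recording by maximum average frame log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in for per-frame body-movement feature vectors.
train = {c: rng.normal(loc=c, size=(200, 12)) for c in range(3)}
test = rng.normal(loc=1, size=(50, 12))

models = {c: GaussianMixture(n_components=8, covariance_type="diag",
                             random_state=0).fit(X)
          for c, X in train.items()}

# Average frame log-likelihood under each class model; pick the best.
scores = {c: m.score(test) for c, m in models.items()}
print("predicted class:", max(scores, key=scores.get))
```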
Procedia PDF Downloads 541

4320 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions
Authors: Valerii Dashuk
Abstract:
The usage of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with this technique allows checking simultaneously the shift and the probability of that shift (i.e., portfolio risks). Another application is based on the normal distribution, which is fully defined by mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each 'point' of the domain of these distributions is calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values, whose table was designed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strong sides of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous works. At the moment, the extension to the 2-dimensional case is done, which allows testing jointly up to 5 parameters. The derived technique is therefore equivalent to classic tests in standard situations but gives more efficient alternatives in nonstandard problems and on large amounts of data.
Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function
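A minimal numerical sketch of the underlying measure as described (my reading, not the authors' code): the integrated absolute difference between two normal densities, which vanishes exactly when the two parameter sets coincide.

```python
# Sketch: total absolute difference between two normal densities; a test
# would compare this statistic against simulated critical values.
import numpy as np
from scipy.stats import norm

mu1, s1, mu2, s2 = 0.0, 1.0, 0.4, 1.3   # the two parameter sets to compare
x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
diff = np.sum(np.abs(norm.pdf(x, mu1, s1) - norm.pdf(x, mu2, s2))) * dx
print(f"integrated |f1 - f2| = {diff:.4f}")  # 0 iff the parameter sets match
```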
Procedia PDF Downloads 174

4319 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks
Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez
Abstract:
Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as necessary given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. The DNN was initialized as a one-hidden-layer network, and the number of hidden layers was increased during training to five. A refinement of the weight matrices and bias terms, trained by Stochastic Gradient Descent (SGD), was also performed, with the cross-entropy criterion as the objective function. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three lattice-based confidence scores (based on the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is calculated by means of the Word Error Rate (WER). The test dataset was renewed in order to extract the new transcriptions added to the training dataset. Experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised DNN-based ASR model outperformed the GMM model, in terms of WER, in all tested cases. The best result obtained an improvement of 6% relative WER. These promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning
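A schematic sketch of the self-training loop on synthetic data, with a scikit-learn MLP standing in for the DNN acoustic model and posterior probabilities standing in for the lattice-based confidence scores; all names and settings are illustrative assumptions.

```python
# Self-training sketch: seed model -> decode unlabeled data -> keep
# confident hypotheses -> retrain on the augmented set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=40, n_informative=20,
                           n_classes=4, random_state=0)
X_lab, y_lab = X[:200], y[:200]           # small transcribed set
X_unlab = X[200:]                          # untranscribed audio features

seed = MLPClassifier(hidden_layer_sizes=(256,) * 5, solver="sgd",
                     max_iter=200, random_state=0).fit(X_lab, y_lab)

# Decode unlabeled data; keep only confident hypotheses (the paper uses
# lattice-based confidence scores instead of raw posteriors).
post = seed.predict_proba(X_unlab)
conf = post.max(axis=1) > 0.9
X_aug = np.vstack([X_lab, X_unlab[conf]])
y_aug = np.concatenate([y_lab, post[conf].argmax(axis=1)])

retrained = MLPClassifier(hidden_layer_sizes=(256,) * 5, solver="sgd",
                          max_iter=200, random_state=0).fit(X_aug, y_aug)
```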
Procedia PDF Downloads 339

4318 Improvement of Process Competitiveness Using Intelligent Reference Models
Authors: Julio Macedo
Abstract:
Several methodologies are now available to conceive improvements to a process so that it becomes competitive, for example total quality management, process reengineering, Six Sigma, and the define-measure-analyze-improve-control (DMAIC) method. These improvements are of different natures and can be external to the process, represented by an optimization model or a discrete simulation model. In addition, the process stakeholders are numerous and have different desired performances for the process. Hence, the methodologies above lack a tool to aid in the conception of the required improvements. To fill this void, we suggest the use of intelligent reference models. A reference model is a set of qualitative differential equations and an objective function that minimizes the gap between the current and desired performance indexes of the process. The reference models are intelligent: when they receive the current state of the problematic process and the desired performance indexes, they generate the required improvements. The reference models are fuzzy cognitive maps augmented with an objective function and trained using the improvements implemented by high-performance firms. Experiments with a set of students show that the reference models allow them to conceive more improvements than students who do not use these models.
Keywords: continuous improvement, fuzzy cognitive maps, process competitiveness, qualitative simulation, system dynamics
Procedia PDF Downloads 87

4317 Comparative Analysis of Islamic Bank in Indonesia and Malaysia with Risk Profile, Good Corporate Governance, Earnings, and Capital Method: Performance of Business Function and Social Function Perspective
Authors: Achsania Hendratmi, Nisful Laila, Fatin Fadhilah Hasib, Puji Sucia Sukmaningrum
Abstract:
This study aims to compare Islamic banks in Indonesia and Malaysia and identify their differences using the RGEC method (Risk Profile, Good Corporate Governance, Earnings, and Capital). It examines the comparison in business and social performance of eleven Islamic banks in Indonesia and fifteen Islamic banks in Malaysia. The research used a quantitative approach, and data were collected from the annual reports of the sampled banks over the period 2011-2015. The results of the Independent Samples T-test and Mann-Whitney Test showed differences in the business performance of Islamic banks in Indonesia and Malaysia in the aspects of Risk Profile (FDR), GCG, and Earnings (ROA). There were also differences in business and social performance in the Earnings (ROE), Capital (CAR), and Sharia Conformity Indicator (PSR and ZR) aspects.
Keywords: business performance, Islamic banks, RGEC, social performance
Procedia PDF Downloads 294

4316 Deepnic, A Method to Transform Each Variable into Image for Deep Learning
Authors: Nguyen J. M., Lucas G., Brunner M., Ruan S., Antonioli D.
Abstract:
Deep learning based on convolutional neural networks (CNNs) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image where each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance, and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and the range of coefficients of the inputs. Each variable can therefore be expressed as a function of a matrix of 2 vectors corresponding to an image whose pixels express predictive capabilities. Our objective is to transform each variable of tabular data into an image that can be analysed by CNNs, unlike other methods which use all the variables to construct a single image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expressions on an Affymetrix chip.
Keywords: tabular data, deep learning, perfect trees, NICs
Procedia PDF Downloads 90

4315 Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing
Authors: Derlis Gregor, Kevin Cikel, Mario Arzamendia, Raúl Gregor
Abstract:
This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (Single Board Computer), like the Raspberry Pi 2, so that it can be implemented in real time. The first step of the proposed algorithm limits the zone of the image that will be processed. The second step performs the detection of moving objects using a BGS (Background Subtraction) algorithm based on the GMM (Gaussian Mixture Model), as well as a shadow removal algorithm using physically-based features, followed by morphological operations. In the third step, vehicle detection is performed using edge detection algorithms, and vehicle tracking through Kalman filters. The last step of the proposed algorithm registers the passing vehicles and classifies them according to their areas. A self-sustainable system is proposed, powered by batteries and photovoltaic solar panels, with data transmission done through GPRS (General Packet Radio Service), eliminating the need for external cabling, which facilitates its deployment and relocation to any place where it can operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.
Keywords: intelligent transportation system, object detection, vehicle counting, vehicle classification, video processing
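A sketch of the zone-limiting and detection stages using OpenCV's Gaussian-mixture background subtractor with shadow removal and morphological clean-up; the video path, ROI, and area threshold are placeholders, and the tracking/classification stages are omitted.

```python
# Detection-stage sketch only: GMM-based background subtraction (MOG2),
# shadow suppression, morphological opening, and blob extraction.
import cv2

cap = cv2.VideoCapture("traffic.mp4")          # hypothetical input video
bgs = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[200:480, :]                    # step 1: limit the zone
    mask = bgs.apply(roi)                      # step 2: GMM-based BGS
    mask[mask == 127] = 0                      # drop pixels flagged as shadow
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Candidate vehicle blobs handed to the tracking/classification stages.
    blobs = [c for c in contours if cv2.contourArea(c) > 400]
cap.release()
```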
Procedia PDF Downloads 322

4314 Generalized Extreme Value Regression with Binary Dependent Variable: An Application for Predicting Meteorological Drought Probabilities
Authors: Retius Chifurira
Abstract:
The logistic regression model is the most widely used regression model to predict meteorological drought probabilities. When the dependent variable is extreme, the logistic model fails to adequately capture drought probabilities. In order to do so, we use the generalized linear model (GLM) with the quantile function of the generalized extreme value distribution (GEVD) as the link function. Maximum likelihood estimation is used to estimate the parameters of the generalized extreme value (GEV) regression model. We compare the performance of the logistic and GEV regression models in predicting drought probabilities for Zimbabwe. The performance of the regression models is assessed using goodness-of-fit tests, namely the relative root mean square error (RRMSE) and relative mean absolute error (RMAE). Results show that the GEV regression model performs better than the logistic model, thereby providing a good alternative candidate for predicting drought probabilities. This paper provides the first application of a GLM derived from extreme value theory to predict drought probabilities for a drought-prone country such as Zimbabwe.
Keywords: generalized extreme value distribution, general linear model, mean annual rainfall, meteorological drought probabilities
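A hedged sketch of one way to fit such a model: a binary GLM whose inverse link is the GEV CDF, estimated by maximum likelihood on synthetic data with scipy (note that scipy's genextreme uses its own sign convention for the shape parameter); this reflects my reading of the approach, not the authors' code.

```python
# Binary GEV-link regression sketch: P(y=1 | x) = F_GEV(x' beta; shape).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])  # intercept + covariate
true_p = genextreme.cdf(X @ np.array([-0.5, 1.0]), c=-0.2)
y = rng.binomial(1, true_p)                                # synthetic drought flags

def nll(theta):
    beta, xi = theta[:2], theta[2]
    p = np.clip(genextreme.cdf(X @ beta, c=xi), 1e-9, 1 - 1e-9)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

fit = minimize(nll, x0=[0.0, 0.5, -0.1], method="Nelder-Mead")
print(fit.x)  # estimated [beta0, beta1, shape]
```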
Procedia PDF Downloads 200

4313 Cloning and Expression of Azurin: A Protein Having Antitumor and Cell Penetrating Ability
Authors: Mohsina Akhter
Abstract:
Cancer has become a widespread disease around the globe and takes many lives every year. Different treatments are being practiced, but all have potential side effects and somewhat limited specificity towards target sites. Pseudomonas aeruginosa is known to secrete a protein, azurin, with a special anti-cancer function. It has a unique cell-penetrating peptide comprising 18 amino acids that can enter cancer cells specifically. The reported function of azurin is to stabilize p53 inside tumor cells and induce apoptosis through Bax-mediated cytochrome c release from mitochondria. At laboratory scale, we have produced recombinant azurin by cloning the rpTZ57R/T-azu vector into E. coli strain DH-5α and subcloning the rpET28-azu vector into E. coli BL21-CodonPlus (DE3). High expression was ensured by IPTG induction at different concentrations; expression was then optimized at a 1 mM concentration of IPTG for 5 hours. Purification was done using Ni+2 affinity chromatography. We conclude that azurin could be a remarkable improvement in cancer therapeutics if it is produced on a large scale. Azurin does not enter normal cells, so it should prove a safe and secure treatment for patients and prevent hazardous anomalies.
Keywords: azurin, pseudomonas aeruginosa, cancer, therapeutics
Procedia PDF Downloads 311

4312 Hybrid Gravity Gradient Inversion-Ant Colony Optimization Algorithm for Motion Planning of Mobile Robots
Authors: Meng Wu
Abstract:
Motion planning is a common task required to be fulfilled by robots. A strategy combining Ant Colony Optimization (ACO) and a gravity gradient inversion algorithm is proposed for motion planning of mobile robots. In this paper, in order to realize an optimal motion planning strategy, the cost function in ACO is designed based on the gravity gradient inversion algorithm. The obstacles around a mobile robot cause gravity gradient anomalies; a gradiometer is installed on the mobile robot to detect them. After obtaining the anomalies, the gravity gradient inversion algorithm is employed to calculate the relative distance and orientation between the mobile robot and the obstacles. The relative distance and orientation deduced from the inversion are then employed in the cost function of the ACO algorithm to realize motion planning. The proposed strategy is validated by simulation and experiment results.
Keywords: motion planning, gravity gradient inversion algorithm, ant colony optimization
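A toy sketch of the cost idea: the ACO transition weight for a candidate cell combines pheromone with a heuristic penalized by proximity to obstacles, where the obstacle distances would come from the gravity gradient inversion step (faked here with known positions).

```python
# Standard ACO transition rule tau^alpha * eta^beta, with the heuristic
# eta shaped by goal distance and (inverted) obstacle distance.
import numpy as np

obstacles = np.array([[3.0, 4.0], [6.0, 2.0]])   # assumed inversion output

def transition_weight(cell, goal, tau, alpha=1.0, beta=2.0):
    d_goal = np.linalg.norm(goal - cell)
    d_obs = np.min(np.linalg.norm(obstacles - cell, axis=1))
    eta = 1.0 / (d_goal + 1e-6) * min(d_obs, 1.0)  # desirability of the move
    return (tau ** alpha) * (eta ** beta)

w = transition_weight(np.array([2.0, 2.0]), np.array([9.0, 9.0]), tau=0.5)
```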
Procedia PDF Downloads 137

4311 Optimizing the Public Policy Information System under the Environment of E-Government
Authors: Qian Zaijian
Abstract:
E-government is one of the hot issues in current academic research on public policy and management. As the organic integration of information and communication technology (ICT) and public administration, e-government is one of the most important areas in the contemporary information society. The policy information system is a basic subsystem of the public policy system; its operation affects the overall effect of the policy process and can exert a direct impact on the operation of a public policy and its success or failure. The basic principle of its operation is information collection, processing, analysis, and release for a specific purpose. The function of e-government for the public policy information system lies in promoting public access to policy information resources, information transmission through e-participation, e-consultation in the process of policy analysis and processing, and electronic services for stored policy information, thereby promoting the optimization of policy information systems. However, due to many factors, the ability of e-government to promote policy information system optimization has practical limits. In building e-government in our country, we should adhere to the principle of freedom of information, eliminate the information divide (gap), expand e-consultation, and break down information silos, so as to promote the optimization of public policy information systems.
Keywords: China, e-consultation, e-democracy, e-government, e-participation, ICTs, public policy information systems
Procedia PDF Downloads 863

4310 Multiscale Syntheses of Knee Collateral Ligament Stresses: Aggregate Mechanics as a Function of Molecular Properties
Authors: Raouf Mbarki, Fadi Al Khatib, Malek Adouni
Abstract:
Knee collateral ligaments play a significant role in restraining excessive frontal motion (varus/valgus rotations). In this investigation, a multiscale framework was developed based on the structural hierarchies of the collateral ligaments, starting from the bottom (the tropocollagen molecule) up to the fiber-reinforced structure. Experimental data from failure tensile tests were considered the principal driver of the developed model. The model was calibrated statistically using Bayesian calibration due to the high number of unknown parameters. It was then scaled up to fit the real structure of the collateral ligaments and simulated under realistic boundary conditions. Predictions have been successful in describing the observed transient response of the collateral ligaments during tensile tests under pre- and post-damage loading conditions. Maximum stresses and strengths were observed near the femoral insertions, a result that is in good agreement with experimental investigations. Also, for the first time, damage initiation and propagation were documented with this model as a function of the cross-link density between tropocollagen molecules.
Keywords: multiscale model, tropocollagen, fibrils, ligaments
Procedia PDF Downloads 159

4309 Conventional and Computational Investigation of the Synthesized Organotin(IV) Complexes Derived from o-Vanillin and 3-Nitro-o-Phenylenediamine
Authors: Harminder Kaur, Manpreet Kaur, Akanksha Kapila, Reenu
Abstract:
A Schiff base with the general formula H₂L was derived from the condensation of o-vanillin and 3-nitro-o-phenylenediamine. This Schiff base was used for the synthesis of organotin(IV) complexes with the general formula R₂SnL [R = phenyl or n-octyl] using equimolar quantities. Elemental analysis, UV-Vis, FTIR, and multinuclear NMR spectroscopic techniques (¹H, ¹³C, and ¹¹⁹Sn) were carried out for the characterization of the synthesized complexes. These complexes were coloured and soluble in polar solvents. Computational studies have been performed to obtain the details of the geometry and electronic structures of the ligand as well as the complexes. The geometries of the ligands and complexes have been optimized at the Density Functional Theory level with B3LYP/6-311G(d,p) and B3LYP/MPW1PW91, respectively, followed by vibrational frequency analysis using Gaussian 09. The observed ¹¹⁹Sn NMR chemical shift of one of the synthesized complexes indicated tetrahedral geometry around the tin atom, which is also confirmed by DFT. The HOMO-LUMO energy distribution was calculated. FTIR, ¹H NMR, and ¹³C NMR spectra were also obtained theoretically using DFT. Further, IRC calculations were employed to determine the transition state for the reaction and to obtain theoretical information about the reaction pathway. Moreover, molecular docking studies can be explored to assess the anticancer activity of the newly synthesized organotin(IV) complexes.
Keywords: DFT, molecular docking, organotin(IV) complexes, o-vanillin, 3-nitro-o-phenylenediamine
Procedia PDF Downloads 159

4308 Modeling and Simulation of Organic Solar Cells Based on P3HT:PCBM using SCAPS 1-D (Influence of Defects and Temperature on the Performance of the Solar Cell)
Authors: Souhila Boukli Hacene, Djamila Kherbouche, Abdelhak Chikhaoui
Abstract:
In this work, we theoretically elucidate the effect of defects and temperature on the performance of the organic bulk heterojunction (BHJ) solar cell P3HT:PCBM and study the influence of their parameters on the cell characteristics. For this purpose, we used the effective medium model and the solar cell simulator SCAPS to model the characteristics of the solar cell. We also explore the transport of charge carriers in the device. It was assumed that the mixture is lightly p-type doped and that the band gap contains acceptor defects near the HOMO level with a Gaussian distribution of energy states of width 100 and 50 meV. We varied the defect density between 10¹² and 10¹⁷ cm⁻³; from 10¹⁶ cm⁻³ onward, a total decrease of the photovoltaic characteristics due to increased non-radiative recombination can be noticed. We then studied the effect of varying the electron and hole capture cross-sections on the cell's performance, and noticed that the cell achieves a better efficiency of about 3.6% for an electron capture cross-section ≤ 10⁻¹⁵ cm² and a hole capture cross-section ≤ 10⁻¹⁹ cm². On the other hand, we also varied the temperature between 120 K and 400 K and observed that the temperature induces a noticeable effect on the cell voltage, while its effect on the cell current is negligible.
Keywords: organic solar cell, P3HT:PCBM, defects, temperature, SCAPS
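A small sketch of the assumed defect model: a Gaussian density of acceptor states of total density Nt centered at an energy Et in the gap, with characteristic widths of 100 and 50 meV; all numbers below are placeholders, not the paper's inputs.

```python
# Gaussian defect density of states, as commonly parameterized in SCAPS.
import numpy as np

def gaussian_dos(E, Nt, Et, sigma):
    """Defect states per cm^3 per eV at energy E (all energies in eV)."""
    return Nt / (np.sqrt(2 * np.pi) * sigma) * np.exp(-(E - Et) ** 2
                                                      / (2 * sigma ** 2))

E = np.linspace(0.0, 1.1, 1000)                    # within the effective gap
dos = gaussian_dos(E, Nt=1e16, Et=0.3, sigma=0.1)  # Nt in cm^-3, width 100 meV
```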
Procedia PDF Downloads 91

4307 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysia Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and within the acceptable error bands demanded for the safety of the RTP. The current power tracking performance could be considered unsatisfactory, and there is still significant room for improvement. Hence, a new core power control design is very important to improve the tracking and regulation of reactor power by controlling the movement of control rods, suiting the demands of highly sensitive nuclear reactor power control. In this paper, a Model Predictive Control (MPC) law is applied to control the core power. The model for core power control was based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on a point kinetics model, thermal hydraulic models, and reactivity models. The proposed MPC was formulated on a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control with predictions based on a T-filter, towards real-time implementation of MPC on hardware. This paper introduces sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. Comparisons of both tracking and regulation performance between the conventional controller and TFMPC were made using MATLAB and analysed. In conclusion, the proposed TFMPC shows satisfactory performance in tracking and regulating core power for controlling a nuclear reactor with high reliability and safety.
Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC
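For orientation, a minimal unconstrained receding-horizon predictive controller on an assumed first-order discrete model standing in for the reactor-power transfer function; the paper's TFMPC with T-filter and sensitivity shaping is considerably more elaborate than this sketch.

```python
# Unconstrained MPC: predict over a horizon, solve a regularized
# least-squares tracking problem, apply only the first input move.
import numpy as np

a, b = 0.95, 0.05                  # y[k+1] = a*y[k] + b*u[k] (assumed model)
Np, lam = 20, 0.1                  # prediction horizon and move penalty

def mpc_move(y0, ref):
    # Free response from the current power level y0 over the horizon.
    f = np.array([a ** i * y0 for i in range(1, Np + 1)])
    # Forced-response matrix: effect of future inputs on future outputs.
    G = np.array([[b * a ** (i - j) if i >= j else 0.0
                   for j in range(1, Np + 1)]
                  for i in range(1, Np + 1)])
    u = np.linalg.solve(G.T @ G + lam * np.eye(Np), G.T @ (ref - f))
    return u[0]                    # receding horizon: apply the first move

u0 = mpc_move(y0=0.8, ref=np.ones(20))   # drive power toward the setpoint 1.0
```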
Procedia PDF Downloads 241

4306 Similarity Solutions of Nonlinear Stretched Biomagnetic Flow and Heat Transfer with Signum Function and Temperature Power Law Geometries
Authors: M. G. Murtaza, E. E. Tzirtzilakis, M. Ferdows
Abstract:
Biomagnetic fluid dynamics is an interdisciplinary field comprising engineering, medicine, and biology, directed towards finding and developing solutions to some human-body-related diseases and disorders. This article describes the flow and heat transfer of a two-dimensional, steady, laminar, viscous, and incompressible biomagnetic fluid over a non-linear stretching sheet in the presence of a magnetic dipole. Our model is consistent with blood as the fluid, namely biomagnetic fluid dynamics (BFD), and is based on the principles of ferrohydrodynamics (FHD). The temperature at the stretching surface is assumed to follow a power-law variation, and the stretching velocity is assumed to have a nonlinear form with the signum (sign) function. The governing boundary layer equations with boundary conditions are simplified to coupled higher-order equations using the usual transformations. Numerical solutions of the governing momentum and energy equations are obtained by efficient numerical techniques based on the common finite difference method with central differencing, a tridiagonal matrix manipulation, and an iterative procedure. Computations are performed for a wide range of the governing parameters, such as the magnetic field parameter and the power-law exponent temperature parameter, and the effect of these parameters on the velocity and temperature fields is presented. It is observed that for different values of the magnetic parameter, the velocity distribution decreases while the temperature distribution increases. Besides, the finite difference results for the skin-friction coefficient and rate of heat transfer are discussed. This study will have an important bearing on high targeting efficiency, for which a high magnetic field is required in the targeted body compartment.
Keywords: biomagnetic fluid, FHD, MHD, nonlinear stretching sheet
Procedia PDF Downloads 161

4305 Maintenance Performance Measurement Derived Optimization: A Case Study
Authors: James M. Wakiru, Liliane Pintelon, Peter Muchiri, Stanley Mburu
Abstract:
Maintenance performance measurement (MPM) is an integrated aspect that considers both operational and maintenance-related aspects while evaluating the effectiveness and efficiency of maintenance, to ensure assets are working as they should. Three salient issues must be addressed for an asset-intensive organization to employ an MPM-based framework to optimize maintenance. Firstly, the organization should establish the important performance metric(s), in this case the maintenance objective(s), on which it will focus. The second issue entails aligning the maintenance objective(s) with maintenance optimization; this is achieved by deriving maintenance performance indicators that subsequently form an objective function for the optimization program. Lastly, the objective function is employed in an optimization program to derive maintenance decision support. In this study, we develop a framework that first identifies the crucial maintenance performance measures and then employs them to derive maintenance decision support. The proposed framework is demonstrated in a case study of a geothermal drilling rig, where the objective function is evaluated using a simulation-based model whose parameters are derived from empirical maintenance data. Availability, reliability, and maintenance inventory are identified as essential objectives requiring further attention. A simulation model is developed mimicking drilling rig operations and maintenance, where the sub-systems are modelled undergoing imperfect corrective (CM) and preventive (PM) maintenance, with the total cost as the primary performance measure. Moreover, three maintenance spare inventory policies are considered: classical (retaining stocks for a contractual period), vendor-managed inventory with consignment stock, and a periodic-monitoring order-up-to-stock (s, S) policy. Optimization results indicate that the adoption of the (s, S) inventory policy, an increased PM interval, and reduced reliance on CM actions offer improved availability and reduced total costs.
Keywords: maintenance, vendor-managed, decision support, performance, optimization
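A bare-bones sketch of the (s, S) spare-part policy inside a maintenance simulation: at each periodic review, order up to S whenever the stock position is at or below s. The demand stream, zero lead time, and horizon are illustrative simplifications, not the case-study settings.

```python
# (s, S) periodic-review inventory sketch with synthetic spare demand.
import numpy as np

rng = np.random.default_rng(2)
s, S, stock, backorders = 4, 12, 12, 0
for week in range(52):
    demand = rng.poisson(1.5)          # CM/PM spare-part consumption
    served = min(stock, demand)
    stock -= served
    backorders += demand - served      # unmet demand drives downtime cost
    if stock <= s:                     # review: order up to S (zero lead time)
        stock = S
print(stock, backorders)
```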
Procedia PDF Downloads 125

4304 On the Grid Technique by Approximating the Derivatives of the Solution of the Dirichlet Problems for (1+1) Dimensional Linear Schrodinger Equation
Authors: Lawrence A. Farinola
Abstract:
Four-point implicit schemes for the approximation of the first and pure second order derivatives of the solution of the Dirichlet problem for the one-dimensional Schrodinger equation with respect to the time variable t are constructed. Also, special four-point implicit difference boundary value problems are proposed for the first and pure second derivatives of the solution with respect to the spatial variable x. The grid method is also applied to the mixed second derivative of the solution of the linear time-dependent Schrodinger equation. It is assumed that the initial function belongs to the Holder space C⁸⁺ᵃ, 0 < α < 1, the Schrodinger wave function given in the Schrodinger equation is from the Holder space Cₓ,ₜ⁶⁺ᵃ, ³⁺ᵃ/², the boundary functions are from C⁴⁺ᵃ, and between the initial and the boundary functions the conjugation conditions of orders q = 0, 1, 2, 3, 4 are satisfied. It is proven that the solution of the proposed difference schemes converges uniformly on the grids at the rate O(h² + k), where h is the step size in x and k is the step size in time. Numerical experiments are presented to support the analysis.
Keywords: approximation of derivatives, finite difference method, Schrödinger equation, uniform error
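For orientation, a standard Crank-Nicolson implicit step for the 1D linear Schrodinger equation i u_t = -u_xx with homogeneous Dirichlet boundaries; the paper's four-point schemes for the derivatives of the solution differ in detail, so this is context rather than a reproduction.

```python
# Crank-Nicolson: (I - r*D) u^{n+1} = (I + r*D) u^n, D = discrete Laplacian.
import numpy as np

J, h, k = 200, 0.01, 0.0001
x = np.arange(1, J) * h                               # interior grid points
u = np.exp(-100 * (x - 1.0) ** 2) * np.exp(40j * x)   # initial wave packet

r = 1j * k / (2 * h ** 2)
main = (1 + 2 * r) * np.ones(J - 1)
off = -r * np.ones(J - 2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
B = np.conj(A)                     # (1 - 2r) on the diagonal, +r off-diagonal
for _ in range(100):               # march 100 time steps of size k
    u = np.linalg.solve(A, B @ u)
```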
Procedia PDF Downloads 120

4303 Evaluation of the Impact of Neuropathic Pain on the Quality of Life of Patients
Authors: A. Ibovi Mouondayi, S. Zaher, R. Assadi, K. Erraoui, S. Sboul, J. Daoudim, S. Bousselham, K. Nassar, S. Janani
Abstract:
Introduction: Neuropathic pain (NP) is a chronic pain that can be observed in a large number of clinical situations. It results from a lesion of the peripheral or central nervous system and is a frequent reason for consultations in rheumatology. Being chronic, this pain can become disabling for the patient, thereby altering his quality of life. Objective: The objective of this study was to evaluate the impact of neuropathic pain on the quality of life of patients followed up for chronic neuropathic pain. Material and Method: This is a monocentric, cross-sectional, descriptive, retrospective study conducted in our department over a period of 19 months, from October 2020 to April 2022. Missing parameters were collected during phone calls to the patients concerned. The diagnostic tool adopted was the DN4 questionnaire in its dialectal Arabic version. The impact of NP was assessed by the visual analog scale (VAS) for pain, sleep, and function. The impact of NP on mood was assessed by the Hospital Anxiety and Depression (HAD) score in its validated Arabic version. The exclusion criteria were patients followed up for depression and other psychiatric pathologies. Results: Data from a total of 1528 patients were collected; the average age of the patients was 57 years (standard deviation: 13 years), with extremes ranging from 17 to 94 years; 91% were women and 9% men, with a man/woman sex ratio of 0.10. 67% of our patients were married, and 63% were housewives. 43% of patients were followed up for degenerative pathology. The NP was cervical radiculopathy in 26%, lumbosacral radiculopathy in 51%, and carpal tunnel syndrome in 20%. 23% of our patients had poor sleep quality, and 54% had average sleep quality. The pain was very intense in 5% of patients; 33% had severe pain, and 58% had moderate pain. Function was limited in 55% of patients. The average HAD scores for anxiety and depression were 4.39 (standard deviation: 2.77) and 3.21 (standard deviation: 2.89), respectively. Conclusion: Our data clearly illustrate that neuropathic pain has a negative impact on the quality of sleep, function, and mood of patients, thus influencing their quality of life.
Keywords: neuropathic pain, sleep, quality of life, chronic pain
Procedia PDF Downloads 131

4302 Systematic Identification and Quantification of Substrate Specificity Determinants in Human Protein Kinases
Authors: Manuel A. Alonso-Tarajano, Roberto Mosca, Patrick Aloy
Abstract:
Protein kinases participate in a myriad of cellular processes of major biomedical interest. The in vivo substrate specificity of these enzymes is determined by several factors and, despite several years of research on the topic, is still far from being totally understood. In the present work, we have quantified the contributions to kinase substrate specificity of (i) the phosphorylation sites and their surrounding residues in the sequence and (ii) the association of kinases with adaptor or scaffold proteins. We have used position-specific scoring matrices (PSSMs) to represent the stretches of sequence phosphorylated by 93 families of kinases. We have found negative correlations between the number of sequences from which a PSSM is generated and the statistical significance and performance of that PSSM. Using a subset of 22 statistically significant PSSMs, we have identified specificity determinant residues (SDRs) for 86% of the corresponding kinase families. Our results suggest that different SDRs can function as positive or negative elements of substrate recognition by the different families of kinases. Additionally, we have found that human proteins with known function as adaptors or scaffolds (kAS) tend to interact with a significantly large fraction of the substrates of the kinases to which they associate. Based on this characteristic, we have identified a set of 279 potential adaptors/scaffolds (pAS) for human kinases, which is enriched in Pfam domains and functional terms tightly related to the proposed function. Moreover, our results show that for 74.6% of the kinase-pAS associations found, the pAS colocalizes with the substrates of the associated kinase. Finally, we have found evidence suggesting that the association of kinases with adaptors and scaffolds may contribute significantly to diminishing the in vivo substrate cross-specificity of protein kinases. In general, our results indicate the relevance of several SDRs for both the positive and negative selection of phosphorylation sites by kinase families and suggest that the association of kinases with pAS proteins may be an important factor in the localization of the enzymes with their set of substrates.
Keywords: kinase, phosphorylation, substrate specificity, adaptors, scaffolds, cellular colocalization
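A toy sketch of how a phosphosite window is scored against a kinase-family PSSM (sum of per-position log-odds); the matrix values and the example window below are made up for illustration.

```python
# PSSM scoring sketch: a 15-residue window around the phosphosite is
# scored position by position against a kinase-family matrix.
import numpy as np

alphabet = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(3)
pssm = rng.normal(size=(15, 20))           # 15-residue window, 20 amino acids

def score(window):
    assert len(window) == pssm.shape[0]
    return sum(pssm[i, alphabet.index(aa)] for i, aa in enumerate(window))

print(score("ARRRSSPLESDFTYK"))            # hypothetical +/-7 site window
```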
Procedia PDF Downloads 343

4301 The Digital Living Archive and the Construction of a Participatory Cultural Memory in the DARE-UIA Project: Digital Environment for Collaborative Alliances to Regenerate Urban Ecosystems in Middle-Sized Cities
Authors: Giulia Cardoni, Francesca Fabbrii
Abstract:
Living archives perform a function of social memory sharing, which contributes to building social bonds, communities, and identities. This potential lies in the ability of living archives to combine an archival function, allowing the conservation and transmission of memory, with an artistic, performative, and creative function linked to the present. As part of the DARE-UIA (Digital environment for collaborative alliances to regenerate urban ecosystems in middle-sized cities) project, the creation of a living digital archive made it possible to build a narrative consolidating the cultural memory of the Darsena district of the city of Ravenna. The aim of the project is to stimulate the urban regeneration of a suburban area of a city, enhancing its cultural memory and identity heritage through digital heritage tools. The methodology involves various digital storytelling actions necessary for the overall narrative, using georeferencing systems (GIS), storymaps, and 3D reconstructions for a transversal narration of historical content, such as personal and institutional historical photos, and to enhance the industrial archaeology heritage of the neighborhood. The aim is the creation of an interactive narrative replicable in contexts similar to the Darsena district in Ravenna. The living archive, in which all the digital content is inserted, manifests itself to the outside in the form of a museum spread throughout the neighborhood, making the content usable on smartphones via QR codes and on-site totems, creating thematic itineraries across the neighborhood. The construction of an interactive and engaging digital narrative has made it possible to enhance the material and immaterial heritage of the neighborhood by recreating the community that has historically always distinguished it.
Keywords: digital living archive, digital storytelling, GIS, 3D, open-air museum, urban regeneration, cultural memory
Procedia PDF Downloads 106

4300 Composing Method of Decision-Making Function for Construction Management Using Active 4D/5D/6D Objects
Authors: Hyeon-Seung Kim, Sang-Mi Park, Sun-Ju Han, Leen-Seok Kang
Abstract:
As BIM (Building Information Modeling) application continually expands, the visual simulation techniques used for facility design and construction process information are becoming increasingly advanced and diverse. For building structures, BIM application is design-oriented, utilizing 3D objects for conflict management, whereas for civil engineering structures, the usability of nD object-oriented construction stage simulation is important in construction management. Simulations of 5D and 6D objects, in which cost and resources are linked along with the process simulation in 4D objects, are commonly used, but they do not provide a decision-making function for process management problems that occur on site because they mostly focus on the visual representation of the current status of process information. In this study, an nD CAD system is constructed that facilitates an optimized schedule simulation minimizing process conflict, a construction duration reduction simulation according to execution progress status, an optimized process plan simulation according to project cost changes by year, and an optimized resource simulation for field resource mobilization capability. Through this system, the usability of conventional simple simulation objects is expanded to that of active simulation objects with which decision-making is possible. Furthermore, to close the gap between field process situations and planned 4D process objects, a technique is developed to facilitate a comparative simulation through the coordinated synchronization of an actual video object acquired by an on-site web camera and a VR-concept 4D object. This synchronization and simulation technique can also be applied to smartphone video objects captured in the field in order to increase the usability of the 4D object. Because yearly project costs change frequently in civil engineering construction, an annual process plan should be recomposed appropriately according to project cost decreases/increases relative to the plan. In the 5D CAD system provided in this study, an active 5D object utilization concept is introduced to perform a simulation in an optimized process planning state, by finding a process optimized for the changed project cost without changing the construction duration, through a technique such as a genetic algorithm. Furthermore, in resource management, an active 6D object utilization function is introduced that can analyze and simulate an optimized process plan within the possible scope of moving resources, by considering those resources that can be moved under a given field condition, instead of a simple resource change simulation by schedule. The introduction of an active BIM function is expected to increase the field utilization of conventional nD objects.
Keywords: 4D, 5D, 6D, active BIM
Procedia PDF Downloads 276

4299 Bright, Dark N-Soliton Solution of Fokas-Lenells Equation Using Hirota Bilinearization Method
Authors: Sagardeep Talukdar, Riki Dutta, Gautam Kumar Saharia, Sudipta Nandy
Abstract:
In nonlinear optics, the Fokas-Lenells equation (FLE) is a well-known integrable equation that describes how ultrashort pulses move across an optical fiber. Like any other integrable equation, it admits localized wave solutions. We apply the Hirota bilinearization method to obtain soliton solutions of the FLE; the proposed bilinearization makes use of an auxiliary function. We apply the method to the FLE with a vanishing boundary condition to obtain bright soliton solutions: we have obtained bright 1-soliton and 2-soliton solutions and propose a scheme for obtaining an N-soliton solution. We have used an additional parameter that is responsible for the shift in the position of the soliton. Further analysis of the 2-soliton solution is done by asymptotic analysis. Under a non-vanishing boundary condition, we obtain the dark 1-soliton solution. We find that the suggested bilinearization approach, which makes use of the auxiliary function, greatly simplifies the process while still producing the desired outcome. We think that the present analysis will be helpful in understanding the use of the FLE in nonlinear optics and other areas of physics.
Keywords: asymptotic analysis, Fokas-Lenells equation, Hirota bilinearization method, soliton
Procedia PDF Downloads 112

4298 Cultural Transformation in Interior Design in Commercial Space in India
Authors: Siddhi Pedamkar, Reenu Singh
Abstract:
This report examines how culture transforms from one era to another in commercial space. This transformation is observed in commercial as well as residential spaces. The spaces have specific color concepts, surface detailing, furniture, and function-specific layouts, but the cultural impact is very rarely seen in commercial spaces, mostly because the interior is driven by function to a large extent. Information was collected from books and research papers. A quantitative survey was conducted to understand people's perceptions of the impact of culture on design entities and how culture dictates the different types of space and their character. The survey also highlights the impact of types of interior lighting, colour schemes, and furniture types on the interior environment. The questionnaire survey helped in framing design parameters for contemporary interior design. These design parameters are used to propose design options for new-age furniture that can be used in co-working spaces. For new and contemporary working spaces, new-age furniture design and interior elements such as visual partitions, semi-visual partitions, lighting, and layout can be transformed by cultural changes in the working style of people and organizations.
Keywords: commercial space, culture, environment, furniture, interior
Procedia PDF Downloads 117

4297 Revalidation and Harmonization of Existing IFCC Standardized Hepatic, Cardiac, and Thyroid Function Tests by Precision Optimization and External Quality Assurance Programs
Authors: Junaid Mahmood Alam
Abstract:
Revalidating and harmonizing clinical chemistry analytical principles and optimizing methods through quality control programs and assessments is the preeminent means of attaining optimal outcomes within clinical laboratory services. The present study reports the revalidation of our existing IFCC-standardized analytical methods, particularly hepatic and thyroid function tests, by optimization of precision analyses and processing through external and internal quality assessments and regression determination. Parametric components of hepatic (bilirubin, ALT, γGT, ALP), cardiac (LDH, AST, Trop I), and thyroid/pituitary (T3, T4, TSH, FT3, FT4) function tests were used to validate analytical techniques on automated chemistry and immunological analyzers, namely the Hitachi 912, Cobas 6000 e601, Cobas c501, and Cobas e411, with UV kinetic and colorimetric dry chemistry principles and electro-chemiluminescence immunoassay (ECLi) techniques. The process of validation and revalidation was completed by evaluating and assessing precision-analyzed Preci-control data from the various instruments, plotted against each other with regression analyses (R²). Results showed that revalidation and optimization of the respective parameters, accredited through CAP, CLSI, and NEQAPP assessments, depicted 99.0% to 99.8% optimization, in addition to the methodology and instruments used for the analyses. The regression R² for BilT was 0.996, whereas ALT, ALP, γGT, LDH, AST, Trop I, T3, T4, TSH, FT3, and FT4 exhibited R² values of 0.998, 0.997, 0.993, 0.967, 0.970, 0.980, 0.976, 0.996, 0.997, 0.997, and 0.990, respectively. This confirmed marked harmonization of analytical methods and instrumentation, thus revalidating optimized precision standardization as per IFCC-recommended guidelines. It is concluded that the practice of revalidating and harmonizing existing or new services should be followed by all clinical laboratories, especially those associated with tertiary care hospitals. This will ensure the deliverance of standardized, proficiency-tested, optimized services for prompt and better patient care, guaranteeing maximum patient confidence.
Keywords: revalidation, standardized, IFCC, CAP, harmonized
Procedia PDF Downloads 269

4296 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model
Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis
Abstract:
In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value, and no assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures; this case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
Keywords: cause of failure, linear degradation path, reliability function, expectation-maximization algorithm, intensity, masked data
Procedia PDF Downloads 333

4295 Dynamic Distribution Calibration for Improved Few-Shot Image Classification
Authors: Majid Habib Khan, Jinwei Zhao, Xinhong Hei, Liu Jiedong, Rana Shahzad Noor, Muhammad Imran
Abstract:
Deep learning is increasingly employed in image classification, yet the scarcity and high cost of labeled training data remain a challenge. Limited samples often lead to overfitting due to biased sample distributions. This paper introduces a dynamic distribution calibration method for few-shot learning. Initially, base- and new-class samples undergo normalization to mitigate disparate feature magnitudes, and a pre-trained model extracts feature vectors from both classes. The method then dynamically selects distribution characteristics from base classes (both adjacent and remote) in the embedding space, using a threshold-value approach for new-class samples. Given the propensity of similar classes to share feature distributions such as mean and variance, this research assumes a Gaussian distribution for the feature vectors. Subsequently, the distributional features of new-class samples are calibrated using a corrected hyperparameter derived from the distribution features of both adjacent and distant base classes. This calibration augments the new-class sample set. The technique demonstrates significant improvements, with up to 4% accuracy gains in few-shot classification challenges, as evidenced by tests on the miniImagenet and CUB datasets.
Keywords: deep learning, computer vision, image classification, few-shot learning, threshold
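A sketch of the calibration idea as I read it: shift a new-class sample's Gaussian statistics toward those of selected base classes and sample additional features from the calibrated distribution; the selection rule, the corrected hyperparameter alpha, and all shapes below are illustrative assumptions.

```python
# Distribution-calibration sketch with diagonal Gaussians in feature space.
import numpy as np

def calibrate(x, base_means, base_covs, k=2, alpha=0.2):
    d = np.linalg.norm(base_means - x, axis=1)
    near = np.argsort(d)[:k]                   # adjacent base classes
    mu = (base_means[near].sum(0) + x) / (k + 1)
    cov = base_covs[near].mean(0) + alpha      # "corrected hyperparameter"
    return mu, cov

rng = np.random.default_rng(4)
base_means = rng.normal(size=(10, 64))         # per-base-class feature means
base_covs = np.abs(rng.normal(size=(10, 64)))  # diagonal covariances
mu, cov = calibrate(rng.normal(size=64), base_means, base_covs)
augmented = rng.normal(mu, np.sqrt(cov), size=(50, 64))  # extra samples
```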
Procedia PDF Downloads 66

4294 Haemobiogram after Intramuscular Administration of Amoxicillin to Sheep
Authors: Amer Elgerwi, Abdelrazzag El-Magdoub, Abubakr El-Mahmoudy
Abstract:
There are many bacterial infections affecting sheep that necessitate antibiotic intervention. Amoxicillin is among the antibiotics commonly used in such cases for its broad spectrum of activity. However, the side alterations in blood and organ function that may occur during or after treatment are in question. Therefore, the aim of the present study was to assess the possible alterations in blood parameters and organ function biomarkers of sheep following intramuscular injection of amoxicillin. Amoxicillin was administered intramuscularly to 10 sheep at a dosage regimen of 7 mg/kg of body weight for 5 successive days. Two types of blood samples (with and without anticoagulant) were collected from the jugular vein pre- and post-administration of the drug. Amoxicillin significantly (P < 0.001) increased the total leukocyte count and (P < 0.05) the absolute eosinophil count compared with those of the control samples. Aspartate aminotransferase, alkaline phosphatase, and cholesterol were significantly (P < 0.05) higher than the corresponding control values. In addition, amoxicillin significantly (P < 0.05) increased blood urea nitrogen and creatinine but decreased the phosphorus level compared with those of the prior-administration samples. These data indicate that although the side changes caused by amoxicillin in sheep are minor, liver and kidney functions should be monitored during its use in therapy, and it should be used with care for the treatment of sheep with renal and/or hepatic impairment.
Keywords: amoxicillin, biogram, haemogram, sheep
Procedia PDF Downloads 458

4293 Application of a Hybrid QFD-FEA Methodology for Nigerian Garment Designs
Authors: Adepeju A. Opaleye, Adekunle Kolawole, Muyiwa A. Opaleye
Abstract:
Consumers' perception of the quality of imported products has been an impediment to business in the Nigerian garment industry. To improve patronage of made-in-Nigeria designs, the first step is to understand what the consumer expects, then proffer ways to meet this expectation through product redesign or improvement of the garment production process. The purpose of this study is to investigate the drivers of consumers' value for typical Nigerian garment design (NGD). An integrated quality function deployment (QFD) and functional, expressive, and aesthetic (FEA) consumer needs methodology helps to minimize incorrect understanding of potential consumers' requirements in mass-customized garments. Six themes emerged as drivers of consumer satisfaction: (1) style variety, (2) dimensions, (3) finishing, (4) fabric quality, (5) garment durability, and (6) aesthetics. Existing designs were found to lead foreign designs in acceptance for informal events, style variety, and fit; the latter may be linked to their mode of acquisition. A conceptual model of NGD acceptance in the context of consumers' inherent characteristics and the social and business environments is proposed.
Keywords: perceived quality, garment design, quality function deployment, FEA model, mass customisation
Procedia PDF Downloads 137