Search results for: reference model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17987

17837 Evaluation of a Piecewise Linear Mixed-Effects Model in the Analysis of Randomized Cross-over Trial

Authors: Moses Mwangi, Geert Verbeke, Geert Molenberghs

Abstract:

Cross-over designs are commonly used in randomized clinical trials to estimate the efficacy of a new treatment with respect to a reference treatment (placebo or standard). The main advantage of a cross-over design over a conventional parallel design is its flexibility: every subject becomes his or her own control, thereby reducing confounding effects. Jones & Kenward discuss more recent developments in the analysis of cross-over trials in detail. We revisit the simple piecewise linear mixed-effects model proposed by Mwangi et al. (in press) for its first application in the analysis of cross-over trials. We compared the performance of the proposed piecewise linear mixed-effects model with two commonly cited statistical models used to estimate the treatment effect in randomized cross-over trials, namely (1) the Grizzle model and (2) the Jones & Kenward model. We estimated two performance measures (mean square error (MSE) and coverage probability) for the three methods, using data simulated from the proposed piecewise linear mixed-effects model. The piecewise linear mixed-effects model yielded the lowest MSE estimates compared to the Grizzle and Jones & Kenward models for both small (Nobs=20) and large (Nobs=600) sample sizes. Its coverage probability was highest compared to the Grizzle and Jones & Kenward models for both small and large sample sizes. A piecewise linear mixed-effects model is therefore a better estimator of the treatment effect than its two competitors (the Grizzle and Jones & Kenward models) in the analysis of cross-over trials. The data-generating mechanism used in this paper captures two time periods for a simple 2-Treatments x 2-Periods cross-over design. Its application can be extended to more complex cross-over designs with multiple treatments and periods. In addition, it is important to note that, even for single-response models, adding more random effects increases the complexity of the model and may thus make it difficult or impossible to fit in some cases.
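
As an illustration of the two performance measures, the following minimal sketch (not the authors' code) simulates a 2x2 cross-over trial with random subject effects and computes the MSE and 95% coverage probability of a simple within-subject estimator of the treatment effect; the effect size, variance components, and estimator are illustrative assumptions.

```python
# Minimal sketch (not the authors' simulation): Monte Carlo estimate of MSE and
# coverage probability for a treatment-effect estimator in a 2x2 cross-over.
# Effect size and variance components below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_rep = 20, 1000                 # subjects per trial, simulation replications
true_effect, sd_subj, sd_err = 1.0, 1.5, 0.8

mse_sum, cover = 0.0, 0
for _ in range(n_rep):
    subj = rng.normal(0.0, sd_subj, n_subj)                 # random subject effects
    seq = rng.permutation(np.repeat([0, 1], n_subj // 2))   # 0: AB, 1: BA sequence
    # responses in period 1 and 2 (the treatment adds true_effect)
    y1 = subj + true_effect * (seq == 1) + rng.normal(0, sd_err, n_subj)
    y2 = subj + true_effect * (seq == 0) + rng.normal(0, sd_err, n_subj)
    d = y2 - y1                          # within-subject difference removes the subject effect
    est = (d[seq == 0].mean() - d[seq == 1].mean()) / 2.0
    se = 0.5 * np.sqrt(d[seq == 0].var(ddof=1) / (seq == 0).sum()
                       + d[seq == 1].var(ddof=1) / (seq == 1).sum())
    mse_sum += (est - true_effect) ** 2
    cover += (est - 1.96 * se <= true_effect <= est + 1.96 * se)

print(f"MSE = {mse_sum / n_rep:.4f}, coverage = {cover / n_rep:.3f}")
```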

Keywords: Evaluation, Grizzle model, Jones & Kenward model, Performance measures, Simulation

Procedia PDF Downloads 92
17836 Design and Development of Real-Time Optimal Energy Management System for Hybrid Electric Vehicles

Authors: Masood Roohi, Amir Taghavipour

Abstract:

This paper describes a strategy to develop an energy management system (EMS) for a charge-sustaining power-split hybrid electric vehicle. This kind of hybrid electric vehicle (HEV) benefits from the advantages of both the parallel and the series architecture. However, it becomes relatively more complicated to manage the power flow between the battery and the engine optimally. The strategy applied in this paper is based on a nonlinear model predictive control approach. First, an appropriate control-oriented model that was sufficiently accurate yet simple was derived. To make the controller usable in real time, the problem was solved off-line for a wide range of reference signals and initial conditions, and the computed manipulated variables were stored in look-up tables. Look-up tables require little memory, and the computational load decreases dramatically because the controller only needs a simple interpolation between tables to find the required manipulated variables.
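
The off-line-solve-then-interpolate idea can be illustrated with the following minimal sketch; the grid variables, the placeholder off-line solver, and the operating point are assumptions for illustration, not the authors' controller.

```python
# Minimal sketch (assumed variable names, not the authors' controller): storing
# off-line NMPC solutions in a look-up table and reading them back at run time
# with a cheap interpolation instead of solving the optimization on-line.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# grid of operating conditions (battery state of charge, driver power demand)
soc_grid = np.linspace(0.3, 0.9, 13)          # [-]
pdem_grid = np.linspace(0.0, 60.0, 31)        # [kW]

def solve_nmpc_offline(soc, pdem):
    """Placeholder for the off-line optimal solution (engine power command)."""
    return np.clip(pdem * (0.6 - soc) + 0.5 * pdem, 0.0, 60.0)

# off-line phase: fill the table once
table = np.array([[solve_nmpc_offline(s, p) for p in pdem_grid] for s in soc_grid])
lookup = RegularGridInterpolator((soc_grid, pdem_grid), table)

# on-line phase: a single interpolation per control step
p_engine = lookup([[0.62, 23.5]])[0]
print(f"engine power command: {p_engine:.2f} kW")
```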

Keywords: hybrid electric vehicles, energy management system, nonlinear model predictive control, real-time

Procedia PDF Downloads 315
17835 Mathematical Model to Quantify the Phenomenon of Democracy

Authors: Mechlouch Ridha Fethi

Abstract:

This paper presents a recent mathematical model in political science concerning democracy. The model is represented by a logarithmic equation linking the Relative Index of Democracy (RID) to the Participation Ratio (PR). First, the meanings of the different parameters of the model are presented, and the variation curve of the RID as a function of PR, with its different critical areas, is discussed. Second, the model is applied to a virtual group, where we show that the model can be applied by gender. Third, it is observed that the model can be extended to different language models of democracy and may be of some use to international organizations such as the UN in assessing the state of democracy.

Keywords: democracy, mathematic, modelization, quantification

Procedia PDF Downloads 327
17834 Construction of Submerged Aquatic Vegetation Index through Global Sensitivity Analysis of Radiative Transfer Model

Authors: Guanhua Zhou, Zhongqi Ma

Abstract:

Submerged aquatic vegetation (SAV) in wetlands can absorb nitrogen and phosphorus effectively and thus prevent the eutrophication of water. It is feasible to monitor the distribution of SAV through remote sensing, but because the vegetation signal is weakened by the water body, traditional terrestrial vegetation indices are not applicable. This paper aims at constructing an SAV index that enhances the vegetation signal and distinguishes SAV from the water body. The methodology is as follows: (1) select the bands sensitive to the vegetation parameters based on a global sensitivity analysis of an SAV canopy radiative transfer model; (2) taking the soil line concept as a reference, analyze the distribution of SAV and water reflectance simulated by the SAV canopy model and a semi-analytical water model in the two-dimensional spaces built from different sensitive bands; (3) select the band combinations that separate SAV from water best, and use them to build SAV indices in the form of the normalized difference vegetation index (NDVI); (4) analyze the sensitivity of the indices to the water and vegetation parameters, and choose the one more sensitive to the vegetation parameters. It is shown that the index formed from the bands with central wavelengths of 705 nm and 842 nm has high sensitivity to leaf chlorophyll content while being little affected by water constituents. The model simulation shows that the SAV index has a weak, generally negative correlation with increasing water depth. Moreover, the index separates SAV from water better than NDVI. The SAV index is expected to have potential in parameter inversion for wetland remote sensing.
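
The reported band pair can be turned into an index of the NDVI form as in the following minimal sketch; the function name and the sample reflectance values are assumptions for illustration.

```python
# Minimal sketch (assumed band names): a normalized-difference SAV index built
# from reflectance at ~705 nm and ~842 nm, the band pair the abstract reports
# as most sensitive to leaf chlorophyll and least affected by the water column.
import numpy as np

def sav_index(r705, r842):
    """Normalized-difference index: (R842 - R705) / (R842 + R705)."""
    r705 = np.asarray(r705, dtype=float)
    r842 = np.asarray(r842, dtype=float)
    return (r842 - r705) / (r842 + r705 + 1e-12)   # small epsilon avoids division by zero

# illustrative reflectances: a submerged-vegetation pixel vs. an open-water pixel
print(sav_index([0.04, 0.02], [0.09, 0.015]))      # higher value for the SAV pixel
```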

Keywords: global sensitivity analysis, radiative transfer model, submerged aquatic vegetation, vegetation indices

Procedia PDF Downloads 228
17833 The Achievement Model of University Social Responsibility

Authors: Le Kang

Abstract:

On the research question of 'how to achieve USR', this contribution reflects on the concept of university social responsibility, identifies three achievement models of USR (the society-diversified model, the university-cooperation model, and the government-compound model), and conducts a case study to explore the characteristics of the Chinese achievement model of USR. The contribution concludes with a discussion of how the university, government, and society balance demands and roles and make the necessary strategic adjustments and innovative approaches to remedy the shortcomings of each achievement model.

Keywords: modern university, USR, achievement model, compound model

Procedia PDF Downloads 717
17832 Managing of Work Risk in Small and Medium-Size Companies

Authors: Janusz K. Grabara, Bartłomiej Okwiet, Sebastian Kot

Abstract:

The purpose of the article is the presentation and analysis of job safety in small and medium-sized enterprises in Poland with reference to other EU countries. We present the theoretical aspects of risk with reference to managing small and medium-sized enterprises, and then risk management in small and medium-sized enterprises in Poland, which was subjected to a detailed analysis. We show in detail the risks associated with the operation of the above-mentioned companies and analyse their levels at various stages and for different kinds of activity.

Keywords: job safety, SME, work risk, risk management

Procedia PDF Downloads 463
17831 Probing Language Models for Multiple Linguistic Information

Authors: Bowen Ding, Yihao Kuang

Abstract:

In recent years, large-scale pre-trained language models have achieved state-of-the-art performance on a variety of natural language processing tasks. The word vectors produced by these language models can be viewed as dense encoded representations of natural language in text form. However, it is unknown how much linguistic information is encoded and how. In this paper, we construct probing tasks for several types of linguistic information to clarify the encoding capabilities of different language models, and we visualize the results. We first obtain word representations in vector form from different language models, including BERT, ELMo, RoBERTa, and GPT. Classifiers with a small number of parameters, together with unsupervised tasks, are then applied to these word vectors to assess their capability to encode the corresponding linguistic information. The constructed probing tasks cover both semantic and syntactic aspects. The semantic aspect includes the ability of the model to understand semantic entities such as numbers, time, and characters, while the syntactic aspect includes the ability of the language model to understand grammatical structures such as dependency relationships and reference relationships. We also compare the encoding capabilities of different layers within the same language model to infer how linguistic information is encoded in the model.
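
A probing classifier of the kind described can be sketched as follows; the random vectors stand in for frozen contextual embeddings and the labels for a binary linguistic property, so the setup is illustrative rather than the paper's actual experiments.

```python
# Minimal sketch (toy data, not the paper's setup): a linear probing classifier
# trained on frozen word vectors to test whether a given linguistic property
# (e.g. "token is a number") is linearly decodable from the representation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
dim, n = 768, 2000                       # e.g. BERT-base hidden size
labels = rng.integers(0, 2, n)           # the linguistic property to probe
# stand-in for contextual embeddings; a real probe would use BERT/ELMo/RoBERTa/GPT outputs
vectors = rng.normal(size=(n, dim)) + labels[:, None] * 0.05

X_tr, X_te, y_tr, y_te = train_test_split(vectors, labels, test_size=0.3, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # small-capacity probe
print(f"probing accuracy: {probe.score(X_te, y_te):.3f}")   # chance level is 0.5
```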

Keywords: language models, probing task, text presentation, linguistic information

Procedia PDF Downloads 67
17830 Analysis of the Annual Proficiency Testing Procedure for Intermediate Reference Laboratories Conducted by the National Reference Laboratory from 2013 to 2017

Authors: Reena K., Mamatha H. G., Somshekarayya, P. Kumar

Abstract:

Objectives: The annual proficiency testing of intermediate reference laboratories is conducted by the National Reference Laboratory (NRL) to assess the efficiency of the laboratories in correctly identifying Mycobacterium tuberculosis and determining its drug susceptibility pattern. The proficiency testing results from 2013 to 2017 were analyzed to determine which laboratories were consistent in reporting quality results and which had difficulty in doing so. Methods: A panel of twenty cultures was sent out to each of these laboratories. The laboratories were expected to grow the cultures in their own laboratories, set up drug susceptibility testing by all the methods they were certified for, and report the results within the stipulated time period. The turnaround time for reporting results, the specificity, sensitivity, positive and negative predictive values, and the efficiency of the laboratory in identifying the cultures were analyzed. Results: Most of the laboratories reported their results within the stipulated time period. However, there were considerable delays in reporting results from a few of the laboratories, mainly due to improper functioning of the biosafety level III laboratory. Only 40% of the laboratories had 100% efficiency in solid culture using Lowenstein-Jensen medium. This was expected, as solid culture and drug susceptibility testing are not commonly used for diagnosing drug resistance; rapid molecular methods such as the line probe assay and GeneXpert are used to determine drug resistance, while automated liquid culture systems such as the mycobacterial growth indicator tube are used to monitor the prognosis of patients on treatment. It was observed that 90% of the laboratories achieved 100% efficiency in the liquid culture method. Almost all laboratories achieved 100% efficiency in the line probe assay method, which is the method of choice for detecting drug-resistant tuberculosis. Conclusion: Since the liquid culture and line probe assay technologies are routinely used for the detection of drug-resistant tuberculosis, the laboratories exhibited a higher level of efficiency compared to solid culture and drug susceptibility testing, which are rarely used. The infrastructure of the laboratory should be maintained properly so that samples can be processed safely and results reported on time.
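
The panel-scoring quantities mentioned above (sensitivity, specificity, predictive values, efficiency) follow directly from the counts of correctly and incorrectly classified panel cultures, as in the following minimal sketch with made-up counts.

```python
# Minimal sketch with made-up counts: scoring one laboratory's panel results
# against the NRL reference classification of the twenty panel cultures.
def panel_scores(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)          # positives correctly identified
    specificity = tn / (tn + fp)          # negatives correctly identified
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    efficiency = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, ppv, npv, efficiency

# e.g. a 20-culture panel: 14 resistant (positives), 6 susceptible (negatives)
print(panel_scores(tp=13, fn=1, tn=5, fp=1))
```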

Keywords: annual proficiency testing, drug susceptibility testing, intermediate reference laboratory, national reference laboratory

Procedia PDF Downloads 160
17829 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements

Authors: Denis A. Sokolov, Andrey V. Mazurkevich

Abstract:

In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft, shipbuilding, and rocket engineering. This has resulted in the development of appropriate measuring instruments that are capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing their measurement results to a reference measurement based on a linear or spatial basis. The reference used in such measurements could be a reference base or a reference range finder with the capability to measure angle increments (EDM); the base serves as a set of reference points for this purpose. The concept of the EDM for reproducing the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of our own design is employed. The laser radiation travels to the corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with the recommendations of the International Bureau of Weights and Measures for the indirect measurement of the time of passage of light, according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in vacuum, n is the refractive index of the medium, and F is the repetition frequency of the femtosecond pulses. The achieved type A uncertainty of the measured distance to reflectors 64 m away (N·D/2, where N is an integer), spaced 1 m apart from each other, does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of this study. The type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measurement method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where it is possible to achieve an advantage in terms of accuracy. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the type A uncertainty of distance measurements to the reflectors can be less than 1 micrometer. The results of this research will be used to develop a highly accurate mobile absolute range finder designed for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64-meter laboratory comparator as a reference.
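
The quoted relation D/2 = c/(2nF) can be evaluated directly, as in the following minimal sketch; the repetition rate and refractive index used here are assumptions chosen to reproduce the stated value of roughly 2.5 m.

```python
# Minimal sketch: the pulse-to-pulse distance D/2 = c / (2 n F) from the
# abstract, evaluated for an assumed repetition rate that yields ~2.5 m.
c = 299_792_458.0        # speed of light in vacuum, m/s
n = 1.00027              # assumed refractive index of air (illustrative)
F = 60e6                 # assumed femtosecond-laser repetition rate, Hz

half_D = c / (2 * n * F)
print(f"D/2 = {half_D:.4f} m")          # approximately 2.5 m
```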

Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement

Procedia PDF Downloads 15
17828 Percentile Reference Values of Vertical Jumping Performances and Anthropometric Characteristics in Athletic Tunisian Children and Adolescents

Authors: Chirine Aouichaoui, Mohamed Tounsi, Ines Mrizak, Zouhair Tabka, Yassine Trabelsi

Abstract:

The aim of this study was to provide percentile values for the vertical jumping performances and anthropometric characteristics of athletic Tunisian children. One thousand and fifty-five athletic Tunisian children and adolescents (643 boys and 412 girls) aged 7-18 years were randomly selected to participate in our study. They were asked to perform squat jumps and countermovement jumps. For each measurement, a least-squares regression model with high-order polynomials was fitted to predict the mean and standard deviation of the vertical jumping parameters and anthropometric variables. Smoothed percentile curves and percentile values for the 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentiles are presented for boys and girls. In conclusion, percentile values of vertical jumping performances and anthropometric characteristics are provided. The new Tunisian reference charts can be used as a screening tool to detect growth disorders and to estimate the proportion of adolescents with high or low muscular strength levels. This study may also help in verifying the effectiveness of specific training programs and in detecting highly talented athletes.
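
The percentile-curve construction can be sketched as follows on simulated data; the polynomial degrees, the toy growth model, and the normality assumption at each age are illustrative choices, not the study's exact fitting procedure.

```python
# Minimal sketch (simulated data, not the study's dataset): fitting polynomial
# least-squares curves for the mean and SD of jump height against age and
# deriving smoothed percentile curves from them, assuming normality at each age.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
age = rng.uniform(7, 18, 600)
jump = 10 + 1.8 * age + 0.03 * age**2 + rng.normal(0, 3 + 0.2 * age)   # cm, toy model

mean_coef = np.polyfit(age, jump, deg=3)                # high-order polynomial for the mean
resid = jump - np.polyval(mean_coef, age)
sd_coef = np.polyfit(age, np.abs(resid) * np.sqrt(np.pi / 2), deg=3)   # rough SD-vs-age model

ages = np.arange(7, 19)
for p in (5, 10, 25, 50, 75, 90, 95):
    curve = np.polyval(mean_coef, ages) + norm.ppf(p / 100) * np.polyval(sd_coef, ages)
    print(f"P{p:02d}:", np.round(curve, 1))
```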

Keywords: percentile values, jump height, leg muscle power, athletes, anthropometry

Procedia PDF Downloads 392
17827 FPGA Implementation of Adaptive Clock Recovery for TDMoIP Systems

Authors: Semih Demir, Anil Celebi

Abstract:

Circuit-switched networks, widely used until the end of the 20th century, have been transformed into packet-switched networks. Time Division Multiplexing over Internet Protocol (TDMoIP) is a system that enables Time Division Multiplexing (TDM) traffic to be carried over packet-switched networks (PSN). In TDMoIP systems, the devices that send TDM data to the PSN and those that receive it from the network must operate with the same clock frequency. In this study, we aimed to implement the clock synchronization process in Field Programmable Gate Array (FPGA) chips using the time information attached to the packets received from the PSN. The designed hardware is verified using datasets obtained for different carrier types and by comparing the results with the software model. Field tests are also performed using a real-time TDMoIP system.

Keywords: clock recovery on TDMoIP, FPGA, MATLAB reference model, clock synchronization

Procedia PDF Downloads 240
17826 Establishing Reference Intervals for Routine Coagulation Tests

Authors: Santina Sahibon, Sivasooriar Sivaneson, Martin Giddy, Nelson Nheu, Siti Sazeelah, Choo Kok Ming, Thuhairah Abdul Rahman, Fatmawati Binti Kamal

Abstract:

Introduction: Establishing population-based reference intervals (RI) is essential when evaluating laboratory test results and for method verification. Our laboratory initiated an exercise to establish RIs for the routine coagulation profile as part of the method verification procedure and to determine any differences in RI between three analyzers planned to be used in the laboratory. Methodology: 145 blood samples were collected and analysed for activated partial thromboplastin time (aPTT), prothrombin time (PT), international normalized ratio (INR), and fibrinogen using three coagulation analysers: the CA104, CA660, and CS-2500 (Sysmex, USA). The RI was established at the 2.5th and 97.5th percentiles. Results: The RIs for aPTT on the CA104, CA660, and CS-2500 were 20.5-30.2 sec, 21.5-29.2 sec, and 22.7-30.3 sec, respectively. The RIs for PT were 7.5-10.3 sec, 9.2-11.1 sec, and 9.8-11.9 sec for the CA104, CA660, and CS-2500, respectively. INR had RIs of 0.87-1.16, 0.89-1.10, and 0.90-1.11 on the CA104, CA660, and CS-2500, respectively. The fibrinogen RI was 2.04-4.62 g/L and 2.05-4.76 g/L on the CA660 and CS-2500, respectively. Conclusion: The RIs were similar across the analytical platforms for aPTT, INR, and fibrinogen. However, the CA104 showed a lower RI for PT compared to the other two analysers. This highlights the potential variability in results between instruments that needs to be addressed when verifying RIs.
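
The non-parametric reference interval computation at the 2.5th and 97.5th percentiles can be sketched as follows; the sample values are fabricated for illustration.

```python
# Minimal sketch: a non-parametric reference interval (2.5th and 97.5th
# percentiles) computed per analyser from a set of donor results.
# The values below are made up for illustration only.
import numpy as np

rng = np.random.default_rng(42)
aptt_ca104 = rng.normal(25.4, 2.4, 145)     # seconds, illustrative only

def reference_interval(values):
    lower, upper = np.percentile(values, [2.5, 97.5])
    return round(lower, 1), round(upper, 1)

print("aPTT RI (CA104):", reference_interval(aptt_ca104))
```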

Keywords: coagulation, reference interval, APTT, PT, INR, fibrinogen

Procedia PDF Downloads 153
17825 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Ferdinando Montemari, Antonio Vitale, Nicola Genito, Giovanni Cuciniello

Abstract:

The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such vehicles. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases such as conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model, derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions. The latter are introduced to best fit the actual rotor behavior and to balance the differences existing between the helicopter and the tilt-rotor in flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA tilt-rotor, generated using a high-fidelity simulation model implemented in the FlightLab environment. The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.

Keywords: flapping dynamics, flight dynamics, system identification, tilt-rotor modeling and simulation

Procedia PDF Downloads 168
17824 An Empirical Study of Factors that Impact Government E-Services Acceptance by Citizens: Case Study from UAE

Authors: Emad Bataineh, Sara Al-Mutawa

Abstract:

The primary focus of this study is to investigate and identify the perceptions of potential end users regarding the factors that impact e-services acceptance. The Technology Acceptance Model (TAM) was adopted in this study because it can be extended when new technologies are introduced. This research validates the developed TAM model and evaluates the variance of the outcome variable (acceptance of e-services). Five factors were adopted as determinants of acceptance of e-services: ease of use, security, trust, web skills, and language. The study was undertaken in the General Directorate of Residency and Foreigners Affairs (GDRFA) in the UAE. A quantitative survey methodology was adopted, in which 466 customers who use the GDRFA e-services were surveyed. The overall findings revealed that security, language, web skills, and support significantly affected ease of use and perceived usefulness, whereas trust did not affect ease of use. Further, ease of use significantly affected intention to use and perceived usefulness, while intention to use was in turn influenced by perceived usefulness. This study offers an understanding of people's adoption of e-government services with the help of established theories such as TAM and of the various factors that influence e-government adoption, with reference to the UAE.

Keywords: e-government portal, e-service, usability, TAM model

Procedia PDF Downloads 390
17823 Urban Big Data: An Experimental Approach to Building-Value Estimation Using Web-Based Data

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Current real-estate value estimation, which is difficult for laymen, is usually performed by specialists. This paper presents an automated estimation process based on big data and machine-learning technology that calculates the influence of building conditions on real-estate prices. The present study analyzed actual building sales data for Nonhyeon-dong, Gangnam-gu, Seoul, Korea, measuring the major influencing factors among the various building conditions. Following that analysis, a prediction model was established and applied using RapidMiner Studio, a graphical user interface (GUI)-based tool for deriving machine-learning prototypes. The prediction model is formulated by reference to previous examples; when new examples are applied, it analyses them and predicts accordingly. The analysis process discerns the crucial factors affecting price increases by calculating weighted values. The model was verified, and its accuracy determined, by comparing its predicted values with actual price increases.
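
The weighted-value idea can be illustrated with a simple regression whose coefficients rank the influencing factors; the feature names, data, and model choice below are assumptions, not the study's RapidMiner workflow.

```python
# Minimal sketch (fabricated feature names and data, not the study's model):
# a regression on building-condition features whose coefficients play the role
# of the "weighted values" used to rank the factors driving price changes.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.integers(1, 40, n),        # building age (years)
    rng.uniform(30, 200, n),       # floor area (m^2)
    rng.integers(1, 25, n),        # floor level
])
price = 9000 - 40 * X[:, 0] + 55 * X[:, 1] + 20 * X[:, 2] + rng.normal(0, 300, n)

model = LinearRegression().fit(X, price)
for name, w in zip(["age", "area", "floor"], model.coef_):
    print(f"{name:6s} weight: {w:8.1f}")
print("predicted price for a new example:", round(model.predict([[15, 84.0, 10]])[0]))
```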

Keywords: apartment complex, big data, life-cycle building value analysis, machine learning

Procedia PDF Downloads 350
17822 Inclusion in Rabbinic and Protestant Translations of the Hebrew Book of Proverbs (1865): History of Translations and Cultural Inclusion Terms of Reference

Authors: Mh. D Tammam Ayoubi

Abstract:

The Old Testament has been translated into many languages, including Arabic, and there have been successive translations of it since Islamic antiquity. The Rabbinic translation, which rendered the Hebrew text into Arabic without a linguistic intermediary, appeared later. It was followed by several Orthodox and Jesuit attempts, including the Protestant translation. Those two translations were chosen for the study of the Book of Proverbs, which is classified as one of the books of Wisdom; this distances it from being either symbolic or historical and makes the translation subject to the translator's ideology, whether through the incorporated cultural element, be it Jewish, Aramaic, or Islamic (Mu'tazila), in the first translation, or through the choice of the equivalent signs of origin and the neutralization of the Rabbinic, Arabic, and Greek elements in the second translation. The various Protestant translations by different authors have contributed to a multiplicity of terms of reference, mostly Christian, in contrast with the single reference of one author in the Rabbinic translation, which carries multiple conflicting cultural facades. This has led to a change in the original through the inclusion of these various verbal or interpretative elements in the Book of Proverbs, which will be examined in the verses through a comparative study with the original Hebrew text and the cultural terms or references.

Keywords: rabbinic and protestant translations, book of proverbs, hebrew, protestant translation

Procedia PDF Downloads 40
17821 Social Comparison at the Workplace: Evidence from a Field Experiment in Kolkata, India

Authors: Pooja Balasubramanian, Ghida Karbala

Abstract:

Unfair treatment at the workplace encourages workers to adjust their behavior in order to restore fairness and align monetary returns with costs. This adjustment of behavior, however, may differ depending on the reference group considered when evaluating equity. Two main questions arise: How do workers respond to unfair treatment at the workplace? And how does this response change depending on the identity of the reference group? To answer these research questions, this paper utilizes data from a randomized field experiment conducted in Kolkata, India, where student assistants were hired to help with a data entry task. The recruited workers worked in teams of two and were offered a fixed wage per hour. Workers were randomly assigned to one of the following treatment groups: (1) a control group not subjected to wage cuts; (2) a general wage cut; (3) a unilateral wage cut relative to colleagues with a similar social identity; (4) a unilateral wage cut relative to colleagues with a different social identity, where social identity is defined in terms of gender. Results show a significant decrease in the quantity and quality of work following a general wage cut. A more severe drop in productivity was exhibited by men in the case of a unilateral wage cut, while women experiencing unilateral wage cuts did not exhibit similar behavior, regardless of the gender of the team member. On the contrary, women matched with a male colleague and experiencing a unilateral wage cut showed a slight increase in productivity, a result that contributes to the discussion of the paradox of the 'contented female worker'. These findings highlight the need for a better understanding of the social comparison processes prevailing at the workplace, given the major role they play in determining the level of productivity supplied.

Keywords: effort supply, fairness, reference groups, social comparison

Procedia PDF Downloads 120
17820 Using Computational Fluid Dynamics to Model and Design a Preventative Application for Strong Wind

Authors: Ming-Hwi Yao, Su-Szu Yang

Abstract:

Typhoons are one of the major types of disasters that affect Taiwan each year and that cause severe damage to agriculture. Indeed, the damage exacted during a typical typhoon season can be up to $1 billion, and is responsible for nearly 75% of yearly agricultural losses. However, there is no consensus on how to reduce the damage caused by the strong winds and heavy precipitation engendered by typhoons. One suggestion is the use of windbreak nets, which are a low-cost and easy-to-use disaster mitigation strategy for crop production. In the present study, we conducted an evaluation to determine the optimal conditions of a windbreak net by using a computational fluid dynamics (CFD) model. This model may be used as a reference for crop protection. The results showed that CFD simulation validated windbreak nets of different mesh sizes and heights in the experimental area; thus, CFD is an efficient tool for evaluating the effectiveness of windbreak nets. Specifically, the effective wind protection length and height were found to be 6 and 1.3 times the length and height of the windbreak net, respectively. During a real typhoon, maximum wind gusts of 18 m s-1 can be reduced to 4 m s-1 by using a windbreak net that has a 70% blocking rate. In short, windbreak nets are significantly effective in protecting typhoon-affected areas.

Keywords: computational fluid dynamics, disaster, typhoon, windbreak net

Procedia PDF Downloads 161
17819 Impact of Artificial Intelligence Technologies on Information-Seeking Behaviors and the Need for a New Information Seeking Model

Authors: Mohammed Nasser Al-Suqri

Abstract:

The existing information-seeking models were proposed more than two decades ago, prior to the evolution of the digital information era and Artificial Intelligence (AI) technologies. The lack of current information-seeking models within Library and Information Studies (LIS) has resulted in fewer advancements in teaching students about information-seeking behaviors and in the design of library tools and services. In order to address these concerns, this study aims to propose a state-of-the-art model focusing on the information-seeking behavior of library users in the Sultanate of Oman. The study aims to develop, design, and contextualize a real-time, user-centric information-seeking model capable of enhancing information needs and information usage, while incorporating critical insights for digital library practices. Another aim is to establish a far-sighted, state-of-the-art frame of reference covering Artificial Intelligence (AI) while synthesizing digital resources and information for optimizing information-seeking behavior. The proposed study is empirically designed, based on a mixed-method process flow, technical surveys, in-depth interviews, focus group evaluations, and stakeholder investigations. The study data pool consists of users and specialist LIS staff at 4 public libraries and 26 academic libraries in Oman. The designed research model is expected to support LIS by providing multi-dimensional insights with AI integration, redefining the information-seeking process, and developing a technology-rich model.

Keywords: artificial intelligence, information seeking, information behavior, information seeking models, libraries, Sultanate of Oman

Procedia PDF Downloads 79
17818 Modelling and Control of Milk Fermentation Process in Biochemical Reactor

Authors: Jožef Ritonja

Abstract:

The biochemical industry is one of the most important modern industries, and biochemical reactors are its crucial devices. The essential bioprocess carried out in bioreactors is the fermentation process. A thorough insight into the fermentation process and knowledge of how to control it are essential for the effective use of bioreactors to produce high-quality products in sufficient quantities. The development of the control system starts with the determination of a mathematical model that describes the steady-state and dynamic properties of the controlled plant satisfactorily and is suitable for the development of the control system. The paper analyses the fermentation process in bioreactors thoroughly, using existing mathematical models. Most existing mathematical models do not allow the design of a control system for controlling the fermentation process in batch bioreactors. For this reason, a mathematical model was developed and is presented that allows the development of a control system for batch bioreactors. Based on the developed mathematical model, a control system was designed to ensure an optimal response of the biochemical quantities in the fermentation process. Due to the time-varying and nonlinear nature of the controlled plant, a conventional control system with a proportional-integral-derivative controller with constant parameters does not provide the desired transient response. An improved adaptive control system was therefore proposed to improve the dynamics of the fermentation. The use of adaptive control is suggested because the parameter variations of the fermentation process are very slow. The developed control system was tested on the production of dairy products in a laboratory bioreactor. The carbon dioxide concentration was chosen as the controlled variable, as it correlates well with the other quantities significant for the quality of the fermentation process, and its level gives important information about the fermentation process. The obtained results showed that the designed control system provides a minimal error between the reference and actual values of the carbon dioxide concentration during the transient response and in steady state. The recommended control system makes reference signal tracking much more efficient than the currently used conventional control systems, which are based on linear control theory. The proposed control system represents a very effective solution for the improvement of the milk fermentation process.
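
A minimal sketch of the adaptive idea is given below, using model-reference adaptive control with the MIT rule on an assumed first-order stand-in for the carbon dioxide dynamics; the plant model, gains, and adaptation rate are illustrative and not the paper's bioreactor model.

```python
# Minimal sketch (assumed first-order plant, not the paper's bioreactor model):
# model-reference adaptive control (MIT rule) tracking a CO2-concentration
# reference despite a slowly drifting plant gain.
import numpy as np

dt, T = 0.1, 600.0
t = np.arange(0.0, T, dt)
a, am, bm = 0.05, 0.1, 0.1            # plant pole, reference-model pole and gain
gamma = 0.002                          # adaptation rate (MIT rule)

y = ym = 0.0                           # plant and reference-model outputs
theta = 0.5                            # adaptive feedforward gain
ref = 1.0                              # desired CO2 concentration (normalized)

for tk in t:
    b = 0.05 * (1.0 + 0.5 * np.sin(0.005 * tk))   # slowly varying plant gain
    u = theta * ref                                # control law
    y += dt * (-a * y + b * u)                     # plant:  dy/dt  = -a*y  + b*u
    ym += dt * (-am * ym + bm * ref)               # model:  dym/dt = -am*ym + bm*ref
    e = y - ym
    theta += dt * (-gamma * e * ref)               # MIT rule: dtheta/dt = -gamma*e*r

print(f"final tracking error |y - ym| = {abs(y - ym):.4f}")
```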

Keywords: biochemical reactor, fermentation process, modelling, adaptive control

Procedia PDF Downloads 103
17817 Model Averaging for Poisson Regression

Authors: Zhou Jianhong

Abstract:

Model averaging is a desirable approach for dealing with model uncertainty, but it has rarely been explored for Poisson regression. In this paper, we propose a model averaging procedure based on an unbiased estimator of the expected Kullback-Leibler distance for Poisson regression. A simulation study shows that the proposed model average estimator outperforms some other commonly used model selection and model average estimators in some situations. The proposed method is further applied to a real data example, and its advantage is demonstrated again.
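
The averaging step can be sketched as follows on simulated data; note that the paper weights candidate models using an unbiased estimator of the expected Kullback-Leibler distance, whereas this sketch uses simple smoothed-AIC weights only to illustrate how the weighted prediction is formed.

```python
# Minimal sketch (simulated data): averaging several candidate Poisson GLMs.
# The paper's weights come from an unbiased Kullback-Leibler-distance estimator;
# smoothed-AIC weights stand in here just to show the averaging mechanics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x1, x2, x3 = rng.normal(size=(3, n))
mu = np.exp(0.3 + 0.5 * x1 - 0.4 * x2)            # x3 is irrelevant
y = rng.poisson(mu)

candidates = [np.column_stack(c) for c in ([x1], [x1, x2], [x1, x2, x3])]
fits = [sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
        for X in candidates]

aic = np.array([f.aic for f in fits])
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()                                       # model weights
avg_pred = sum(wi * f.fittedvalues for wi, f in zip(w, fits))
print("weights:", np.round(w, 3))
print("first five averaged fitted means:", np.round(avg_pred[:5], 2))
```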

Keywords: model averaging, Poisson regression, Kullback-Leibler distance, statistics

Procedia PDF Downloads 483
17816 Problems of Water Resources: Vulnerability to Climate Change, Modeling with Software WEAP 21 (Upper and Middle Cheliff)

Authors: Mehaiguene Madjid, Meddi Mohamed

Abstract:

The results of applying the WEAP 21 model ('Water Evaluation and Planning System') to the Upper and Middle Cheliff are presented in cartographic and graphic form, considering two scenarios: a reference scenario (1961-1990) and climate change scenarios (low and high) for 2020 and 2050. These scenarios are presented together in the results and compared in order to assess the impact on aquatic systems and water resources. Under the low scenario for 2050, runoff/infiltration will decrease from 81.4 to 3.7 Hm3 between 2010 and 2050, while under the high scenario for 2050 the reduction will be from 87.2 to 78.9 Hm3 over the same period. Comparing the two scenarios shows that the water supplied would increase from 216.7 Hm3 to 596 Hm3 by 2050 if climate change is not taken into account, whereas, if climate change is taken into account, it will decrease step by step from 2010 to 2026: under the high climate change scenario for 2050, the water supplied goes from 346 Hm3 to 361 Hm3, while that of the reference scenario (1961-1990) will increase to 379.7 Hm3 in 2050. This is caused by increased demand (growing population, irrigated area, etc.). The water balance of the managed basin is positive for the different horizons and situations. If climate change is not taken into account, the outflow will be 5881.4 Hm3; this surplus in the basin could be used, for example, as part of a water transfer.

Keywords: balance water, management basin, climate change scenario, Upper and Middle Cheliff

Procedia PDF Downloads 284
17815 Designing Price Stability Model of Red Cayenne Pepper Price in Wonogiri District, Central Java, Using ARCH/GARCH Method

Authors: Fauzia Dianawati, Riska W. Purnomo

Abstract:

The food and agricultural sector is the biggest contributor to inflation in Indonesia. In Wonogiri district in particular, red cayenne pepper was the biggest contributor to inflation in 2016. National statistics show that over the last five years red cayenne pepper has had the highest average level of price fluctuation among all commodities. Several factors, such as the supply chain, price disparity, production quantity, crop failure, and oil prices, are possible causes of the high volatility of the red cayenne pepper price. Therefore, this research tries to find the key factor causing fluctuations in the red cayenne pepper price by using the ARCH/GARCH method, which can accommodate the presence of heteroscedasticity in time series data. The analysis finds that the second level of the supply chain contributes most to inflation, with a coefficient of 3.35 in the fluctuation forecasting model of the red cayenne pepper price. This model can serve as a reference for the government in determining appropriate policies for maintaining the price stability of red cayenne pepper.
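
A GARCH(1,1) fit of the kind used in this approach can be sketched as follows, assuming the Python arch package and a simulated return series in place of the actual price data.

```python
# Minimal sketch (simulated prices, assuming the "arch" package is installed):
# fitting a GARCH(1,1) model to daily price returns to quantify volatility
# clustering, as in the ARCH/GARCH approach described above.
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(0)
# toy return series exhibiting volatility clustering
n, vol, rets = 500, 0.02, []
for _ in range(n):
    vol = np.sqrt(0.0001 + 0.1 * (rets[-1] ** 2 if rets else 0.0) + 0.85 * vol ** 2)
    rets.append(vol * rng.standard_normal())
returns = pd.Series(rets) * 100           # per cent returns

model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.params)                      # mu, omega, alpha[1], beta[1]
forecast = result.forecast(horizon=5)
print(forecast.variance.iloc[-1])         # 5-day-ahead conditional variance
```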

Keywords: ARCH/GARCH, forecasting, red cayenne pepper, volatility, supply chain

Procedia PDF Downloads 158
17814 Blind Data Hiding Technique Using Interpolation of Subsampled Images

Authors: Singara Singh Kasana, Pankaj Garg

Abstract:

In this paper, a blind data hiding technique based on interpolation of subsampled versions of a cover image is proposed. The subsampled image is taken as a reference image, and an interpolated image is generated from this reference image. The difference between the original cover image and the interpolated image is then used to embed the secret data. Comparisons with existing interpolation-based techniques show that the proposed technique provides higher embedding capacity and better visual quality of the marked images. Moreover, the performance of the proposed technique is more stable across different images.
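
A simplified variant of the interpolation-based embedding idea is sketched below; it is not the authors' exact scheme, but it shows how a receiver holding only the stego image can re-run the interpolation from the intact reference pixels and extract the payload blindly.

```python
# Minimal sketch (a simplified variant, not the authors' exact scheme): the
# subsampled image is kept as a reference, the remaining pixels are estimated
# by interpolation, and secret bits are embedded as small offsets on top of the
# interpolated values, so extraction needs only the stego image.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.integers(0, 255, (4, 4)).astype(np.int32)        # subsampled reference image

def interpolate(reference):
    """Nearest-neighbour upscaling used as the interpolation step."""
    return np.repeat(np.repeat(reference, 2, axis=0), 2, axis=1)

interp = interpolate(ref)
mask = np.ones_like(interp, dtype=bool)
mask[::2, ::2] = False                                     # reference pixels stay intact

bits = rng.integers(0, 2, interp.shape)
stego = interp + bits * mask                               # embed 1 bit per interpolated pixel

# blind extraction: only the stego image is needed
interp_rx = interpolate(stego[::2, ::2])
extracted = (stego - interp_rx)[mask]
print("payload recovered:", np.array_equal(extracted, bits[mask]))
```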

Keywords: interpolation, image subsampling, PSNR, SIM

Procedia PDF Downloads 540
17813 Decay Analysis of 118Xe* Nucleus Formed in 28Si Induced Reaction

Authors: Manoj K. Sharma, Neha Grover

Abstract:

The dynamical cluster-decay model (DCM) is applied to study the decay mechanism of the 118Xe* nucleus with reference to recent data on the 28Si + 90Zr → 118Xe* reaction, as an extension of our previous work on the dynamics of the 112Xe* nucleus. It is relevant to mention here that the DCM is based on a collective clusterization approach, in which the emission probabilities of different decay paths, such as evaporation residues (ER), intermediate mass fragments (IMF), and fission, are worked out on a parallel footing. Calculations have been done over a wide range of center-of-mass energies, Ec.m. = 65-92 MeV. The evaporation residue (ER) cross-sections of the 118Xe* compound nucleus are fitted with reference to the available data, using spherical and quadrupole (β2) deformed choices of the decaying fragments within the optimum orientations approach. Our calculated cross-sections are in decent agreement with the experimental data and hence provide an opportunity to analyze the exclusive role of deformations in the fragmentation behavior of the 118Xe* nucleus. The possible contribution of IMF fragments is worked out, and an extensive effort is made to analyze the role of the excitation energy, angular momentum, diffuseness parameter, and level density parameter, in order to better understand the decay patterns governing the dynamics of the 28Si + 90Zr → 118Xe* reaction.

Keywords: cross-sections, deformations, fragmentation, angular momentum

Procedia PDF Downloads 278
17812 Implementation and Validation of a Damage-Friction Constitutive Model for Concrete

Authors: L. Madouni, M. Ould Ouali, N. E. Hannachi

Abstract:

Two constitutive models for concrete are available in ABAQUS/Explicit, the Brittle Cracking Model and the Concrete Damaged Plasticity Model, and their suitability and limitations are well known. The aim of the present paper is to implement a damage-friction concrete constitutive model and to evaluate its performance by comparing the predicted response with experimental data. The constitutive formulation of this material model is reviewed. In order to obtain consistent results, parameter identification and calibration for the model have been performed. Several numerical simulations are presented, whose results validate the capability of the proposed model to reproduce the typical nonlinear behavior of concrete structures under different monotonic and cyclic load conditions. The results of the evaluation will be used for recommendations concerning the application and further improvement of the investigated model.

Keywords: Abaqus, concrete, constitutive model, numerical simulation

Procedia PDF Downloads 329
17811 ROSgeoregistration: Aerial Multi-Spectral Image Simulator for the Robot Operating System

Authors: Andrew R. Willis, Kevin Brink, Kathleen Dipple

Abstract:

This article describes a software package called ROSgeoregistration intended for use with the Robot Operating System (ROS) and the Gazebo 3D simulation environment. ROSgeoregistration provides tools for the simulation, test, and deployment of aerial georegistration algorithms and is available at github.com/uncc-visionlab/rosgeoregistration. A model creation package is provided which downloads multi-spectral images from the Google Earth Engine database and, if necessary, incorporates these images into a single, possibly very large, reference image. Additionally, a Gazebo plugin is provided which uses the real-time sensor pose and an image formation model to generate simulated imagery from the specified reference image, along with related plugins for UAV-relevant data. The novelty of this work is threefold: (1) this is the first system to link the massive multi-spectral imaging database of Google's Earth Engine to the Gazebo simulator, (2) this is the first example of a system that can simulate geospatially and radiometrically accurate imagery from multiple sensor views of the same terrain region, and (3) integration with other UAS tools creates a new holistic UAS simulation environment to support UAS system and subsystem development where real-world testing would generally be prohibitive. Sensed imagery and ground-truth registration information are published to client applications, which can receive imagery synchronously with telemetry from other payload sensors, e.g., IMU, GPS/GNSS, barometer, and wind speed sensor data. To highlight functionality, we demonstrate ROSgeoregistration for simulating Electro-Optical (EO) and Synthetic Aperture Radar (SAR) image sensors, together with an example use case for developing and evaluating image-based UAS position feedback, i.e., pose for image-based Guidance, Navigation, and Control (GNC) applications.

Keywords: EO-to-EO, EO-to-SAR, flight simulation, georegistration, image generation, robot operating system, vision-based navigation

Procedia PDF Downloads 76
17810 Effect of Geometry on the Aerodynamic Performance of Darrieus H Type Vertical Axis Wind Turbine

Authors: Belkheir Noura, Rabah Kerfah, Boumehani Abdellah

Abstract:

The influence of solidity variations on the aerodynamic performance of an H-type vertical axis wind turbine is studied in this paper. The wind turbine model used is a three-blade turbine with the symmetrical NACA0021 airfoil and a chord length of 0.265 m. Numerical investigations were carried out for different solidities by changing the radius and the blade number. A two-dimensional model of the wind turbine is employed. The approach uses the Reynolds-averaged Navier-Stokes equations, completed by the k-ω SST turbulence model, together with the moving mesh capability of a computational fluid dynamics (CFD) solver. For each value of the solidity, the aerodynamic performance and the characteristics of the flow field are studied at several values of the tip speed ratio, from λ = 0.5 to λ = 3, with an incoming wind speed of 8 m/s. The results show that increasing the number of blades reduces the maximum value of the power coefficient of the wind turbine, and that a VAWT with lower solidity obtains its maximum Cp at a higher tip speed ratio. The effects of changing the radius and the blade number on the aerodynamic performance are almost the same. Finally, for validation, experimental data from the literature and the computational results were compared. In conclusion, studying the influence of solidity on the performance of the wind turbine provides a reference for the design of H-type vertical axis wind turbines.
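
The two parameters varied in the study, solidity and tip speed ratio, can be computed as in the following minimal sketch; the radii and rotational speed are assumed values, and the solidity definition (Nc/2R versus Nc/R) varies between authors.

```python
# Minimal sketch: rotor solidity and tip speed ratio for an assumed H-type rotor.
# Solidity is taken here as sigma = N*c / (2*R); some authors use N*c / R.
def solidity(n_blades, chord, radius):
    return n_blades * chord / (2.0 * radius)

def tip_speed_ratio(omega, radius, wind_speed):
    return omega * radius / wind_speed

c = 0.265                                        # blade chord from the abstract, m
for R in (0.5, 1.0, 1.5):                        # assumed radii, m
    print(f"R = {R} m  ->  solidity = {solidity(3, c, R):.3f}")
print("lambda at omega = 16 rad/s, R = 1 m, V = 8 m/s:",
      tip_speed_ratio(16.0, 1.0, 8.0))           # lambda = 2.0, inside the studied range
```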

Keywords: wind energy, darrieus h type vertical axis wind turbine, computational fluid dynamic, solidity

Procedia PDF Downloads 60
17809 Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration

Authors: Payam Haghparast, Mikhail V. Sorin, Hakim Nesreddine

Abstract:

The complex oblique shock phenomenon can be simply assumed as a normal shock at the constant-area section to simulate the sharp pressure increase and velocity decrease in 1-D thermodynamic models. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models, and most researchers consider an arbitrary location without justifying it. Our study compares the effect of the normal shock location on ejector dimensions in 1-D models. To this aim, two different ejector experimental test benches, a constant-area-mixing (CAM) ejector and a constant-pressure-mixing (CPM) ejector, are considered, with different known geometries, operating conditions, and working fluids (R245fa, R141b). In the first step, in order to evaluate the real values of the efficiencies in the different ejector parts and the critical back pressure, a CFD model was built and validated by experimental data for the two types of ejectors. These reference data are then used as input to the 1-D model to calculate the lengths and diameters of the ejectors. Afterwards, the design geometry calculated by the 1-D model is compared directly with the corresponding experimental geometry. It was found that the ejector dimensions obtained by the 1-D model, for both the CAM and CPM ejectors, are in good agreement with the experimental ejector data. Furthermore, it is shown that the normal shock location affects only the constant-area length, and it is demonstrated that the inlet normal shock assumption results in a more accurate length. Taking previous 1-D models into account, the results suggest placing the assumed normal shock at the inlet of the constant-area duct when designing supersonic ejectors.
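
The jump conditions applied at the assumed shock location follow the standard ideal-gas normal-shock relations, as in the minimal sketch below; a real ejector model would use refrigerant property data (R245fa, R141b) rather than a constant specific-heat ratio.

```python
# Minimal sketch: ideal-gas normal-shock jump relations that a 1-D ejector model
# applies at the assumed shock location (here placed at the inlet of the
# constant-area duct, as the abstract recommends). gamma = 1.4 is illustrative.
import math

def normal_shock(M1, gamma=1.4):
    """Return downstream Mach number and static pressure ratio p2/p1."""
    M2 = math.sqrt((1 + 0.5 * (gamma - 1) * M1**2) /
                   (gamma * M1**2 - 0.5 * (gamma - 1)))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)
    return M2, p_ratio

M2, p21 = normal_shock(1.8)
print(f"M2 = {M2:.3f}, p2/p1 = {p21:.3f}")   # supersonic -> subsonic, sharp pressure rise
```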

Keywords: 1D model, constant area-mixing, constant pressure-mixing, normal shock location, ejector dimensions

Procedia PDF Downloads 168
17808 Model Driven Architecture Methodologies: A Review

Authors: Arslan Murtaza

Abstract:

Model Driven Architecture (MDA) is a technique presented by the OMG (Object Management Group) for software development in which different models are proposed and then converted into code. The main idea is to specify the task using a Platform Independent Model (PIM), transform it into a Platform Specific Model (PSM), and then convert it into code. This review paper describes some challenges and issues faced in MDA, the types and transformations of models (e.g., CIM, PIM, and PSM), and the evaluation of MDA-based methodologies.

Keywords: OMG, model driven architecture (MDA), computation independent model (CIM), platform independent model (PIM), platform specific model (PSM), MDA-based methodologies

Procedia PDF Downloads 411