Search results for: operational error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3177

2487 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor

Authors: Feng Tao, Han Ye, Shaoyi Liao

Abstract:

City buildings may collapse after an earthquake, and people will be buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, considering the complicated environment, irregular aftershocks and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. RSSI-based target localization, with its low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the impact of the environment on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the raw RSSI values in order to establish, for each scene, the signal propagation model with the minimum test error. The target location is then calculated, through an improved centroid algorithm, from the distances estimated with the signal propagation model. Results show that RSSI-based localization is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (the average of mean, median and Gaussian filtering) performs better than any single filtering algorithm, and by using the signal propagation model, the minimum error of the distance between the known nodes and the target node across the five scenes is about 3.06 m.
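
As an illustration of the pipeline sketched in the abstract, the following minimal Python example (not the authors' code; the path-loss coefficients, RSSI readings and anchor positions are hypothetical, and the Gaussian-weighted mean is one possible reading of "Gaussian filtering") fits a log-distance propagation model to filtered RSSI values, converts RSSI to distance, and locates the target with a weighted centroid over the known nodes.

```python
import numpy as np

# Minimal sketch: fit RSSI(d) = A - 10*n*log10(d), then locate the target with
# a weighted centroid over the anchor (known) nodes.

def mixed_filter(rssi_samples, sigma=2.0):
    """Average of mean, median and Gaussian-weighted mean of raw RSSI samples."""
    x = np.asarray(rssi_samples, dtype=float)
    w = np.exp(-0.5 * ((x - x.mean()) / sigma) ** 2)
    gaussian_mean = np.sum(w * x) / np.sum(w)
    return (x.mean() + np.median(x) + gaussian_mean) / 3.0

def fit_path_loss(distances, rssi):
    """Least-squares fit of A and n in RSSI = A - 10*n*log10(d)."""
    X = np.column_stack([np.ones_like(distances), -10.0 * np.log10(distances)])
    (A, n), *_ = np.linalg.lstsq(X, rssi, rcond=None)
    return A, n

def estimate_distance(rssi, A, n):
    return 10 ** ((A - rssi) / (10.0 * n))

def weighted_centroid(anchors, dists):
    """Anchors with smaller estimated distance to the target get larger weight."""
    w = 1.0 / np.asarray(dists)
    return (w[:, None] * anchors).sum(axis=0) / w.sum()

# Illustrative calibration data (hypothetical values)
d_cal = np.array([1.0, 2.0, 4.0, 8.0])
rssi_cal = np.array([-45.0, -52.0, -58.0, -65.0])
A, n = fit_path_loss(d_cal, rssi_cal)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
filtered = [mixed_filter([-60, -62, -58, -61]) for _ in anchors]  # one reading set per anchor
dists = [estimate_distance(r, A, n) for r in filtered]
print("Estimated target position:", weighted_centroid(anchors, dists))
```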

Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI

Procedia PDF Downloads 289
2486 Classification of Barley Varieties by Artificial Neural Networks

Authors: Alper Taner, Yesim Benal Oztekin, Huseyin Duran

Abstract:

In this study, an Artificial Neural Network (ANN) was developed in order to classify barley varieties. For this purpose, physical properties of barley varieties were determined and ANN techniques were used. The physical properties of 8 barley varieties grown in Turkey, namely thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of grain, were determined, and it was found that these properties were statistically significant with respect to varieties. Three ANN models, N-1, N-2 and N-3, were constructed, and their performances were compared. The best-fit model was found to be N-1, whose structure was designed with 11 inputs, 2 hidden layers and 1 output layer. Thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of grain were used as input parameters, and variety as the output parameter. R², Root Mean Square Error and Mean Error for the N-1 model were found to be 99.99%, 0.00074 and 0.009%, respectively. All results obtained by the N-1 model were quite consistent with the real data. Based on this model, it would be possible to construct automation systems for classification and cleaning in flour mills.
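
A minimal sketch of this kind of classifier is shown below (scikit-learn rather than the authors' tooling; the measurement values are random placeholders, and the three colour components are an assumption about how the colour parameters make up the 11 inputs).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Sketch: classify barley varieties from the physical properties listed above.
feature_names = [
    "thousand_kernel_weight", "geometric_mean_diameter", "sphericity",
    "kernel_volume", "surface_area", "bulk_density", "true_density",
    "porosity", "colour_L", "colour_a", "colour_b",          # 11 inputs (assumed split of colour)
]
X = np.random.rand(200, len(feature_names))                   # placeholder measurements
y = np.random.randint(0, 8, size=200)                          # 8 varieties

# Two hidden layers, mirroring the 11-input / 2-hidden-layer / 1-output topology
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                                  random_state=0))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```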

Keywords: physical properties, artificial neural networks, barley, classification

Procedia PDF Downloads 169
2485 Of an 80 Gbps Passive Optical Network Using Time and Wavelength Division Multiplexing

Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Faizan Khan, Xiaodong Yang

Abstract:

Internet Service Providers face endless demands for higher bandwidth and data throughput as new services and applications require higher bandwidth. Users want immediate and accurate data delivery. This article focuses on converting old conventional networks into passive optical networks based on time division and wavelength division multiplexing. The main focus of this research is to use a hybrid of time-division multiplexing and wavelength-division multiplexing to improve network efficiency and performance. In this paper, we design an 80 Gbps Passive Optical Network (PON) that meets the requirements of Next Generation PON Stage 2 (NGPON2). According to the Full Service Access Network (FSAN) group, the hybrid of Time and Wavelength Division Multiplexing (TWDM) is considered the best solution for the implementation of NGPON2. To co-exist with or replace the current PON technologies, many TWDM wavelengths can be implemented simultaneously. Eight pairs of wavelengths are multiplexed and then transmitted over 40 km of optical fiber; on the receiving side, they are distributed among 256 users, which shows that the solution is reliable for implementation with an acceptable data rate. From the results, it can be concluded that the overall performance, Quality Factor, and bandwidth of the network are increased, and the Bit Error Rate is minimized by the integration of this approach.

Keywords: bit error rate, fiber to the home, passive optical network, time and wavelength division multiplexing

Procedia PDF Downloads 63
2484 Change Management as a Critical Success Factor in E-Government Initiatives

Authors: Mohammed Alassim

Abstract:

In 2014, a UN survey stated that: "The greatest challenge to the adoption of whole-of-government, which fundamentally rests on increased collaboration, is resistance to change among government actors". Change management has undergone many transformations over the years, both theoretically and practically. When organizations have to implement radical changes, they encounter a plethora of issues which, in most cases, lead to ineffective or inefficient implementation of change; about 70% of change projects fail because of human issues. It has been cited that "most studies still show a 60-70% failure rate for organizational change projects, a statistic that has stayed constant from the 1970s to the present". E-government involves not just technical change but cultural, policy, social and organizational evolution. Managing change and overcoming resistance to change are seen as crucial to the success of e-government projects. Resistance can come from different levels in the organization (top management, middle management or employees at operational levels). There can be many reasons for resistance, including fear of change and insecurity, lack of knowledge, and absence of commitment from management to implement the change. The purpose of this study is to conduct in-depth research to understand the process of change and to identify the critical factors that have led to resistance from employees at different levels (top management, middle management and operational employees) during e-government initiatives in the public sector in Saudi Arabia. The study is based on qualitative and empirical research methods conducted in the public sector in the Kingdom of Saudi Arabia, using triangulation of data methods (interviews, group discussions and document review). This research will contribute significantly to knowledge in this field and will identify the measures that can be taken to reduce resistance to change. Upon analysis, recommendations or a model will be offered to show decision makers in the Saudi public sector how to plan, implement and evaluate change in e-government initiatives via a change management strategy.

Keywords: change management, e-government, managing change, resistance to change

Procedia PDF Downloads 308
2483 Impact Position Method Based on Distributed Structure Multi-Agent Coordination with JADE

Authors: YU Kaijun, Liang Dong, Zhang Yarong, Jin Zhenzhou, Yang Zhaobao

Abstract:

For the impact monitoring of distributed structures, the traditional positioning methods are based on time differences and include the four-point arc positioning method and the triangulation positioning method. In actual operation, however, both methods have errors. In this paper, the Multi-Agent Blackboard Coordination Principle is used to combine the two methods. The fusion steps are: (1) the four-point arc locating agent calculates the initial point and records it in the blackboard module; (2) the triangulation agent obtains its initial parameters by accessing the initial point; (3) the triangulation agent constantly accesses the blackboard module to update its initial parameters, and it also logs its calculated points into the blackboard; (4) when the subsequent calculated point and the initial calculated point are within the allowable error, the whole coordination fusion process is finished. This paper presents a Multi-Agent collaboration method whose agent framework is JADE. The JADE platform consists of several agent containers, with the agents running in each container. Because of JADE's mature management and debugging tools, it is very convenient to deal with complex data in a large structure. Finally, based on the data in JADE, the results show that the impact location method based on Multi-Agent coordination fusion can reduce the error of the two methods.
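
The coordination loop in steps (1)-(4) can be sketched conceptually as below (plain Python rather than the JADE framework; the two locator functions are placeholders standing in for the four-point arc and triangulation estimators).

```python
import math

class Blackboard:
    def __init__(self):
        self.points = []          # shared estimates written by the agents
    def post(self, point):
        self.points.append(point)
    def latest(self):
        return self.points[-1]

def four_point_arc_estimate():
    return (1.20, 0.80)           # placeholder initial estimate

def triangulation_refine(seed):
    # placeholder refinement: nudge the seed toward a fixed solution
    target = (1.00, 1.00)
    return tuple(s + 0.5 * (t - s) for s, t in zip(seed, target))

def fuse(tolerance=1e-3, max_iter=50):
    bb = Blackboard()
    bb.post(four_point_arc_estimate())                 # step (1)
    for _ in range(max_iter):                          # steps (2)-(3)
        prev = bb.latest()
        bb.post(triangulation_refine(prev))
        if math.dist(bb.latest(), prev) < tolerance:   # step (4)
            break
    return bb.latest()

print("Fused impact position:", fuse())
```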

Keywords: impact monitoring, structural health monitoring(SHM), multi-agent system(MAS), black-board coordination, JADE

Procedia PDF Downloads 170
2482 Benchmarking of Petroleum Tanker Discharge Operations at a Nigerian Coastal Terminal and Jetty Facilitates Optimization of the Ship–Shore Interface

Authors: Bassey O. Bassey

Abstract:

Benchmarking has progressively become entrenched as a requisite activity for process improvement and enhancing service delivery at petroleum jetties and terminals, most especially during tanker discharge operations at the ship-shore interface, as avoidable delays result in extra operating costs, non-productive time, high demurrage payments and ultimate product scarcity. The jetty and terminal in focus had been operational for 3 and 8 years respectively, with proper operational and logistic records maintained to evaluate their progress over time in order to plan and implement modifications and review of procedures for greater technical and economic efficiency. Regular and emergency staff meetings were held on a team, departmental and company-wide basis to progressively address major challenges that were encountered during each operation. The process and outcome of the resultant collectively planned changes carried out within the past two years form the basis of this paper, which mirrors the initiatives effected to enhance operational and maintenance excellence at the affected facilities. Operational modifications included a second cargo receipt line designated for gasoline, product loss control at jetty and shore ends, enhanced product recovery and quality control, and revival of terminal-jetty backloading operations. Logistic improvements were the incorporation of an internal logistics firm and shipping agency, fast tracking of discharge procedures for tankers, optimization of the tank vessel selection process, and third party product receipt and throughput. Maintenance excellence was achieved through construction of two new lay barges and refurbishment of the existing one; revamping of the existing booster pump and purchasing of a modern one as reserve capacity; extension of Phase 1 of the jetty to accommodate two vessels and construction of Phase 2 for two more vessels; regular inspection, draining, drying and replacement of cargo hoses; a corrosion management program for all process facilities; and an improved, properly planned and documented maintenance culture. Safety, environmental and security compliance were enhanced by installing state-of-the-art firefighting facilities and equipment, seawater intake line construction as backup for the borehole at the terminal, remediation of the shoreline and marine structures, modern spill containment equipment, improved housekeeping and accident prevention practices, and installation of high-technology security enhancements, among others. The end result has been observed over the past two years to include improved tanker turnaround time, higher turnover on product sales, consistent product availability, greater indigenous human capacity utilisation by way of direct hires and contracts, as well as customer loyalty. The lessons learnt from this exercise would, therefore, serve as a model to be adapted by other operators of similar facilities, contractors, academics and consultants in a bid to deliver greater sustainability and profitability of operations at the ship-shore interface in this strategic industry.

Keywords: benchmarking, optimisation, petroleum jetty, petroleum terminal

Procedia PDF Downloads 356
2481 Relationship between Electricity Consumption and Economic Growth: Evidence from Nigeria (1971-2012)

Authors: N. E Okoligwe, Okezie A. Ihugba

Abstract:

Few scholars disagree that electricity consumption is an important supporting factor for economic growth. However, according to previous studies, the relationship between electricity consumption and economic growth manifests differently in different countries. This paper examines the causal relationship between electricity consumption and economic growth for Nigeria. In an attempt to do this, the paper tests the validity of the modernization or dependence hypothesis by employing various econometric tools, such as the Augmented Dickey-Fuller (ADF) test, the Johansen co-integration test, the Error Correction Mechanism (ECM) and the Granger causality test, on time series data from 1971-2012. Granger causality is found to run neither from electricity consumption to real GDP nor from GDP to electricity consumption during the period of study. The null hypothesis is accepted at the 5 per cent level of significance, since the probability values (0.2251 and 0.8251) are greater than the 5 per cent level of significance; both variables are probably determined by other factors such as the increase in urban population, the unemployment rate and the number of Nigerians that benefit from the increase in GDP, while the increase in electricity demand is not determined by the increase in GDP (income) over the period of study because electricity demand has always been greater than consumption. Consequently, policy makers in Nigeria should, in the early stages of reconstruction, place priority on capacity additions and infrastructure development in the electric power sector, as this would drive sustainable economic growth in Nigeria.
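
A minimal sketch of the unit-root and causality tests named above is shown below, using statsmodels on placeholder series (the study's actual data cover 1971-2012 and are not reproduced here).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

# Illustrative trending series standing in for real GDP and electricity consumption
rng = np.random.default_rng(0)
n = 42
gdp = np.cumsum(rng.normal(2.0, 1.0, n))
elec = np.cumsum(rng.normal(1.5, 1.0, n))
df = pd.DataFrame({"gdp": gdp, "elec": elec})

# 1) Unit-root (ADF) test on each series
for col in df:
    stat, pvalue, *_ = adfuller(df[col])
    print(f"ADF {col}: p-value = {pvalue:.4f}")

# 2) Granger causality: does electricity consumption help predict GDP?
# The null hypothesis is "no Granger causality"; a large p-value means it is not rejected.
grangercausalitytests(df[["gdp", "elec"]], maxlag=2)
```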

Keywords: economic growth, electricity consumption, error correction mechanism, granger causality test

Procedia PDF Downloads 300
2480 Modeling and Design of E-mode GaN High Electron Mobility Transistors

Authors: Samson Mil'shtein, Dhawal Asthana, Benjamin Sullivan

Abstract:

The wide energy gap of GaN is the major parameter justifying the design and fabrication of high-power electronic components made of this material. However, the existence of a piezoelectric sheet charge at the AlGaN/GaN interface complicates the control of carrier injection into the intrinsic channel of GaN HEMTs (High Electron Mobility Transistors). As a result, most of the transistors created as R&D prototypes and all of the designs used for mass production are D-mode devices, which introduces challenges in the design of integrated circuits. This research presents the design and modeling of an E-mode GaN HEMT with a very low turn-on voltage. The proposed device includes two critical elements allowing the transistor to achieve zero conductance across the channel when Vg = 0 V. The first is the inclusion of an extremely thin, 2.5 nm intrinsic Ga₀.₇₄Al₀.₂₆N spacer layer. The added spacer layer does not create piezoelectric strain but rather elastically follows the variations of the crystal structure of the adjacent GaN channel. The second important factor is the design of a gate metal with a high work function. The use of a gate metal (Ni in this research) with a work function greater than 5.3 eV, positioned on top of n-type doped (Nd = 10¹⁷ cm⁻³) Ga₀.₇₄Al₀.₂₆N, creates the necessary built-in potential, which controls the injection of electrons into the intrinsic channel as the gate voltage is increased. The 5 µm long transistor, with a 0.18 µm long gate and a channel width of 30 µm, operates at Vd = 10 V. At Vg = 1 V, the device reaches a maximum drain current of 0.6 mA, which indicates a high current density. The presented device is operational at frequencies greater than 10 GHz and exhibits a stable transconductance over the full range of operational gate voltages.

Keywords: compound semiconductors, device modeling, enhancement mode HEMT, gallium nitride

Procedia PDF Downloads 252
2479 Research on Pilot Sequence Design Method of Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing System Based on High Power Joint Criterion

Authors: Linyu Wang, Jiahui Ma, Jianhong Xiang, Hanyu Jiang

Abstract:

For the pilot design of the sparse channel estimation model in Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems, the observation matrices constructed according to the matrix cross-correlation criterion, the total correlation criterion and other optimization criteria are not optimal, resulting in inaccurate channel estimation and a high bit error rate at the receiver. This paper proposes a pilot design method combining the high-power sum and high-power variance criteria, which can estimate the channel more accurately. First, the pilot insertion positions are designed according to the high-power variance criterion under the condition of equal power. Then, according to the high-power sum criterion, the pilot power allocation is converted into a cone programming problem, and the power allocation is carried out. Finally, the optimal pilot is determined by calculating the weighted sum of the high-power sum and the high-power variance. Compared with the traditional pilot under the same conditions, the constructed MIMO-OFDM system using the optimal pilot for channel estimation obtains a gain of 6-7 dB in communication bit error rate performance.

Keywords: MIMO-OFDM, pilot optimization, compressed sensing, channel estimation

Procedia PDF Downloads 137
2478 Case Study of Sexual Violence Victim Assessment in Semarang Regency

Authors: Sujana T, Kurniasari MD, Ayakeding AM

Abstract:

Background: Sexual violence is one of the forms of violence with a high incidence in Indonesia. Purpose: This research aims to describe the implementation of sexual violence victim assessment in Semarang Regency. Method: This research is a qualitative study with an embedded single case study design. Data were analyzed with two units of analysis. The first unit of analysis is the victim's examiner with a minimum of one year of work experience; a semi-structured interview method was used to obtain the data. The second unit of analysis is related documents; the data were taken by observing the pathway and description of every document and how it supported each implementation of the assessment. Results: This study resulted in three themes. The first theme is that the assessment of sexual violence in Semarang Regency has been standardized. The laws of the Republic of Indonesia regulate the handling of victims of sexual violence in outline. Victims of sexual violence can be dealt with by the police, the Integrated Service Center for Women and Children Empowerment and the Regional General Hospital. Each examination site has different standard operational procedures for dealing with victims of sexual violence. Cooperation with family and witnesses is also required in the review process to obtain accurate results and evidence. The second theme is that there are inhibiting factors in the assessment process. Victims sometimes feel embarrassed and reluctant to recount the chronological events during reporting, so the examining officer should be able to approach and build trust to convince the victim to cooperate. The third theme is that there are other things to consider in the process of assessing victims of sexual violence: ensuring implementation in accordance with the applicable standard operational procedures, providing exclusive examination rooms, counseling, and safeguarding the privacy of victims.

Keywords: assessment, case study, Semarang regency, sexual violence

Procedia PDF Downloads 131
2477 Usage of the Point Analysis Algorithm (SANN) on Drought Analysis

Authors: Khosro Shafie Motlaghi, Amir Reza Salemian

Abstract:

In arid and semi-arid regions like our country, evapotranspiration is the greatest portion of the water resource. Therefore, knowledge of its changes and of other climate parameters plays an important role in the planning, development, and management of water resources. In this study, the trends of long-term changes in evapotranspiration (ET0), average temperature, and monthly rainfall were tested. To do so, all synoptic stations in Iran were classified according to the De Martonne climate classification. The present research was done in the semi-arid climate of Iran, in which 14 synoptic stations with a 30-year statistical period were investigated with three methods: least squares error, Mann-Kendall, and Wald-Wolfowitz. Evapotranspiration was calculated using the FAO-Penman method. The results of the investigation over the statistical period show that the trend of the evapotranspiration parameter is positive for 24 percent of stations, negative for 2 percent, and without any trend for 47 percent. Similarly, the trend of the temperature parameter is positive for 22 percent of stations, negative for 19 percent, and without any trend for 64 percent. The results of the rainfall trend show that the amount of rainfall in most stations did not exhibit a meaningful trend. The results of the Mann-Kendall method were similar to those of the least squares error method. Based on the acquired results, it can be concluded that in future years some regions will face an increase in temperature and evapotranspiration.
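
A minimal sketch of the Mann-Kendall trend test mentioned above is given below (pure Python/NumPy, with no ties correction; the annual ET0 series is a synthetic placeholder, not station data from the study).

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs of all pairwise forward differences
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    # Variance of S (no ties assumed for simplicity)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))   # two-sided p-value
    return z, p

annual_et0 = 1200 + 1.5 * np.arange(30) + np.random.default_rng(1).normal(0, 20, 30)
z, p = mann_kendall(annual_et0)
print(f"Z = {z:.2f}, p = {p:.4f}  ->", "significant trend" if p < 0.05 else "no trend")
```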

Keywords: analysis, algorithm, SANN, ET0

Procedia PDF Downloads 288
2476 Error Analysis of Pronunciation of French by Sinhala Speaking Learners

Authors: Chandeera Gunawardena

Abstract:

The present research analyzes the pronunciation errors encountered by thirty Sinhala speaking learners of French, on the assumption that the pronunciation errors are systematic and reflect the interference of the learners' native language. The thirty participants were selected using a random sampling method. At the time of the study, the subjects were studying French as a foreign language for their Bachelor of Arts degree at the University of Kelaniya, Sri Lanka. The participants were from a homogeneous linguistic background: all spoke the same native language (Sinhala), had completed their secondary education in the Sinhala medium, and during it had also learnt French as a foreign language. A battery-operated audio tape recorder and 120-minute blank cassettes were used for recording. A list of 60 words representing all French phonemes was used to diagnose pronunciation difficulties. Before the recording process commenced, the subjects were requested to familiarize themselves with the words by reading them several times. The recording was conducted individually in a quiet classroom, and each recording took approximately fifteen minutes. Each subject was required to read at a normal speed. After the completion of the recordings, they were replayed to identify common errors, which were immediately transcribed using the International Phonetic Alphabet. Results show that Sinhala speaking learners face problems with French nasal vowels and French initial consonant clusters. The learners also exhibit errors which occur because of interference from their second language (English).

Keywords: error analysis, pronunciation difficulties, pronunciation errors, Sinhala speaking learners of French

Procedia PDF Downloads 202
2475 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, the information on the uncertain input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature. Different uncertainty representation approaches result in different outputs, and some of the approaches might result in a better estimation of the system response than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties in them, from the responses (output) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first-order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable, with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter might be aleatory, but sufficient data might not be available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box, with the physical parameter having an aleatory uncertainty, while the parameters prescribing its mathematical model are subjected to epistemic uncertainties. Each of the parameters of the random variable is an unknown element of a known interval, and this uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, the sampling is not exhaustive in the sampling-based methodology. That is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
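
To illustrate the first methodology in its simplest form, the sketch below contrasts first-order (delta-method) error analysis with Monte Carlo sampling on a toy response function (the model and parameter values are illustrative only, not the NASA Langley challenge model).

```python
import numpy as np

def g(x1, x2):
    return x1 ** 2 + 3.0 * x2

mu = np.array([2.0, 1.0])        # means of the aleatory inputs
sigma = np.array([0.1, 0.2])     # standard deviations

# First-order (delta-method) approximation: var(y) ~= sum (dg/dxi)^2 * var(xi)
grad = np.array([2.0 * mu[0], 3.0])             # gradient of g at the mean
y_mean_fo = g(*mu)
y_std_fo = np.sqrt(np.sum((grad * sigma) ** 2))

# Monte Carlo propagation
rng = np.random.default_rng(0)
samples = rng.normal(mu, sigma, size=(100_000, 2))
y_mc = g(samples[:, 0], samples[:, 1])

print(f"first-order: mean={y_mean_fo:.3f}, std={y_std_fo:.3f}")
print(f"Monte Carlo: mean={y_mc.mean():.3f}, std={y_mc.std():.3f}")
```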

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 230
2474 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concept of distance and time has been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surfaces rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft; according to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was next improved using the proposed methodology. To do that, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the FCOM prediction error for the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
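
A minimal sketch of the adaptive lookup table idea is given below (the table layout, gain and fuel-flow numbers are illustrative assumptions, not the Cessna Citation X data): each in-flight measurement nudges the nearest table node by a fraction of the prediction error, so the table gradually converges toward the aircraft's actual behaviour.

```python
import numpy as np

altitudes = np.array([30_000.0, 35_000.0, 40_000.0])     # ft
machs = np.array([0.70, 0.75, 0.80])
fuel_flow_table = np.array([[1400.0, 1500.0, 1620.0],     # kg/h, initial FCOM-like guess
                            [1300.0, 1390.0, 1500.0],
                            [1210.0, 1290.0, 1400.0]])

def predict(alt, mach):
    """Bilinear interpolation in the lookup table."""
    i = np.clip(np.searchsorted(altitudes, alt) - 1, 0, len(altitudes) - 2)
    j = np.clip(np.searchsorted(machs, mach) - 1, 0, len(machs) - 2)
    ta = (alt - altitudes[i]) / (altitudes[i + 1] - altitudes[i])
    tm = (mach - machs[j]) / (machs[j + 1] - machs[j])
    f = fuel_flow_table
    return ((1 - ta) * (1 - tm) * f[i, j] + ta * (1 - tm) * f[i + 1, j]
            + (1 - ta) * tm * f[i, j + 1] + ta * tm * f[i + 1, j + 1])

def update(alt, mach, measured, gain=0.3):
    """Move the nearest table node a fraction of the prediction error."""
    i = np.argmin(np.abs(altitudes - alt))
    j = np.argmin(np.abs(machs - mach))
    fuel_flow_table[i, j] += gain * (measured - predict(alt, mach))

# Each in-flight sample tightens the table around the measured behaviour.
for measured in (1455.0, 1452.0, 1450.0):
    update(34_000.0, 0.74, measured)
    print(f"prediction after update: {predict(34_000.0, 0.74):.1f} kg/h")
```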

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 256
2473 An Application of Vector Error Correction Model to Assess Financial Innovation Impact on Economic Growth of Bangladesh

Authors: Md. Qamruzzaman, Wei Jianguo

Abstract:

Over the decades, it has been observed that financial development, through financial innovation, not only accelerates the development of an efficient and effective financial system but also acts as a catalyst in the economic development process. In this study, we explore how financial innovation drives economic growth in Bangladesh by using a Vector Error Correction Model (VECM) for the period 1990-2014. The cointegration test confirms the existence of a long-run association between financial innovation and economic growth. To investigate directional causality, we apply the Granger causality test; estimation shows that long-run growth is affected by capital flow from non-bank financial institutions and by inflation in the economy, but changes in the growth rate do not have any impact on capital flow or the level of inflation in the long run. In contrast, growth and market capitalization, as well as market capitalization and capital flow, confirm the feedback hypothesis. Variance decomposition suggests that any innovation in the financial sector can cause fluctuations in GDP in both the long run and the short run. Financial innovation promotes efficiency and reduces the cost of financial transactions in the financial system, and can boost the economic development process. The study proposes two policy recommendations for further development. First, an innovation-friendly financial policy should be formulated to encourage the adoption and diffusion of financial innovation in the financial system. Second, the operation of the financial market and capital market should be regulated through the implementation of rules and regulations to create a conducive environment.
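
A minimal sketch of the Johansen cointegration test followed by VECM estimation, as used above, is shown below with statsmodels on random placeholder series (the study uses annual data for 1990-2014 and different variables).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(42)
n = 25
common_trend = np.cumsum(rng.normal(0, 1, n))
data = pd.DataFrame({
    "gdp": common_trend + rng.normal(0, 0.3, n),
    "capital_flow": 0.8 * common_trend + rng.normal(0, 0.3, n),
    "inflation": rng.normal(5, 1, n),
})

# Johansen test for the number of cointegrating relations
joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", joh.lr1)
print("95% critical values:", joh.cvt[:, 1])

# VECM with one cointegrating relation
model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co")
res = model.fit()
print(res.summary())
```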

Keywords: financial innovation, economic growth, GDP, financial institution, VECM

Procedia PDF Downloads 260
2472 The Effect of Vertical Integration on Operational Performance: Evaluating Physician Employment in Hospitals

Authors: Gary Young, David Zepeda, Gilbert Nyaga

Abstract:

This study investigated whether vertical integration of hospitals and physicians is associated with better care for patients with cardiac conditions. A dramatic change in the U.S. hospital industry is the integration of hospitals and physicians through hospital acquisition of physician practices. Yet, there is little evidence regarding whether this form of vertical integration leads to better operational performance of hospitals. The study was conducted as an observational investigation based on a pooled, cross-sectional database. The study sample comprised over hospitals in the State of California. The time frame for the study was 2010 to 2012. The key performance measure was hospitals' degree of compliance with performance criteria set out by the federal government for managing patients with cardiac conditions. These criteria relate to the types of clinical tests and medications that hospitals should follow for cardiac patients, but hospital compliance requires the cooperation of a hospital's physicians. Data for this measure were obtained from a federal website that presents performance scores for U.S. hospitals. The key independent variable was the percentage of cardiologists that a hospital employs (versus cardiologists who are affiliated but not employed by the hospital). Data for this measure were obtained from the State of California, which requires hospitals to report financial and operational data each year, including numbers of employed physicians. Other characteristics of hospitals (e.g., information technology for cardiac care, volume of cardiac patients) were also evaluated as possible complements or substitutes for physician employment by hospitals. Additional sources of data included the American Hospital Association and the U.S. Census. Empirical models were estimated with generalized estimating equations (GEE). Findings suggest that physician employment is positively associated with better hospital performance for cardiac care. However, findings also suggest that information technology is a substitute for physician employment.
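
A minimal sketch of a GEE specification of this kind is shown below (statsmodels, with synthetic placeholder data; the variable names are assumptions, not the authors' dataset).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "hospital_id": rng.integers(0, 100, n),            # repeated observations per hospital
    "pct_employed_cardiologists": rng.uniform(0, 100, n),
    "cardiac_it": rng.integers(0, 2, n),
    "cardiac_volume": rng.poisson(500, n),
    "compliant": rng.integers(0, 2, n),                 # meets federal cardiac-care criteria
})

model = sm.GEE.from_formula(
    "compliant ~ pct_employed_cardiologists + cardiac_it + cardiac_volume",
    groups="hospital_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```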

Keywords: physician employment, hospitals, verical integration, cardiac care

Procedia PDF Downloads 390
2471 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece

Authors: Panagiotis Karadimos, Leonidas Anthopoulos

Abstract:

Predicting the actual cost and duration of construction projects is a continuous and persistent problem for the construction sector. This paper addresses this problem with modern methods and data available from past public construction projects. Thirty-nine bridge projects constructed in Greece, with a similar type of available data, were examined. Considering each project's attributes together with the actual cost and the actual duration, a correlation analysis is performed and the most appropriate predictive project variables are defined. Additionally, the most efficient subgroup of variables is selected with the use of the WEKA application, through its attribute selection function. The selected variables are then used as input neurons for the neural network models. For constructing the neural network models, the application FANN Tool is used. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the amount of deck concrete.
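
As a rough illustration of regressing actual cost on the two retained predictors, the sketch below uses a small scikit-learn network instead of FANN Tool; the 39 data points are random placeholders, not the Greek bridge projects.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
budgeted_cost = rng.uniform(0.5, 5.0, 39)          # placeholder, million EUR
deck_concrete = rng.uniform(100, 2000, 39)         # placeholder, m^3
actual_cost = 1.1 * budgeted_cost + 0.0004 * deck_concrete + rng.normal(0, 0.05, 39)

X = np.column_stack([budgeted_cost, deck_concrete])
model = make_pipeline(MinMaxScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, actual_cost)
print("MSE:", mean_squared_error(actual_cost, model.predict(X)))
```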

Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN TOOL, WEKA

Procedia PDF Downloads 124
2470 Development of a Forecasting System and Reliable Sensors for River Bed Degradation and Bridge Pier Scouring

Authors: Fong-Zuo Lee, Jihn-Sung Lai, Yung-Bin Lin, Xiaoqin Liu, Kuo-Chun Chang, Zhi-Xian Yang, Wen-Dar Guo, Jian-Hao Hong

Abstract:

In recent years, climate change has been a major factor in increasing rainfall intensity and the frequency of extreme rainfall. The increased rainfall intensity and extreme rainfall frequency will increase the probability of flash floods with abundant sediment transport in a river basin. Floods caused by heavy rainfall may damage bridges, embankments and hydraulic works, and cause other disasters. Therefore, the foundation scouring of bridge piers, embankments and spur dikes caused by floods has been a severe problem worldwide. This severe problem has occurred in many East Asian countries, such as Taiwan and Japan, because these areas suffer typhoons, earthquakes, and flood events every year. Because river morphology results from the complex interaction between the flow patterns caused by hydraulic works and sediment transport, it is extremely difficult to develop a reliable and durable sensor to measure river bed degradation and bridge pier scouring. Therefore, an innovative scour monitoring sensor using vibration-based Micro-Electro Mechanical Systems (MEMS) was developed. This vibration-based MEMS sensor was packaged inside a stainless sphere, properly protected by fully filled resin, and measures free vibration signals to detect scouring/deposition processes at the bridge pier. In addition, a user-friendly operational system that includes a rainfall-runoff model and one-dimensional and two-dimensional numerical models is developed, and the applicability of sediment transport equations and local scour formulas for bridge piers is examined in this research. The operational system produces simulation results for flood events, including the elevation changes of river bed erosion near the specified bridge pier and the erosion depth around bridge piers. In addition, the system is developed for easy operation with an integrated interface that allows users to calibrate and verify the numerical models and to display simulation results through the interface for comparison with the scour monitoring sensors. To forecast the erosion depth of the river bed and the main bridge pier in the study area, the system also connects to rainfall forecast data from the Taiwan Typhoon and Flood Research Institute. The results provide available information in advance to the management units responsible for river and bridge engineering.
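
As an example of the local scour formulas referred to above, the sketch below evaluates the HEC-18 (CSU) pier scour equation in a simplified form; this is not necessarily the formula adopted in the study, and the flow values are placeholders.

```python
import math

def hec18_scour_depth(y1, v1, a, k1=1.0, k2=1.0, k3=1.1, g=9.81):
    """Local scour depth ys [m] at a pier (simplified HEC-18 / CSU form).

    y1: approach flow depth [m], v1: approach velocity [m/s], a: pier width [m],
    k1-k3: correction factors for nose shape, attack angle and bed condition.
    """
    fr1 = v1 / math.sqrt(g * y1)                     # approach Froude number
    return 2.0 * y1 * k1 * k2 * k3 * (a / y1) ** 0.65 * fr1 ** 0.43

print(f"scour depth: {hec18_scour_depth(y1=3.0, v1=2.5, a=1.5):.2f} m")
```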

Keywords: flash flood, river bed degradation, bridge pier scouring, a friendly operational system

Procedia PDF Downloads 185
2469 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating for a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. The National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention are two of the U.S. Federal Government agencies from which this study uses environmental data. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, many predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values to make sure the data are ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. During the third phase of the research, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared, with several optimization techniques, such as Harmony Search and the Genetic Algorithm, used for feature selection. In the fourth stage, the models' performance is evaluated using the Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) as metrics. The goal is to select an optimization strategy with the fewest errors, the lowest cost, the greatest productivity, or the maximum potential results. Optimization is widely employed in a variety of industries, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on Harmony Search and an integrated Genetic Algorithm is introduced for input feature selection, and it shows an important improvement in the models' predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
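
A minimal sketch of the model-comparison step (phases three and four) is given below, with synthetic environmental features standing in for the NOAA/CDC data and MAE/RMSE as the evaluation metrics.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 4))                       # temp, precip, vegetation, humidity
weekly_cases = 50 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 3, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, weekly_cases, random_state=0)
models = {
    "Huber": HuberRegressor(),
    "GBR": GradientBoostingRegressor(random_state=0),
    "SVR": SVR(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mae = mean_absolute_error(y_te, pred)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name}: MAE={mae:.2f}, RMSE={rmse:.2f}")
```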

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 49
2468 Performance Analysis of Geophysical Database Referenced Navigation: The Combination of Gravity Gradient and Terrain Using Extended Kalman Filter

Authors: Jisun Lee, Jay Hyoun Kwon

Abstract:

As an alternative way to compensate for the INS (inertial navigation system) error in non-GNSS (Global Navigation Satellite System) environments, geophysical database referenced navigation is being studied. In this study, gravity gradient and terrain data were combined to complement the weakness of a single geophysical data source as well as to improve the stability of the positioning. The main process to compensate for the INS error using the geophysical database was constructed on the basis of the EKF (Extended Kalman Filter). In detail, two types of combination methods, a centralized and a decentralized filter, were applied to check the pros and cons of each algorithm and to find more robust results. The performance of each navigation algorithm was evaluated in simulations by supposing that the aircraft flies with a precise geophysical DB and sensors along nine different trajectories. In particular, the results were compared to those from navigation referenced to a single geophysical database to check the improvement due to the combination of the heterogeneous geophysical databases. It was found that the overall navigation performance was improved, but not all trajectories generated better navigation results by the combination of gravity gradient with terrain data. Also, it was found that the centralized filter generally showed more stable results. This is because the weight allocation for the decentralized filter could not be optimized due to the local inconsistency of the geophysical data. In the future, switching between geophysical data sources or combining different navigation algorithms will be necessary to obtain more robust navigation results.
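
A minimal sketch of a single EKF measurement update of the kind used in such filters is given below (a scalar geophysical measurement against a three-state toy error model; the matrices and values are illustrative, not the study's navigation filter).

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """x: state estimate, P: covariance, z: measurement,
    h: nonlinear measurement function, H: its Jacobian at x, R: measurement noise."""
    y = z - h(x)                                   # innovation
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + (K @ y).ravel()
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# 3-state toy example: position error (north, east) and altitude error
x = np.zeros(3)
P = np.diag([25.0, 25.0, 10.0])
h = lambda x: np.array([x[2]])                     # predicted database-referenced residual
H = np.array([[0.0, 0.0, 1.0]])
z = np.array([1.8])                                # observed residual from the database
R = np.array([[0.5]])

x, P = ekf_update(x, P, z, h, H, R)
print("updated state:", x)
```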

Keywords: Extended Kalman Filter, geophysical database referenced navigation, gravity gradient, terrain

Procedia PDF Downloads 337
2467 Leadership and Management Strategies of Sports Administrator in Asia

Authors: Mark Christian Inductivo Siwa, Jesrelle Ormoc Bontuyan

Abstract:

This study was conducted in selected tertiary institutions in Asian countries, namely the Philippines, Thailand, and China, which are top performing countries in the Southeast Asian Games (SEA Games) and the Asian School Games (ASG), also known as the Youth SEA Games, and the Asian Games. The respondents of the study are sports administrators/directors and coaches in the Philippines and Thailand in Southeast Asia, and in China. This study has generated a progressive sports operational model of sports leadership and management in selected universities in Asia. The study utilized mixed-methods research, a methodology that involves collecting, analyzing and integrating quantitative (e.g., experiments, surveys) and qualitative (e.g., focus groups, interviews) research; this integration provides a better understanding of the research problem than either approach alone. This study particularly employed the explanatory sequential design of mixed methods, which involves two phases: the quantitative phase, involving the collection and analysis of quantitative data, followed by the qualitative phase, involving the collection and analysis of qualitative data. The study prioritizes the quantitative data, and the findings are followed up during the interpretation phase with the qualitative data, which help explain or build upon the initial quantitative results. In Phase I, the researcher began with the collection and analysis of the quantitative data, placing greater emphasis on quantitative methods and particularly employing surveys with the coaches and sports directors of the three selected universities in Asia. In Phase II, the researcher subsequently collected and analyzed the qualitative data, obtained through interviews with the sports directors, to follow from or connect to the results of the quantitative phase. The study followed the data analysis spiral so that the researcher could follow up on or explain the quantitative results, engaging in the process of moving in analytic circles. Based on each school's mission and vision, sports leadership and management consistently followed the key factors to take into account when leading the organization and managing the process: formulating objectives/goals, budget, equipment care and maintenance, facilities, the training matrix, and consideration. Also, sports management demonstrates the need for development in terms of the upkeep and care of equipment as well as athlete funding. The development of sports management goals, sports facilities and equipment, as well as improvements in training, consideration, and incentives, should also include a maintenance plan. The study concluded with a progressive sports operational model created based on the results of the study.

Keywords: sports leadership and management, formulating objectives, budget, equipment care and maintenance, training, consideration, incentives, progressive sports operational model

Procedia PDF Downloads 81
2466 Novel Animal Drawn Wheel-Axle Mechanism Actuated Knapsack Boom Sprayer

Authors: Ibrahim O. Abdulmalik, Michael C. Amonye, Mahdi Makoyo

Abstract:

The manual knapsack sprayer is the most popular means of farm spraying in Nigeria, but it has its limitations. Apart from human fatigue, which leads to unsteady walking steps, their field capacities are small: they barely cover about 0.2 hectare per hour. Their small swath implies that a sizeable farm would take several days to cover. Weather changes are erratic, and often it is desired to spray a large farm within hours or a few days for an even effect, uniformity, and to avoid adverse weather interference. It is also often required that a large farm be covered within a short period to avoid re-emergence of weeds before crop emergence. Deployment of many knapsack operators to large farms has not been successful. Human error in taking equally spaced swaths usually results in overdosage at overlaps and in unapplied areas at swath edges. Large farm spraying requires boom equipment with a larger swath, which assures reduced error in swath overlaps and spraying within the shortest possible time. Tractor boom sprayers would readily overcome these problems and achieve greater coverage, but they are not available in the country. Tractor hire for cultivation is very costly, with an attendant lack of spare parts and specialized maintenance technicians, so farmers find it difficult to engage tractors for cultivation and would avoid considering the employment of a tractor boom sprayer. Animal traction in farming is predominant in Nigeria, especially in the Northern part of the country. Development of boom sprayers drawn by work animals therefore implies the maximization of animal utilization in farming. The Hydraulic Equipment Development Institute, Kano, in keeping with its mandate of targeted R&D in hydraulic and pneumatic systems, has developed an Animal Drawn Knapsack Boom Sprayer with four nozzles that uses the axle mechanism of a two-wheeled cart to actuate the piston pumps of two knapsack sprayers, in line with the country's demand for appropriate technology. It is hoped that the introduction of this novel contrivance will enhance crop protection practice and lead to greater crop and food production in Nigeria.

Keywords: boom, knapsack, farm, sprayer, wheel axle

Procedia PDF Downloads 278
2465 Challenges in Achieving Profitability for MRO Companies in the Aviation Industry: An Analytical Approach

Authors: Nur Sahver Uslu, Ali̇ Hakan Büyüklü

Abstract:

Maintenance, Repair, and Overhaul (MRO) costs are significant in the aviation industry. On the other hand, companies that provide MRO services to the aviation industry but are not dominant in the sector need to determine the right strategies for sustainable profitability in a competitive environment. This study examined real operational data from a small and medium-sized enterprise (SME) MRO company, where analytical methods are not widely applied. The company's customers were divided into two categories, airline companies and non-airline companies; the variables that best explained profitability were analyzed with logistic regression for each category, and the results were compared. First, data reduction was applied to the transformed variables that went through the data cleaning and preparation stages, and the variables to be included in the model were decided. The misclassification rates for the logistic regression results concerning both customer categories are similar, indicating consistent model performance across different segments. A lower profit margin is obtained from airline customers, which can be explained by the variables part description, time to quotation (TTQ), turnaround time (TAT), manager, part cost, and labour cost. The higher profit margin obtained from non-airline customers is explained only by the variables part description, part cost, and labour cost. Based on the two models, it can be stated that it is significantly more challenging for the MRO company that is the subject of our study to achieve profitability from airline customers. While operational processes and organizational structure also affect the profit from airline customers, only the type of parts and the costs determine the profit for non-airline customers.
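
A minimal sketch of fitting one logistic regression per customer category and reporting the misclassification rate is given below (scikit-learn, with synthetic placeholder records; the column names loosely follow the variables listed above).

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "ttq_days": rng.uniform(1, 30, n),        # time to quotation
    "tat_days": rng.uniform(5, 60, n),        # turnaround time
    "part_cost": rng.uniform(100, 5000, n),
    "labour_cost": rng.uniform(50, 2000, n),
    "airline_customer": rng.integers(0, 2, n),
})
df["profitable"] = ((df.part_cost + df.labour_cost < 4000)
                    & (df.tat_days < 40)).astype(int)        # placeholder rule

features = ["ttq_days", "tat_days", "part_cost", "labour_cost"]
for label, group in df.groupby("airline_customer"):
    X_tr, X_te, y_tr, y_te = train_test_split(group[features], group["profitable"],
                                              random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    misclass = 1.0 - clf.score(X_te, y_te)
    print(f"{'airline' if label else 'non-airline'} misclassification rate: {misclass:.2%}")
```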

Keywords: aircraft, aircraft components, aviation, data analytics, data science, gini index, maintenance, repair, and overhaul, MRO, logistic regression, profit, variable clustering, variable reduction

Procedia PDF Downloads 12
2464 The Influence of Different Flux Patterns on Magnetic Losses in Electric Machine Cores

Authors: Natheer Alatawneh

Abstract:

The finite element analysis of magnetic fields in electromagnetic devices shows that machine cores experience different flux patterns, including alternating and rotating fields. The rotating fields are generated in different configurations ranging from circular to elliptical, with different ratios between the major and minor axes of the flux locus. Experimental measurements on electrical steel exposed to different flux patterns disclose different magnetic losses in the samples under test. Consequently, electric machines require special attention during the core loss calculation process to account for the flux patterns. In this study, a circular rotational single sheet tester is employed to measure the core losses in an electrical steel sample of M36G29. The sample was exposed to an alternating field, a circular field, and elliptical fields with axis ratios of 0.2, 0.4, 0.6 and 0.8. The measured data were applied to a 6-4 switched reluctance motor at three frequencies of interest to the industry: 60 Hz, 400 Hz, and 1 kHz. The results disclose a high margin of error that may occur during the loss calculations if the flux pattern issue is neglected. The error in different parts of the machine associated with neglecting the flux patterns can be around 50%, 10%, and 2% at 60 Hz, 400 Hz, and 1 kHz, respectively. Future work will focus on the optimization of the machine's geometrical shape, which has a primary effect on the flux pattern, in order to minimize the magnetic losses in machine cores.

Keywords: alternating core losses, electric machines, finite element analysis, rotational core losses

Procedia PDF Downloads 242
2463 Video Compression Using Contourlet Transform

Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan

Abstract:

Video compression is used for channels with limited bandwidth and for storage devices with limited capacity. One of the most popular approaches in video compression is the use of different transforms. The discrete cosine transform is one such method, but it has problems such as blocking, noise and high distortion, which have an inappropriate effect on the compression ratio. The wavelet transform is another approach that balances compression and quality better than cosine transforms, but its ability to represent curve curvature is limited. Because of the importance of compression and the problems of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the proposed method, we used the contourlet transform for video image compression. The contourlet transform can preserve image details better than the previous transforms because it is multi-scale and oriented, and it can capture discontinuities such as edges. In this approach, less data is lost than in previous approaches. The contourlet transform finds the discrete space structure and is useful for representing two-dimensional smooth images. It produces compressed images with a high compression ratio along with texture and edge preservation. Finally, the results show that for the majority of the images, the mean square error and maximum signal-to-noise ratio of the new contourlet-based method are improved compared to the wavelet transform, but for most of the images, the mean square error and maximum signal-to-noise ratio of the cosine transform are better than those of the contourlet-based method.
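
The sketch below shows the two quality metrics used in the comparison, assuming that "maximum signal-to-noise ratio" refers to the peak signal-to-noise ratio (PSNR); the frames are random placeholders rather than real video data.

```python
import numpy as np

def mse(original, compressed):
    return np.mean((original.astype(float) - compressed.astype(float)) ** 2)

def psnr(original, compressed, peak=255.0):
    m = mse(original, compressed)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
reconstructed = np.clip(frame + rng.normal(0, 4, frame.shape), 0, 255).astype(np.uint8)

print(f"MSE  = {mse(frame, reconstructed):.2f}")
print(f"PSNR = {psnr(frame, reconstructed):.2f} dB")
```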

Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform

Procedia PDF Downloads 433
2462 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models

Authors: Suriya

Abstract:

Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution. Therefore, monitoring and predicting PM₂.₅ concentration in UB is of great significance for the health of the local people and for environmental management. To date, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we proposed two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement obtained by using AOD as an input was tested, and the performance of the models was evaluated using the mean absolute error (MAE) and the root mean square error (RMSE). The prediction accuracies of the Bayes-LSTM and CNN-LSTM deep learning models were both improved when AOD was included as an input parameter. The improvement in the prediction accuracy of the CNN-LSTM model was particularly pronounced in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model slightly improved, while the prediction accuracy of the CNN-LSTM model slightly decreased. We propose two novel deep learning models for PM₂.₅ concentration prediction in UB, Bayes-LSTM and CNN-LSTM, pioneering the use of AOD data from H8 and demonstrating that the inclusion of AOD input data improves the performance of both proposed models.
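
A minimal sketch of a CNN-LSTM of the kind described above is shown below (Keras, with random placeholder sequences; real inputs would be hourly windows of H8 AOD, meteorology and PM₂.₅).

```python
import numpy as np
import tensorflow as tf

timesteps, n_features = 24, 6          # 24-hour window, 6 input variables (assumed)
X = np.random.rand(500, timesteps, n_features).astype("float32")
y = np.random.rand(500, 1).astype("float32")        # next-hour PM2.5 (scaled)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)

pred = model.predict(X[:5], verbose=0)
print("sample predictions:", pred.ravel())
```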

Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar

Procedia PDF Downloads 40
2461 Track and Trace Solution on Land Certificate Production: Indonesian Land Certificate

Authors: Adrian Rifqi, Febe Napitupulu, Erdi Hermawan, Edwin Putra, Yang Leprilian

Abstract:

This article focuses on the implementation of an improvement to the production process for the Indonesian land certificate, which is printed by Perum Peruri, a state-owned enterprise. Based on the data obtained, there were several customer complaints about the 2019 land certificate production, and these complaints represent a negative mark against Perum Peruri among its loyal customers. Almost all of the complaints refer to 'defective printouts and a difference between the products in the packaging and the packaging labels, in both type and quantity'. To overcome this problem, we propose an improvement to the production process that focuses on the complaint 'there is a difference between the products in the packaging and the packaging labels'. The improvement to the land certificate production process relies on weighing-scale technology and a QR code on the packaging label. In addition, using a QR code on the packaging label facilitates the tracking of product data. With this method, we aim to reduce to 0% both the mismatch between the products in the packaging and the packaging label (in terms of quantity, type, and certificate number) and the error rate in dispatching land certificates, which are sent to many destinations. We also expect this solution to provide precise data and real-time reports on land certificate production in the near future, so that track and trace can be implemented as the solution for land certificate production.
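
A minimal sketch of how a packaging-label QR payload might be generated and later checked against the packed contents is given below; the JSON field names, the `qrcode` library, and the verification rule are illustrative assumptions, not Perum Peruri's actual system.

```python
import json
import qrcode  # pip install qrcode[pil]

def make_label_payload(batch_id, certificate_type, serial_numbers):
    """Build the JSON payload encoded into the packaging-label QR code."""
    return json.dumps({
        "batch_id": batch_id,
        "type": certificate_type,
        "quantity": len(serial_numbers),
        "serials": serial_numbers,
    })

def verify_package(label_payload, scanned_serials, scanned_type):
    """Compare the label contents against the certificates actually packed."""
    label = json.loads(label_payload)
    return (label["type"] == scanned_type
            and label["quantity"] == len(scanned_serials)
            and sorted(label["serials"]) == sorted(scanned_serials))

payload = make_label_payload("BATCH-0042", "HM", ["SN001", "SN002", "SN003"])
qrcode.make(payload).save("label_qr.png")  # image printed on the packaging label
print(verify_package(payload, ["SN003", "SN001", "SN002"], "HM"))  # True: package matches label
```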

Keywords: land certificates, QR code, track and trace, packaging

Procedia PDF Downloads 150
2460 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System

Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi

Abstract:

Visible light communication (VLC) is a new approach to optical wireless communication proposed to complement the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error rate (BER) of PACE-DCO-OFDM. Results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance than the conventional system without pilot-assisted channel estimation. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm estimates the channel more accurately and achieves better BER performance than the LS-based PACE-DCO-OFDM and the traditional system without PACE. For the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ for LS-PACE, while it is about 2×10⁻¹ for the system without PACE.
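
The sketch below illustrates LS and a simplified LMMSE channel estimate on pilot subcarriers; the exponential channel correlation model, the pilot layout, and the SNR handling are assumptions made for the illustration, not the simulation settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pilots, snr_db = 64, 25
snr = 10 ** (snr_db / 10)

# Assumed exponentially correlated frequency-selective channel on the pilot subcarriers.
idx = np.arange(n_pilots)
R_hh = 0.95 ** np.abs(idx[:, None] - idx[None, :])          # channel correlation matrix
L = np.linalg.cholesky(R_hh)
h = L @ (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2)

x_p = np.ones(n_pilots)                                      # known unit-power pilot symbols
noise = (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2 * snr)
y_p = h * x_p + noise

# LS: divide the received pilots by the known transmitted pilots.
h_ls = y_p / x_p

# Simplified LMMSE: regularise the LS estimate with the channel correlation matrix.
h_lmmse = R_hh @ np.linalg.solve(R_hh + np.eye(n_pilots) / snr, h_ls)

for name, est in [("LS", h_ls), ("LMMSE", h_lmmse)]:
    print(name, "estimation MSE:", np.mean(np.abs(est - h) ** 2))
```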

Keywords: channel estimation, OFDM, pilot-assist, VLC

Procedia PDF Downloads 171
2459 High Accuracy Analytic Approximations for Modified Bessel Functions I₀(x)

Authors: Pablo Martin, Jorge Olivares, Fernando Maass

Abstract:

A method to obtain analytic approximations for special functions of interest in engineering and physics is described here. Each approximate function is valid for every positive value of the variable, and the accuracy is high and increases with the number of parameters to be determined. The general technique is shown through an application to the modified Bessel function of order zero, I₀(x). The form of the approximation and the calculation of its parameters are obtained using the power series and the asymptotic expansion simultaneously. As in the Padé method, rational functions are used, but here they are combined with other elementary functions, such as fractional powers and hyperbolic, trigonometric, and exponential functions. The elementary function is chosen so that the approximate function acts as a bridge between the power series and the asymptotic expansion. In the case of I₀(x), two analytic approximations have already been determined. The simplest one is (1+x²/4)⁻¹/⁴(1+0.24273x²) cosh(x)/(1+0.43023x²). Its parameters were determined using the leading term of the asymptotic expansion and two coefficients of the power series, and the maximum relative error is 0.05. In the second case, two terms of the asymptotic expansion and four coefficients of the power series were used, and the maximum relative error is 0.001 at x≈9.5. Approximations with much higher accuracy will also be shown. In conclusion, a new technique is described for obtaining analytic approximations to functions of interest in the sciences, such that they have high accuracy, are valid for every positive value of the variable, can be integrated and differentiated like ordinary functions, and can be calculated easily even with a basic pocket calculator.
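
Since the abstract gives the simplest approximation explicitly, the sketch below evaluates it against SciPy's modified Bessel function and reports the maximum relative error on a grid of positive values; only the evaluation grid itself is an assumption.

```python
import numpy as np
from scipy.special import i0

def i0_approx(x):
    """Simplest approximation quoted in the abstract (max relative error about 0.05)."""
    x = np.asarray(x, dtype=float)
    return ((1 + x**2 / 4) ** -0.25
            * (1 + 0.24273 * x**2) * np.cosh(x) / (1 + 0.43023 * x**2))

x = np.linspace(1e-6, 20, 2000)               # assumed evaluation grid on the positive axis
rel_err = np.abs(i0_approx(x) - i0(x)) / i0(x)
print("max relative error:", rel_err.max())   # expected to be on the order of 0.05
```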

Keywords: analytic approximations, mathematical-physics applications, quasi-rational functions, special functions

Procedia PDF Downloads 244
2458 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation within the context of fitting a PDRM for banks' audit fees. The estimation of this model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, with a design that assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1,000 times, and the three estimators considered were assessed on the basis of the variance, absolute bias (ABIAS), mean square error (MSE), and root mean square error (RMSE) of the parameter estimates. Eighteen different models under different specified conditions were fitted, and the best-fitting model is that of the within estimator when heteroscedasticity is severe at either zero or a positive serial correlation value. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, which establishes that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
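
A stripped-down version of such a Monte Carlo exercise is sketched below for the within (fixed-effects) estimator alone; the panel dimensions, the linear heteroscedasticity function, and the number of replications are illustrative choices, not the 81 designs studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, beta, n_reps = 30, 10, 1.0, 1000       # illustrative panel dimensions and replications

def two_way_within(z):
    """Two-way within transformation: remove individual means, time means, add grand mean."""
    return z - z.mean(axis=1, keepdims=True) - z.mean(axis=0, keepdims=True) + z.mean()

estimates = np.empty(n_reps)
for r in range(n_reps):
    x = rng.uniform(0, 10, size=(N, T))      # uniformly distributed regressor
    mu = rng.normal(0, 1, size=(N, 1))       # individual effects
    lam = rng.normal(0, 1, size=(1, T))      # time (periodicity) effects
    sigma2 = 0.5 + 0.3 * x                   # assumed linear heteroscedasticity function
    eps = rng.normal(0, np.sqrt(sigma2))     # heteroscedastic idiosyncratic errors
    y = beta * x + mu + lam + eps            # two-way error component model
    xd, yd = two_way_within(x), two_way_within(y)
    estimates[r] = (xd * yd).sum() / (xd ** 2).sum()   # within (fixed-effects) slope

abias = np.abs(estimates.mean() - beta)
mse = np.mean((estimates - beta) ** 2)
print(f"Variance={estimates.var():.5f}  ABIAS={abias:.5f}  MSE={mse:.5f}  RMSE={np.sqrt(mse):.5f}")
```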

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte-Carlo scheme, periodicity

Procedia PDF Downloads 134