Search results for: Implied adjusted volatility
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 276


66 Adjustment and Scale-Up Strategy of Pilot Liquid Fermentation Process of Azotobacter sp.

Authors: G. Quiroga-Cubides, A. Díaz, M. Gómez

Abstract:

The genus Azotobacter has been widely used as a bio-fertilizer due to its significant effects on the stimulation and promotion of plant growth in various agricultural species of commercial interest. In order to obtain a viable cellular concentration, a scale-up strategy for a liquid fermentation process (SmF) with two strains of A. chroococcum (named Ac1 and Ac10) was validated and adjusted at laboratory and pilot scale. A batch fermentation process under previously defined conditions was carried out in an Infors® bioreactor, model Minifors, of 3.5 L, which served as the baseline for this research. To increase process efficiency, the effect of reducing the stirring speed was evaluated in combination with a fed-batch-type fermentation at laboratory scale. To reproduce the efficiency parameters obtained, a scale-up strategy based on geometric and fluid-dynamic similarity was evaluated. According to the analysis of variance, this scale-up strategy had no significant effect on cellular concentration in the laboratory and pilot fermentations (Tukey, p > 0.05). Regarding air consumption, the fermentation process at pilot scale showed a reduction of 23% versus the baseline. The reduction in energy consumption under laboratory and pilot scale conditions was 96.9% compared with the baseline.

Keywords: Azotobacter chroococcum, scale-up, liquid fermentation, fed-batch process.

65 Monitoring and Fault-Recovery Capacity with Waveguide Grating-based Optical Switch over WDM/OCDMA-PON

Authors: Yao-Tang Chang, Chuen-Ching Wang, Shu-Han Hu

Abstract:

In order to provide flexibility as well as survivability over a passive optical network (PON), a new automatic random fault-recovery mechanism based on an arrayed-waveguide-grating-based (AWG-based) optical switch (OSW) is presented. First, a wavelength-division-multiplexing and optical code-division multiple-access (WDM/OCDMA) scheme is configured to meet the varying geographical location requirements between the optical network units (ONUs) and the optical line terminal (OLT). The AWG-based optical switch is designed as a central star-mesh topology to avoid or reduce duplicated redundant elements such as fiber and transceivers. Hence, with a simple monitoring and routing switch algorithm, random fault-recovery capacity is achieved over the bi-directional (up/downstream) WDM/OCDMA scheme. When a distribution fiber (DF) fails or the bit-error rate (BER) is higher than the 10^-9 requirement, the primary/slave AWG-based OSWs are adjusted and controlled dynamically to restore the affected ONU groups via the other working DFs immediately.

Keywords: Random fault recovery mechanism, arrayed-waveguide-grating-based optical switch (AWG-based OSW), wavelength-division multiplexing and optical code-division multiple-access (WDM/OCDMA).

64 The Moderation Effect of Smart Phone Addiction in Relationship between Self-Leadership and Innovative Behavior

Authors: Gi-Ryun Park, Gye-Wan Moon, Dong-Hoon Yang

Abstract:

This study aims to explore the positive effect of self-leadership on innovative behavior that has been demonstrated in existing research, and to understand the moderating effect of smartphone addiction, which has recently become an issue in Korea. A convenience sample was drawn from college students attending four colleges located in Daegu. A total of 210 questionnaires using 5-point Likert scales were distributed to college students, of which 200 were collected and used as the final analysis data. Both correlation analysis and regression analysis were carried out on the questionnaire data using SPSS 20.0. As a result, college students' self-leadership had a significantly positive impact on innovative behavior (B = .210, p = .003). In addition, the relationship between self-leadership and innovative behavior was found to be moderated by the degree of smartphone addiction among college students (B = .264, p = .000). This study first examines the negative effects of smartphone addiction and finds that if students' self-leadership is improved in terms of self-management and unnecessary smartphone use is properly controlled, innovative behavior can be improved. In addition, this study is significant in that it attempts to identify a new impact of smartphone addiction in light of recent environmental changes, unlike existing research carried out from the perspective of organizational behavior theory.
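As an illustration only (not the authors' code), a minimal Python sketch of how a moderation effect like the one above can be tested with an interaction term; the file name and column names (self_leadership, addiction, innovative) are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical 5-point Likert survey data

# 'x * m' expands to x + m + x:m, so the coefficient on the interaction term
# indicates whether smartphone addiction moderates the self-leadership effect.
model = smf.ols("innovative ~ self_leadership * addiction", data=df).fit()
print(model.summary())
```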

Keywords: Innovative Behavior, Revolutionary Behavior, Self-leadership, Smartphone Addiction.

63 Academic Influence of Social Network Sites on the Collegiate Performance of Technical College Students

Authors: Jameson McFarlane, Thorne J. McFarlane, Leon Bernard

Abstract:

Social network sites (SNS) are an emerging phenomenon that is here to stay. The popularity and ubiquity of SNS technology are undeniable. Because most SNS are free and easy to use, people from all walks of life and of almost any age are attracted to the technology. College-age students are by far the largest segment of the population using SNS. Since most SNS have been adapted for mobile devices, students not only use this technology while studying and working on labs or projects; a substantial number of students have been found to use SNS even while listening to lectures. This study found that SNS use has a significant negative impact on the grade point average of college students, particularly in the first semester. However, this negative impact is greatly diminished by the end of the third semester, partly because the students have adjusted satisfactorily to the challenges of college or because they have learned to manage their time adequately. It was established that the kinds of activities the students engage in during SNS use are the leading factor affecting academic performance. Of those activities, using SNS during a lecture or while studying is the foremost contributor to lower academic performance. This is due to a “cognitive” or “information” bottleneck, a condition in which students find it very difficult to multitask or to switch between resources, leading to inefficiency in information retention and thus in educational performance.

Keywords: Social network sites, social network analysis, regression coefficient, psychological engagement.

62 Moving Area Filter to Detect Object in Video Sequence from Moving Platform

Authors: Sallama Athab, Hala Bahjat

Abstract:

Detecting objects in a video sequence is a challenging task in identifying and tracking moving objects. Background removal is considered a basic step in moving-object detection tasks. Dual static cameras placed at the front and rear of a moving platform gather the information used to detect objects. The background changes with the speed and direction of the moving platform, so distinguishing moving objects becomes complicated. In this paper, we propose a framework that allows the detection of moving objects over a variety of speeds and directions dynamically. The object detection technique is built on two levels: the first level applies background removal and edge detection to generate moving areas; the second level applies the Moving Area Filter (MAF) and then calculates a Correlation Score (CS) for each adjusted moving area. Moving areas with closer CS values are merged and marked as a moving object. Experimental results were prepared on real scenes acquired by dual static cameras without overlap in the scene. The results show accuracy in detecting objects compared with optical flow and Mixture Module Gaussian (MMG) methods, and an accuracy ratio is produced to measure how accurately moving objects are detected.
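A minimal illustrative sketch (not the authors' implementation) of the two-level idea above: background removal to obtain candidate moving areas, then a simple correlation score that could be used to merge similar areas; all thresholds are assumptions.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def moving_areas(frame):
    """Level 1: background removal + edges -> candidate moving-area boxes."""
    mask = subtractor.apply(frame)
    edges = cv2.Canny(mask, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]

def correlation_score(frame, box_a, box_b, size=(32, 32)):
    """Level 2: normalized correlation between two grayscale patches."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    patches = []
    for (x, y, w, h) in (box_a, box_b):
        patches.append(cv2.resize(gray[y:y + h, x:x + w], size).astype(np.float32).ravel())
    return float(np.corrcoef(patches[0], patches[1])[0, 1])

# areas whose pairwise correlation score exceeds an assumed threshold (e.g. ~0.8)
# would be merged into a single moving-object region before tracking.
```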

Keywords: Background Removal, Correlation, Mixture Module Gaussian, Moving Platform, Object Detection.

61 Stochastic Subspace Modelling of Turbulence

Authors: M. T. Sichani, B. J. Pedersen, S. R. K. Nielsen

Abstract:

Turbulence of the incoming wind field is of paramount importance to the dynamic response of civil engineering structures. Hence, reliable stochastic models of the turbulence should be available from which time series can be generated for dynamic response and structural safety analysis. In this paper, an empirical cross-spectral density function for the along-wind turbulence component over the wind field area is taken as the starting point. The spectrum is spatially discretized in terms of a Hermitian cross-spectral density matrix for the turbulence state vector, which turns out not to be positive definite. Since the succeeding state space and ARMA modelling of the turbulence rely on the positive definiteness of the cross-spectral density matrix, the problem of the non-positive definiteness of such matrices is first addressed and suitable treatments are proposed. From the adjusted positive definite cross-spectral density matrix, a frequency response matrix is constructed which determines the turbulence vector as a linear filtering of Gaussian white noise. Finally, an accurate state space modelling method is proposed which allows selection of an appropriate model order and estimation of a state space model for the vector turbulence process incorporating its phase spectrum in one stage, and its results are compared with a conventional ARMA modelling method.
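A minimal sketch (an assumption, not necessarily the treatment used in the paper) of one standard way to repair a Hermitian cross-spectral density matrix that is not positive (semi-)definite: clip negative eigenvalues and reconstruct.

```python
import numpy as np

def nearest_psd_hermitian(S, eps=0.0):
    """Project a Hermitian matrix S onto the positive semi-definite cone."""
    S = 0.5 * (S + S.conj().T)            # enforce exact Hermitian symmetry
    w, V = np.linalg.eigh(S)              # real eigenvalues for Hermitian input
    w_clipped = np.clip(w, eps, None)     # remove negative spectral content
    return (V * w_clipped) @ V.conj().T

# The adjusted matrix admits a factorization S = H H^*, so H can then act as a
# frequency response matrix filtering Gaussian white noise, as described above.
```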

Keywords: Turbulence, wind turbine, complex coherence, state space modelling, ARMA modelling.

60 Reducing the Need for Multi-Input Multi-Output in Multi-Beam Base Transceiver Station Antennas Using Orthogonally-Polarized Feeds with an Arbitrary Number of Ports

Authors: Mohamed Sanad, Noha Hassan

Abstract:

A multi-beam BTS (Base Transceiver Station) antenna has been developed using dual parabolic cylindrical reflectors. The ±45° polarization feeds used in spatial-diversity MIMO (Multi-Input Multi-Output) can be replaced by single-port orthogonally polarized feeds. Then, with two sets of beams generated above each other, the ±45° polarization ports of any conventional transceiver can be connected to two of these beam sets. Thus, with two-port transceivers, the system will be equivalent to 4x4 MIMO instead of 2x2. Radio frequency (RF) power combiners/splitters can also be used to combine the multiple beams into a single beam or any arbitrary number of beams/ports. The gain of the combined beam will be higher, 20-24 dBi, instead of the 17-18 dBi of conventional wide-beam antennas. Furthermore, the gain of the combined beam will be high over the whole beam angle. Moreover, users will always be close to the peak gain value of the combined beam regardless of their location within the combined beam angle. The frequency bands of all the combined beams are adjusted such that they all have the same frequency band. Different configurations of RF power splitters/combiners can be used to provide any arbitrary number of beams/ports according to the requirements of any existing base station configuration.

Keywords: 5G mobile communications, BTS antennas, MIMO, orthogonally polarized antennas, multi-beam antennas.

59 Optimization of Two Quality Characteristics in Injection Molding Processes via Taguchi Methodology

Authors: Joseph C. Chen, Venkata Karthik Jakka

Abstract:

The main objective of this research is to optimize tensile strength and dimensional accuracy in injection molding processes using Taguchi Parameter Design. An L16 orthogonal array (OA) is used in the Taguchi experimental design, with five control factors at four levels each and vibration as a non-controllable (noise) factor. A total of 32 experiments were designed to obtain the optimal parameter setting for the process. The optimal parameters identified for shrinkage are shot volume, 1.7 cubic inches (A4); mold temperature, 130 °F (B1); hold pressure, 3200 psi (C4); injection speed, 0.61 in³/sec (D2); and hold time, 14 seconds (E2). The optimal parameters identified for tensile strength are shot volume, 1.7 cubic inches (A4); mold temperature, 160 °F (B4); hold pressure, 3100 psi (C3); injection speed, 0.69 in³/sec (D4); and hold time, 14 seconds (E2). The Taguchi-based optimization framework was systematically and successfully implemented to obtain an adjusted optimal setting in this research. The mean shrinkage of the confirmation runs is 0.0031%, and the tensile strength value was found to be 3148.1 psi. Both outcomes are far better than the baseline, and defects in the injection molding process have been further reduced.
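As a hedged illustration of the Taguchi analysis step (not the authors' worksheet), the signal-to-noise ratios typically used for these two responses are smaller-the-better for shrinkage and larger-the-better for tensile strength; the replicate values below are hypothetical.

```python
import numpy as np

def sn_smaller_the_better(y):
    """S/N = -10 * log10(mean(y^2)); maximized to minimize shrinkage."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    """S/N = -10 * log10(mean(1/y^2)); maximized to maximize tensile strength."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# hypothetical replicate measurements for one L16 run
print(sn_smaller_the_better([0.0031, 0.0029, 0.0034]))   # shrinkage (%)
print(sn_larger_the_better([3148.1, 3120.0, 3165.5]))    # tensile strength (psi)
```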

Keywords: Injection molding processes, Taguchi Parameter Design, tensile strength, shrinkage test, high-density polyethylene, HDPE.

58 Basic Research for Electroretinogram Moving the Center of the Multifocal Hexagonal Stimulus Array

Authors: Naoto Suzuki

Abstract:

Many ophthalmologists can examine declines in visual sensitivity at arbitrary points on the retina using a precise perimetry device with a fundus camera function. However, the retinal layer causing the decline in visual sensitivity cannot be identified by this method. We studied an electroretinogram (ERG) function that can move the center of the multifocal hexagonal stimulus array in order to investigate cryptogenic diseases such as macular dystrophy, acute zonal occult outer retinopathy, and multiple evanescent white dot syndrome. An electroretinographic optical system, specifically a perimetric optical system, was added to an experimental device carrying the same optical system as a fundus camera. We also added an infrared camera, a cold mirror, a halogen lamp, and a monitor. Software was developed to show the multifocal hexagonal stimulus array on the monitor using C++Builder XE8 and to move the center of the array up and down as well as back and forth. We used a multifunction I/O device and its design platform LabVIEW for data retrieval. Plate electrodes were used to measure the electrodermal activity around the eyes. We used a multifocal hexagonal stimulus array with 37 elements in the software. The center of the multifocal hexagonal stimulus array could be adjusted to the same position as the examination target of the precise perimetry. We successfully added the moving ERG function to the experimental ophthalmologic device.

Keywords: Moving ERG, precise perimetry, retinal layers, visual sensitivity.

57 Trend Analysis for Extreme Rainfall Events in New South Wales, Australia

Authors: Evan Hajani, Ataur Rahman, Khaled Haddad

Abstract:

Climate change will affect the hydrological cycle in many different ways, such as increases in evaporation and rainfall. There has been growing interest among researchers in identifying the nature of trends in historical rainfall data in many different parts of the world. This paper examines the trends in annual maximum rainfall data from 30 stations in New South Wales, Australia using two non-parametric tests, Mann-Kendall (MK) and Spearman’s Rho (SR). Rainfall data were analyzed for fifteen different durations ranging from 6 min to 3 days. It is found that the sub-hourly durations (6, 12, 18, 24, 30 and 48 minutes) show statistically significant positive (upward) trends, whereas longer-duration (sub-daily and daily) events generally show a statistically significant negative (downward) trend. It is also found that the MK test and SR test provide notably different results for some of the rainfall event durations considered in this study. Since shorter-duration sub-hourly rainfall events show positive trends at many stations, design rainfall data based on stationary frequency analysis for these durations need to be adjusted to account for the impact of climate change. These shorter durations are more relevant to many urban development projects based on smaller catchments with a much shorter response time.
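For illustration, a minimal sketch of the Mann-Kendall trend test used above (no tie correction; the station data and any corrections applied in the paper are not reproduced, and the example series is hypothetical).

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the MK S statistic, standardized Z, and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance without ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# example: annual maximum 6-minute rainfall depths (hypothetical values, mm)
print(mann_kendall([9.1, 9.8, 9.5, 10.4, 10.9, 11.2, 10.7, 11.8]))
```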

Keywords: Climate change, Mann-Kendall test, Spearman’s Rho test, trends, design rainfall.

56 An Observer-Based Direct Adaptive Fuzzy Sliding Control with Adjustable Membership Functions

Authors: Alireza Gholami, Amir H. D. Markazi

Abstract:

In this paper, an observer-based direct adaptive fuzzy sliding mode (OAFSM) algorithm is proposed. In the proposed algorithm, the zero-input dynamics of the plant may be unknown. The input connection matrix is used to combine the sliding surfaces of the individual subsystems, and an adaptive fuzzy algorithm is used to estimate an equivalent sliding mode control input directly. The fuzzy membership functions, which were determined by time-consuming trial-and-error processes in previous works, are adjusted by adaptive algorithms. Another advantage of the proposed controller is that the input gain matrix is not limited to being diagonal, i.e., the plant can be over- or under-actuated provided that controllability and observability are preserved. An observer is constructed to directly estimate the state tracking error, and the nonlinear part of the observer is constructed by an adaptive fuzzy algorithm. The main advantage of the proposed observer is that the measured outputs are not limited to the first entry of a canonical-form state vector. The closed-loop stability of the proposed method is proved using a Lyapunov-based approach. The proposed method is applied numerically to a multi-link robot manipulator, which verifies the performance of the closed-loop control. Moreover, the performance of the proposed algorithm is compared with some conventional control algorithms.

Keywords: Adaptive algorithm, fuzzy systems, membership functions, observer.

55 Safe and Efficient Deep Reinforcement Learning Control Model: A Hydroponics Case Study

Authors: Almutasim Billa A. Alanazi, Hal S. Tharp

Abstract:

Safe performance and efficient energy consumption are essential factors for designing a control system. This paper presents a reinforcement learning (RL) model that can be applied to control applications to improve safety and reduce energy consumption. As hardware constraints and environmental disturbances are imprecise and unpredictable, conventional control methods may not always be effective in optimizing control designs. However, RL has demonstrated its value in several artificial intelligence (AI) applications, especially in the field of control systems. The proposed model intelligently monitors a system's success by observing the rewards from the environment, with positive rewards counting as a success when the controlled reference is within the desired operating zone. Thus, the model can determine whether the system is safe to continue operating based on the designer/user specifications, which can be adjusted as needed. Additionally, the controller tracks energy consumption to improve energy efficiency by enabling an idle mode when the controlled reference is within the desired operating zone, thus reducing the system's energy consumption during the control operation. Water temperature control for a hydroponic system is taken as a case study for the RL model, with the variance of the disturbances adjusted to show the model's robustness and efficiency. On average, the model showed safety improvements of up to 15% and energy efficiency improvements of 35%-40% compared to a traditional RL model.
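A minimal, hypothetical sketch of the reward-and-idle logic described above; the zone bounds, reward values, and idle rule are illustrative assumptions, not the paper's exact design.

```python
def step_reward(temperature, target=22.0, tolerance=1.0):
    """Positive reward and idle mode when the controlled reference is in the
    desired operating zone; negative reward (a safety signal) otherwise."""
    in_zone = abs(temperature - target) <= tolerance
    reward = 1.0 if in_zone else -1.0
    idle = in_zone          # heater/chiller can idle -> lower energy consumption
    return reward, idle

# a safety monitor might stop the episode if the cumulative reward over a
# recent window falls below a designer-specified threshold.
print(step_reward(22.4))   # -> (1.0, True)
print(step_reward(25.3))   # -> (-1.0, False)
```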

Keywords: Control system, hydroponics, machine learning, reinforcement learning.

54 Prediction of Cutting Tool Life in Drilling of Reinforced Aluminum Alloy Composite Using a Fuzzy Method

Authors: Mohammed T. Hayajneh

Abstract:

Machining of metal matrix composites (MMCs) is a very significant process, and its characteristics during different machining operations have drawn many researchers' attention. The poor machinability of hard-particle-reinforced MMCs makes the drilling process a rather interesting task. Unlike drilling of conventional materials, many problems can be encountered during drilling of MMCs, such as tool wear and cutting forces. Cutting tool wear is a very significant concern in industry: it not only influences the quality of the drilled hole, but also affects cutting tool life. Predicting cutting tool life during drilling is essential for optimizing the cutting conditions. However, the relationship between tool life and cutting conditions, tool geometrical factors and workpiece material properties has not yet been established by any machining theory. In this research work, a fuzzy subtractive clustering system has been used to model cutting tool life in drilling of an Al2O3 particle-reinforced aluminum alloy composite in order to investigate the effect of cutting conditions on cutting tool life. This investigation can help in controlling and optimizing cutting conditions when the process parameters are adjusted. The model for predicting tool life uses drill diameter, cutting speed, and cutting feed rate as input data. The validity of the model was confirmed by examinations under various cutting conditions. Experimental results have shown the efficiency of the model in predicting cutting tool life.
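A minimal sketch of subtractive clustering (Chiu-style density potentials), the center-selection step behind fuzzy models of this kind; the radii and acceptance ratio are illustrative assumptions, and the inputs (drill diameter, cutting speed, feed rate) are assumed to be scaled to [0, 1].

```python
import numpy as np

def subtractive_centers(X, ra=0.5, rb=0.75, accept=0.5):
    alpha, beta = 4.0 / ra ** 2, 4.0 / rb ** 2
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    potential = np.exp(-alpha * d2).sum(axis=1)      # density potential per point
    centers, p_first = [], None
    for _ in range(len(X)):
        k = int(np.argmax(potential))
        if p_first is None:
            p_first = potential[k]
        elif potential[k] < accept * p_first:         # remaining density too low
            break
        centers.append(X[k])
        potential = np.clip(potential - potential[k] * np.exp(-beta * d2[k]), 0, None)
    return np.array(centers)

# each selected center would seed one fuzzy rule relating (diameter, speed, feed)
# to tool life in a model of the kind described above.
```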

Keywords: Composite, fuzzy, tool life, wear.

53 Response of Yield and Morphological Characteristic of Rice Cultivars to Heat Stress at Different Growth Stages

Authors: M. T. K. Aghamolki, M. K. Yusop, F. C. Oad, H. Zakikhani, Hawa. Ze Jaafar, S. Kharidah S.M., M. M. Hanafi

Abstract:

High temperatures during sensitive growth phases change rice morphology and influence yield. In this glasshouse study, the treatments were growing conditions [normal growing (32 ± 2 °C) and heat stress (38 ± 2 °C) during the day, with 22 ± 2 °C at night], growth stages (booting, flowering and ripening) and four cultivars (Hovaze, Hashemi and Fajr as exotic, and MR219 as indigenous). The heat chamber was covered with plastic, and an automatic heater was run for two weeks at each growth stage. Under heat stress imposed at various growth stages, rice morphology and yield showed taller plants in Hashemi due to its tall character, and the total tillers per hill were significantly higher in Fajr. Under all growing conditions, Hashemi recorded higher panicle exertion, while the flag leaf width was in all cases greater in Hovaze. The total tillers per hill remained higher in Fajr even when heat stress was imposed during the booting and flowering stages. The indigenous MR219 recorded higher grain yield under all growing conditions and growth stages; however, its grain yield decreased when heat stress was imposed during booting and flowering. Heat stress during the ripening stage had no effect on the plants.

Keywords: Rice, growth, heat, stress, morphology, yield.

52 An Experimental Procedure for Design and Construction of Monocopter and Its Control Using Optical and GPS-Aided AHRS Sensors

Authors: A. Safaee, M. S. Mehrabani, M. B. Menhaj, V. Mousavi, S. Z. Moussavi

Abstract:

A monocopter is a single-wing rotary flying vehicle with hovering capability. This flying vehicle includes two dynamic parts, and greater efficiency can be expected than from other micro UAVs due to the extended area of the wing compared to its fuselage. Low cost and a simple mechanism in comparison to other vehicles such as helicopters are its most important characteristics. In the previous paper we introduced the final system; in this paper, the experimental design process of the monocopter and its control algorithm are investigated in general. The editorial errors in the previous article have also been corrected and some translational ambiguities resolved. Initially, by constructing several prototypes and carrying out many flight tests, the main design parameters of this air vehicle were obtained by experimental measurement, and eventually the main monocopter required for this project was constructed. After construction of the monocopter, in order to design, implement and test the control algorithms, a simple optical system was first used for determining the heading angle. After numerous tests on the test stand, the control algorithm was designed and the timing of applying control inputs adjusted. Other control parameters of the system were then tuned in flight tests. Eventually the final control system was designed and implemented using the AHRS sensor, and the final operational tests were performed successfully.

Keywords: Monocopter, Flap, Heading Angle, AHRS, Cyclic, Photo Diode.

51 Multi-Modal Film Boiling Simulations on Adaptive Octree Grids

Authors: M. Wasy Akhtar

Abstract:

Multi-modal film boiling simulations are carried out on adaptive octree grids. The liquid-vapor interface is captured using the volume-of-fluid framework, adjusted to account for exchanges of mass, momentum, and energy across the interface. Surface tension effects are included using a volumetric source term in the momentum equations. The phase change calculations are conducted based on the exact location and orientation of the interface; however, the source terms are calculated using the mixture variables to be consistent with the one-field formulation used to represent the entire fluid domain. The numerical model on the octree representation of the computational grid is first verified using test cases including advection tests in severely deforming velocity fields, gravity-based instabilities, and bubble growth in uniformly superheated liquid under zero gravity. The model is then used to carry out both single-mode and multi-modal film boiling simulations. The octree grid is dynamically adapted in order to maintain the highest grid resolution on the instability fronts, using markers of interface location, volume fraction, and thermal gradients. The method thus provides an efficient platform to simulate fluid instabilities with or without phase change in the presence of body forces like gravity or shear layer instabilities.

Keywords: Boiling flows, dynamic octree grids, heat transfer, interface capturing, phase change.

50 The Difficulties Witnessed by People with Intellectual Disability in Transition to Work in Saudi Arabia

Authors: Adel S. Alanazi

Abstract:

The transition of a student with a disability from school to work is the most crucial phase in moving from adolescence into early adulthood. In this process, young individuals face various difficulties and challenges in order to accomplish the next venture of life successfully. In this respect, this paper aims to examine the challenges encountered by individuals with intellectual disabilities in the transition to work in Saudi Arabia. For this purpose, the study adopts a qualitative methodology, following an interpretivist philosophy with an inductive approach and an exploratory research design. The data for the research were gathered through semi-structured interviews, whose findings are analysed using thematic analysis. Semi-structured interviews were conducted with parents of persons with intellectual disabilities and with officials, supervisors and specialists of two vocational rehabilitation centres providing training to students with intellectual disabilities, in addition to directors of companies and websites involved in hiring those individuals. The total number of respondents for the interviews was 15. The purposive sampling method was used to select the respondents; this is a non-probability sampling method which draws respondents from a known population and allows flexibility and suitability in selecting the participants for the study. The findings gathered from the interviews revealed that a lack of awareness among parents regarding the rights of their children with intellectual disabilities, a lack of adequate communication and coordination between various entities, and concerns regarding training and subsequent employment are the key difficulties experienced by individuals with intellectual disabilities. Programmes such as bookbinding, carpentry, computing, agriculture, electricity and telephone exchange operations were the key training programmes involved. The findings of this study also revealed that information technology and the media play a significant role in smoothing the transition to employment for individuals with intellectual disabilities. Furthermore, religious and cultural attitudes were identified as restrictive for people with such disabilities in seeking to benefit from job opportunities. On the basis of these findings, it can be implied that the information gathered through this study will be highly beneficial for Saudi Arabian schools and rehabilitation centres for individuals with intellectual disability, helping them to overcome the problems they encounter during the transition to work.

Keywords: Intellectual disability, transition services, rehabilitation centre.

49 The Low-Cost Design and 3D Printing of Structural Knee Orthotics for Athletic Knee Injury Patients

Authors: Alexander Hendricks, Sean Nevin, Clayton Wikoff, Melissa Dougherty, Jacob Orlita, Rafiqul Noorani

Abstract:

Knee orthotics play an important role in aiding the recovery of those with knee injuries, especially athletes. However, structural knee orthotics are often very expensive, ranging between $300 and $800. The primary reason for this project was to answer the question: can 3D-printed orthotics represent a viable and cost-effective alternative to current structural knee orthotics? The primary objective of this research project was to design a knee orthotic for athletes with knee injuries at a low cost (under $100) and evaluate its effectiveness. The initial design for the orthotic was done in SolidWorks, a computer-aided design (CAD) software package available at Loyola Marymount University. After this design was completed, finite element analysis (FEA) was utilized to understand how normal stresses placed upon the knee affected the orthotic. The knee orthotic was then adjusted and redesigned to meet a specified factor of safety of 3.25 based on the data gathered during FEA and literature sources. Once the FEA was completed and the orthotic redesigned based on the data gathered, the next step was to 3D-print the first design of the knee brace. Subsequently, physical therapy movement trials were used to evaluate physical performance. Using the data from these movement trials, the CAD design of the brace was refined to accommodate the design requirements. The final goal of this research is to explore the possibility of replacing high-cost, outsourced knee orthotics with a readily available low-cost alternative.

Keywords: Knee Orthotics, 3D printing, finite element analysis.

48 Tribological Investigation and the Effect of Karanja Biodiesel on Engine Wear in Compression Ignition Engine

Authors: Ajay V. Kolhe, R. E. Shelke, S. S. Khandare

Abstract:

Various biomass-based resources, which can be used as an extender or a complete substitute for diesel fuel, may have a very significant role in the development of the agriculture, industrial and transport sectors during the energy crisis. Use of Karanja oil methyl ester biodiesel in a CI DI engine was found highly compatible with engine performance, with lower exhaust emissions compared to diesel fuel but slightly higher NOx emissions, and with low wear characteristics. The combustion-related properties of vegetable oils are somewhat similar to those of diesel oil. Neat vegetable oils or their blends with diesel, however, pose various long-term problems in compression ignition engines. These undesirable features of vegetable oils are due to their inherent properties such as high viscosity, low volatility, and polyunsaturated character. Pongamia methyl ester (PME) was prepared by a transesterification process using methanol for long-term engine operation. The physical and combustion-related properties of the fuels thus developed were found to be close to those of diesel. Neat biodiesel (PME) was selected as the fuel for the tribological study of biofuels. Two similar new engines were completely disassembled and subjected to dimensioning of various vital moving parts and then subjected to long-term endurance tests on neat biodiesel and diesel, respectively. After completion of the test, both engines were again disassembled for physical inspection and wear measurement of various vital parts. The lubricating oil samples drawn from both engines were subjected to atomic absorption spectroscopy (AAS) for measurement of the various wear metal traces present. The additional lubricity of biodiesel fuel, due to its higher viscosity compared to diesel fuel, resulted in lower wear of moving parts and thus improved engine durability with the biodiesel fuel. Results from the AAS tests confirmed substantially lower wear and thus improved life for biodiesel-operated engines.

Keywords: Transesterification, PME, wear of engine parts, Metal traces and AAS.

47 A Development of the Multiple Intelligences Measurement of Elementary Students

Authors: Chaiwat Waree

Abstract:

This research aims at the development of a Multiple Intelligences Measurement for elementary students. The structural accuracy test and the establishment of norms are based on Gardner's theory of multiple intelligences, which consists of eight aspects, namely linguistics, logic and mathematics, visual-spatial relations, body and movement, music, human relations, self-realization/self-understanding and nature. The sample used in this research consists of elementary school students aged between 5 and 11 years. The size of the sample group was determined by Yamane's table and comprised 2,504 students; multistage sampling was used. Basic statistical analysis and construct validity testing were done using confirmatory factor analysis. The research can be summarized as follows. 1) The Multiple Intelligences Measurement, consisting of 120 items, is content-accurate. Internal consistency reliability according to the Kuder-Richardson method for the whole measurement equals .91, the difficulty of the test items is between .39 and .83, and discrimination is between .21 and .85. 2) The Multiple Intelligences Measurement has construct validity in a good range; that is, all 8 components and all 120 test items are statistically significant at the .01 level. The chi-square value equals 4357.7 (p = .00) at 244 degrees of freedom, the Goodness of Fit Index equals 1.00, the Adjusted Goodness of Fit Index equals .92, the Comparative Fit Index (CFI) equals .68, the Root Mean Squared Residual (RMR) equals 0.064, and the Root Mean Square Error of Approximation equals 0.82. 3) The norms of the Multiple Intelligences Measurement are categorized into 3 levels: those with high intelligence have percentiles above 78, those with moderate/medium intelligence have percentiles between 24 and 77.9, and those with low intelligence have percentiles of 23.9 and below.

Keywords: Multiple Intelligences, Measurement, Elementary Students.

46 Accuracy of Small Field of View CBCT in Determining Endodontic Working Length

Authors: N. L. S. Ahmad, Y. L. Thong, P. Nambiar

Abstract:

An in vitro study was carried out to evaluate the feasibility of small field of view (FOV) cone beam computed tomography (CBCT) in determining endodontic working length. The objectives were to determine the accuracy of CBCT in measuring the estimated preoperative working lengths (EPWL), endodontic working lengths (EWL) and file lengths. Access cavities were prepared in 27 molars. For each root canal, the baseline electronic working length was determined using an electronic apex locator (EAL; Raypex 5). The teeth were then divided into overextended, non-modified and underextended groups and the lengths adjusted accordingly. Imaging and measurements were made using the respective software of the RVG (Kodak RVG 6100) and CBCT (Kodak 9000 3D) units. Root apices were then shaved and the apical constrictions viewed under magnification to measure the control working lengths. The paired t-test showed a statistically significant difference between the CBCT EPWL and the control length, but the difference was too small to be clinically significant. From the Bland-Altman analysis, the CBCT method had the widest range of 95% limits of agreement, reflecting its greater potential for error. In measuring file lengths, RVG had a wider window of 95% limits of agreement compared to CBCT. Conclusions: (1) The clinically insignificant underestimation of the preoperative working length using small-FOV CBCT shows that it is acceptable for use in the estimation of preoperative working length. (2) Small-FOV CBCT may be used in working length determination, but it is not as accurate as the currently practiced method of using the EAL. (3) It is also more accurate than RVG in measuring file lengths.

Keywords: Accuracy, CBCT, endodontic, measurement.

45 New Simultaneous High Performance Liquid Chromatographic Method for Determination of NSAIDs and Opioid Analgesics in Advanced Drug Delivery Systems and Human Plasma

Authors: Asad Ullah Madni, Mahmood Ahmad, Naveed Akhtar, Muhammad Usman

Abstract:

A new and cost-effective RP-HPLC method was developed and validated for the simultaneous analysis of the non-steroidal anti-inflammatory drugs diclofenac sodium (DFS) and flurbiprofen (FLP) and the opioid analgesic tramadol (TMD) in advanced drug delivery systems (liposomes and microcapsules), marketed brands and human plasma. An isocratic system was employed with a mobile phase consisting of 10 mM sodium dihydrogen phosphate buffer and acetonitrile in a molar ratio of 67:33, with the pH adjusted to 3.2. The stationary phase was a Hypersil ODS column (C18, 250×4.6 mm i.d., 5 μm) with the temperature controlled at 30 °C. DFS in liposomes, microcapsules and marketed drug products was determined in the range of 99.76-99.84%. FLP and TMD in microcapsules and brand formulations were 99.78-99.94% and 99.80-99.82%, respectively. A single-step liquid-liquid extraction procedure using a combination of acetonitrile and trichloroacetic acid (TCA) as protein-precipitating agents was employed. The detection limits (at S/N ratio 3) of the quality control solutions and plasma samples were 10, 20, and 20 ng/ml for DFS, FLP and TMD, respectively. The assay was acceptable over the linear dynamic range, and all other validation parameters were found to be within the limits of the FDA and ICH method validation guidelines. The proposed method is sensitive, accurate and precise and could be applicable for routine analysis in the pharmaceutical industry as well as in human plasma samples for bioequivalence and pharmacokinetic studies.

Keywords: Diclofenac sodium, Flurbiprofen, Tramadol, HPLC-UV detection, Validation.

44 Digital Automatic Gain Control Integrated on WLAN Platform

Authors: Emilija Miletic, Milos Krstic, Maxim Piz, Michael Methfessel

Abstract:

In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compatible with the IEEE 802.11a/g standards. Those standards define communication in the 5/2.4 GHz bands using the orthogonal frequency-division multiplexing (OFDM) modulation scheme. The WLAN transceiver we have used enables gain control over a low noise amplifier (LNA) and a variable gain amplifier (VGA). The control of those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC is used to automatically control the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate) and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor is done in a few steps: measuring the power levels of baseband samples of an RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting a gain factor based on the accumulation, and applying the adjusted gain factor to the baseband values. Based on the measurement results of the RSSI signal's dependence on input power, we have concluded that this digital AGC can be implemented by applying a simple linearization of the RSSI. This solution is very simple but also effective, and it reduces the complexity and power consumption of the DAGC. This DAGC is implemented and tested both in FPGA and in ASIC as a part of our WLAN baseband processor. Finally, we have integrated this circuit in a compact WLAN PCMCIA board based on MAC and baseband ASIC chips designed by us.
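A minimal, hypothetical sketch of a gain-control loop of the kind described above: measure baseband power, accumulate the error against a set point in dB, and derive a gain word from the accumulator. The constants and structure are illustrative assumptions, not the implemented hardware block.

```python
import numpy as np

SET_POINT_DB = -20.0      # desired average baseband power (assumed)
LOOP_GAIN = 0.25          # how strongly the accumulated error drives the gain

def dagc_step(samples, acc):
    """One DAGC iteration over a block of complex baseband samples."""
    power_db = 10.0 * np.log10(np.mean(np.abs(samples) ** 2) + 1e-12)
    acc += SET_POINT_DB - power_db        # accumulate measured error (in dB)
    gain_db = LOOP_GAIN * acc             # gain word derived from the accumulator
    return gain_db, acc

# in hardware this would run per received block, steering the VGA/LNA gains
# so the signal at the ADC stays near the set point.
```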

Keywords: WLAN, AGC, RSSI, baseband processor

43 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technology, and specific procedures used in the digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent are dependent on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework development is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.

Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.

42 Upgraded Rough Clustering and Outlier Detection Method on Yeast Dataset by Entropy Rough K-Means Method

Authors: P. Ashok, G. M. Kadhar Nawaz

Abstract:

Rough set theory is used to handle uncertainty and incomplete information by applying two approximation sets, the lower approximation and the upper approximation. In this paper, rough clustering algorithms are improved by adopting similarity-, dissimilarity–similarity- and entropy-based initial centroid selection methods, giving three different clustering algorithms, namely Entropy-based Rough K-Means (ERKM), Similarity-based Rough K-Means (SRKM) and Dissimilarity–Similarity-based Rough K-Means (DSRKM), which were developed and executed on a yeast dataset. The rough clustering algorithms are validated by cluster validity indexes, namely the Rand and Adjusted Rand indexes. Experimental results show that the ERKM clustering algorithm performs effectively and delivers better results than the other clustering methods. Outlier detection is an important task in data mining; outliers are very different from the rest of the objects in the clusters. The Entropy-based Rough Outlier Factor (EROF) method is suitable for detecting outliers effectively in the yeast dataset. In the rough K-Means method, tuning the epsilon (ε) value from 0.8 to 1.08 allows outliers in the boundary region to be detected, and the RKM algorithm delivers better results when the value of epsilon (ε) is chosen within this range. Experimental results show that the EROF method combined with the clustering algorithms performed very well and is suitable for detecting outliers effectively for all datasets. Further, the experimental readings show that the ERKM clustering method outperformed the other methods.
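As an illustration, a minimal sketch of a rough K-Means assignment rule in the common Lingras/West style, showing how a threshold of the kind mentioned above separates lower-approximation members from boundary members; the exact thresholding used in the paper is not reproduced and the details here are assumptions.

```python
import numpy as np

def rough_assign(x, centroids, epsilon=1.1):
    """Return (lower, upper): index sets of the clusters x belongs to."""
    d = np.linalg.norm(centroids - x, axis=1)
    nearest = int(np.argmin(d))
    # clusters whose distance is "close enough" to the nearest one
    close = {j for j in range(len(centroids))
             if j != nearest and d[j] <= epsilon * d[nearest]}
    if close:                       # ambiguous point -> boundary (upper) regions
        return set(), {nearest} | close
    return {nearest}, {nearest}     # unambiguous -> lower approximation

# centroids are then updated with different weights for lower-approximation and
# boundary members; points that stay in boundary regions across clusters are
# natural candidates for outlier scoring (e.g., an entropy-based outlier factor).
```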

Keywords: Clustering, Entropy, Outlier, Rough K-Means, validity index.

41 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance

Authors: Rajinder Singh, Ram Valluru

Abstract:

The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the various alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement/disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend but levels varied substantially depending upon the economic and operational conditions during the development period spanning many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produce more stable loss forecasts for reserving purposes as compared to the traditional CL and BF methods.
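For illustration only, a minimal sketch of fitting a sigmoidal (logistic) development curve to one cohort's cumulative losses and reading off an indicated reserve; the data values and the three-parameter form are assumptions, not the paper's model or results.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_curve(t, ultimate, k, t0):
    """Cumulative development as a sigmoid of development age t."""
    return ultimate / (1.0 + np.exp(-k * (t - t0)))

# hypothetical cumulative reported losses for one accident-year cohort
dev_age = np.array([1, 2, 3, 4, 5, 6], dtype=float)       # years of development
cum_loss = np.array([12.0, 35.0, 70.0, 95.0, 108.0, 112.0])

params, _ = curve_fit(logistic_curve, dev_age, cum_loss, p0=[cum_loss[-1], 1.0, 3.0])
ultimate, k, t0 = params
reserve = ultimate - cum_loss[-1]   # indicated reserve = ultimate minus reported to date
print(f"fitted ultimate ~ {ultimate:.1f}, indicated reserve ~ {reserve:.1f}")
```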

Keywords: Actuarial loss reserving techniques, logistic regression, parametric function, volatility.

40 Evaluation of Chiller Power Consumption Using Grey Prediction

Authors: Tien-Shun Chan, Yung-Chung Chang, Cheng-Yu Chu, Wen-Hui Chen, Yuan-Lin Chen, Shun-Chong Wang, Chang-Chun Wang

Abstract:

98% of the energy needed in Taiwan is imported, and the prices of petroleum and electricity have been increasing. In addition, facility capacity, the amount of electricity generated, the amount of electricity consumed and the number of Taiwan Power Company customers have continued to increase. For these reasons, energy conservation has become an important topic. In the past, linear regression was used to establish power consumption models for chillers. In this study, grey prediction is used to evaluate the power consumption of a chiller so as to lower the total power consumption at peak load (so that the relevant power providers do not need to keep increasing their generation and facility capacity). In grey prediction, only a few numerical values (at least four) are needed to establish the power consumption models for chillers. If the PLR, the temperatures of the supply and return chilled water, and the temperatures of the supply and return cooling water are taken into consideration, quite accurate results (with accuracy close to 99% for short-term predictions) may be obtained. Through such methods, we can predict whether the power consumption at peak load will exceed the contract power capacity agreed between the corresponding entity and Taiwan Power Company. If the predicted power consumption at peak load exceeds the contracted demand, the temperature of the supply chilled water may be adjusted so as to reduce the PLR and hence lower the power consumption.
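For illustration, a minimal sketch of the classic GM(1,1) grey prediction model, which needs only a handful of historical values (at least four), as noted above; the sample readings below are hypothetical, not the chiller data from the paper.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack((-z1, np.ones(len(z1))))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # development/grey coefficients
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # accumulated prediction
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))
    return x0_hat[n:]                                     # only the forecast part

# hypothetical hourly chiller power readings (kW)
print(gm11_forecast([112.0, 118.5, 121.3, 127.8, 131.2], steps=2))
```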

Keywords: Grey system theory, grey prediction, chiller.

39 Detecting Financial Bubbles Using Gap between Common Stocks and Preferred Stocks

Authors: Changju Lee, Seungmo Ku, Sondo Kim, Woojin Chang

Abstract:

How can financial bubbles be detected? Addressing this simple question has been the focus of a vast amount of empirical research spanning almost half a century. However, financial bubbles are hard to observe and vary over time, so more research is needed in this area. In this paper, we used the abnormal difference between common stock prices and the corresponding preferred stock prices to explain financial bubbles. First, we proposed the 'W-index', which indicates the spread between common stocks and their preferred stocks in the stock market. Second, to prove that this W-index is valid for measuring financial bubbles, we showed that there is an inverse relationship between the W-index and the S&P 500 rate of return. Specifically, our hypothesis is that when the W-index is higher than in other periods, financial bubbles are building up in the stock market, and vice versa; according to our hypothesis, if investors made long-term investments when the W-index was high, they would have a negative rate of return, whereas if they made long-term investments when the W-index was low, they would have a positive rate of return. By comparing the correlation and adjusted R-squared values between the W-index and S&P 500 return, the VIX index and S&P 500 return, and the TED index and S&P 500 return, we showed that only the W-index has a significant relationship with the S&P 500 rate of return. In addition, we determined how long investors should hold their investment position given the effect of financial bubbles. Using this W-index, investors could measure financial bubbles in the market and invest with low risk.
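A minimal, hypothetical sketch of computing a common/preferred-stock spread index and checking its relationship with subsequent index returns; the exact definition of the W-index in the paper is not reproduced, and the file name, column names, and holding horizon are assumptions.

```python
import pandas as pd

prices = pd.read_csv("prices.csv", parse_dates=["date"], index_col="date")
# assumed columns: 'common', 'preferred' (price series of paired share classes),
# and 'sp500' (index level)

w_index = (prices["common"] - prices["preferred"]) / prices["preferred"]

horizon = 252                                    # roughly one trading year
fwd_return = prices["sp500"].pct_change(horizon).shift(-horizon)

# an inverse relationship (negative correlation) would support the hypothesis
# that a wide spread signals bubble build-up and poor long-term returns.
print(w_index.corr(fwd_return))
```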

Keywords: Financial bubbles, detection, preferred stocks, pairs trading, future return, forecast.

38 Effect of Aquatic and Land Plyometric Training on Selected Physical Fitness Variables in Intercollegiate Male Handball Players

Authors: Nisith K. Datta, Rakesh Bharti

Abstract:

The purpose of the study was to find out the effects of aquatic and land plyometric training on selected physical fitness variables in intercollegiate male handball players. To achieve this purpose, forty-five handball players of Sardar Vallabhbhai National Institute of Technology, Surat, Gujarat were selected at random; their ages ranged between 18 and 21 years. The selected players were divided into three equal groups of fifteen players each. Group I underwent aquatic plyometric training and Group II underwent land plyometric training, three days per week for twelve weeks, while Group III served as the control group and did not participate in any special training programme apart from their regular activities as per their curriculum. The following physical fitness variables, namely speed, leg explosive power and agility, were selected as dependent variables. All the players of the three groups were tested on the selected dependent variables prior to and immediately after the training programme. The analysis of covariance was used to analyze the significant differences, if any, among the groups. Since three groups were compared, whenever the obtained 'F' ratio for the adjusted post-test was found to be significant, Scheffe's test was applied to find out the paired mean differences, if any. The 0.05 level of confidence was fixed as the level of significance to test the 'F' ratio obtained by the analysis of covariance, which was considered appropriate. The results of the study indicate that speed, explosive power and agility improved significantly due to the aquatic and land plyometric training.

Keywords: Aquatic training, explosive power, plyometric training, speed.

37 An Obesity Index Derived from Waist and Hip Circumferences Well-Matched with Other Indices in Children with Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Indices derived from anthropometric measurements [such as the waist-to-hip ratio (WHR)] or body fat mass compositions [such as the trunk-to-leg fat ratio (TLFR)] are used for the evaluation of obesity, and the best index for clinical practice is still being investigated. The aim of this study is to derive an index best suited to discriminating children with normal body mass index (N-BMI) from obese (OB) children. 83 children participated in the study. Groups 1 and 2 comprised 42 children with N-BMI and 41 OB children, whose age- and sex-adjusted BMI percentile values varied between 15-85 and 95-99, respectively. The institutional ethics committee approved the study protocol, and informed consent forms were completed by the parents of the participants. Anthropometric measurements [weight, height (Ht), waist circumference (WC), hip circumference (HC) and neck circumference (NC)] were taken. BMI, WHR, (WC+HC)/2, WC/Ht, (WC/HC)/Ht and WC*NC were calculated. Bioelectrical impedance analysis was performed to obtain the body's fat compartments in terms of total fat, trunk fat, leg fat and arm fat masses. TLFR, trunk-to-appendicular fat ratio (TAFR), (trunk fat+leg fat)/2 ((TF+LF)/2), fat mass index (FMI) and diagnostic obesity notation model assessment-II (D2I) index values were calculated, and statistical analysis was performed. Significantly higher values of (WC+HC)/2, (TF+LF)/2, D2I and FMI were observed in the OB group than in the N-BMI group. Significant correlations were found between BMI and WC, (WC+HC)/2, (TF+LF)/2, TLFR, TAFR, D2I and FMI in both groups, and similar correlations were obtained for WC. (WC+HC)/2 was correlated with TLFR, TAFR, (TF+LF)/2, D2I and FMI in the N-BMI group; in the OB group, the correlations were the same except those with TLFR and TAFR. These correlations were not present with WHR. Correlations were observed between TLFR as well as TAFR and BMI, WC, (WC+HC)/2, (TF+LF)/2, D2I and FMI in the N-BMI group; in the OB group, the correlations between TLFR or TAFR and BMI, WC and (WC+HC)/2 were missing. None was noted with WHR. In conclusion, the only correlation valid in both groups was that between (TF+LF)/2 and (WC+HC)/2, which was suggested as a link between fat-based and anthropometric indices. (WC+HC)/2, but not WHR, was much more suitable as an anthropometric obesity index.

Keywords: Children, hip circumference, obesity, waist circumference.
