Search results for: linear predictive coding (LPC)

3614 Comparative Study of Equivalent Linear and Non-Linear Ground Response Analysis for Rapar District of Kutch, India

Authors: Kulin Dave, Kapil Mohan

Abstract:

Earthquakes are considered to be the most destructive rapid-onset disasters human beings are exposed to. The losses they cause are sufficient reason for careful consideration in the design of structures and facilities. Seismic hazard analysis is one such tool which can be used for earthquake-resistant design, and ground response analysis is one of its most crucial and decisive steps. The Rapar district of Kutch, Gujarat falls in Zone 5 of the earthquake zone map of India and thus has high seismicity, because of which it is selected for analysis. In total, 8 bore-log records were studied at different locations in and around Rapar district. Different soil engineering properties were analyzed, and relevant empirical correlations were used to calculate the maximum shear modulus (Gmax) and shear wave velocity (Vs) for the soil layers. The soil was modeled using the pressure-dependent Modified Kondner-Zelasko (MKZ) model, and the reference curves used for fitting were Seed and Idriss (1970) for sand and Darendeli (2001) for clay. Both equivalent linear (EL) and non-linear (NL) ground response analyses have been carried out with the Masing hysteretic re/unloading formulation for comparison. The commercially available DEEPSOIL v. 7.0 software is used for this analysis. In this study, an attempt is made to quantify ground response in terms of the generated acceleration time-history at the top of the soil column, the response spectra at 5% damping, and the Fourier amplitude spectrum. Moreover, the variation of peak ground acceleration (PGA), maximum displacement, maximum strain (in %), maximum stress ratio, and mobilized shear stress with depth is also calculated. From the study, PGA values estimated in rocky strata are nearly the same as the bedrock motion, and marginal amplification is observed in sandy silts and silty clays by both analyses. The NL analysis gives conservative estimates of maximum displacement compared to the EL analysis. The maximum strains predicted by the two analyses are very close to each other. Overall, the NL analysis is more efficient and realistic because it follows the actual hyperbolic stress-strain relationship, considers stiffness degradation, and mobilizes stresses generated due to pore water pressure.
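As a rough illustration of the backbone curve behind both analyses, the sketch below evaluates the standard MKZ hyperbolic expression in Python; the functional form is the usual modified Kondner-Zelasko backbone, and all layer parameters are placeholders rather than values from this study.

```python
import numpy as np

def mkz_backbone(gamma, G_max, gamma_r, beta=1.0, s=0.9):
    """Shear stress on the MKZ hyperbolic backbone:
    tau = G_max * gamma / (1 + beta * (gamma / gamma_r) ** s)."""
    return G_max * gamma / (1.0 + beta * (gamma / gamma_r) ** s)

# Placeholder layer properties (illustrative only, not values from the paper)
G_max = 60e3      # maximum shear modulus, kPa
gamma_r = 1e-3    # reference shear strain

strains = np.logspace(-6, -2, 50)             # shear strain range
tau = mkz_backbone(strains, G_max, gamma_r)
G_sec = tau / strains                          # secant shear modulus
print("G/Gmax at 0.1% strain:", np.interp(1e-3, strains, G_sec) / G_max)
```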

Keywords: DEEPSOIL v 7.0, ground response analysis, pressure-dependent modified Kondner-Zelasko model, MKZ model, response spectra, shear wave velocity

Procedia PDF Downloads 123
3613 Effects of Viscous Dissipation and Concentration Based Internal Heat Source on Convective Instability in a Porous Medium with Throughflow

Authors: N. Deepika, P. A. L. Narayana

Abstract:

Linear stability analysis of double-diffusive convection in a horizontal porous layer saturated with fluid is examined by considering the effects of viscous dissipation, a concentration-based internal heat source, and vertical throughflow. The basic steady-state solution of the governing equations is computed. The linear stability analysis has been implemented numerically using the Runge-Kutta method. The critical thermal Rayleigh number Rac is obtained for various values of the solutal Rayleigh number Sa, vertical Peclet number Pe, Gebhart number Ge, Lewis number Le, and the measure of the concentration-based internal heat source γ. It is observed that Ge has a destabilizing effect for upward throughflow and a stabilizing effect for downward throughflow. For a sufficiently large value of Pe, γ has a considerable destabilizing effect for upward throughflow and an insignificant destabilizing effect for downward throughflow.

Keywords: porous medium, concentration based internal heat source, vertical throughflow, viscous dissipation

Procedia PDF Downloads 447
3612 Nonlinear Finite Element Modeling of Reinforced Concrete Flat Plate-Inclined Column Connection

Authors: Rabab Allouzi, Amer Alkloub

Abstract:

As complex-shaped buildings become a popular trend among architects, this paper is presented to investigate the performance of the reinforced concrete flat plate-inclined column connection. Studies on inclined column and flat plate connections are scarce in comparison to those on conventional structures. The effect of the column angle of inclination on the punching shear strength is found to be significant and is studied herein. This paper presents a non-linear finite element based modeling approach to estimate the behavior of the RC flat plate-inclined column connection. Results from simulations of the RC flat plate-straight column connection show good agreement with the experimental response of specimens tested by other researchers. The model is further used to study the response of inclined columns to punching over a range of inclination angles. The inclination angle can be included in the punching shear strength provisions of ACI 318-14 to account for the effect of column inclination.
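For context, the ACI 318-14 two-way (punching) shear stress referred to above is the least of three expressions; a minimal Python sketch is given below, with the inclination modifier shown only as a hypothetical placeholder for the kind of adjustment the paper proposes (the actual proposed provision is not reproduced here).

```python
import math

def aci_punching_stress(fc_psi, beta, alpha_s, d, b0, lam=1.0):
    """Nominal two-way (punching) shear stress per ACI 318-14 Table 22.6.5.2 (psi units).

    fc_psi  : concrete compressive strength, psi
    beta    : ratio of long to short side of the column
    alpha_s : 40 interior, 30 edge, 20 corner column
    d       : average effective slab depth, in
    b0      : critical perimeter at d/2 from the column face, in
    """
    sqrt_fc = math.sqrt(fc_psi)
    return min(4.0 * lam * sqrt_fc,
               (2.0 + 4.0 / beta) * lam * sqrt_fc,
               (alpha_s * d / b0 + 2.0) * lam * sqrt_fc)

vc = aci_punching_stress(fc_psi=4000, beta=1.0, alpha_s=40, d=6.0, b0=4 * (14 + 6))
print(f"v_c = {vc:.1f} psi")

# Hypothetical inclination modifier (illustration of the paper's idea, not its proposal):
# v_c_inclined = vc * k(theta), with k(theta) calibrated from the FE simulations.
```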

Keywords: punching shear, non-linear finite element, inclined columns, reinforced concrete connection

Procedia PDF Downloads 230
3611 Nonparametric Path Analysis with Truncated Spline Approach in Modeling Rural Poverty in Indonesia

Authors: Usriatur Rohma, Adji Achmad Rinaldo Fernandes

Abstract:

Nonparametric path analysis is a statistical method that does not require the form of the regression curve to be known. The purpose of this study is to determine the best nonparametric truncated spline path function between linear and quadratic polynomial degrees with 1, 2, and 3 knot points, and to determine the significance of the estimate of the best nonparametric truncated spline path function in the model of the effect of population migration and agricultural economic growth on rural poverty through the unemployment rate variable, using the t-test statistic at the jackknife resampling stage. The data used in this study are secondary data obtained from statistical publications. The results show that the best nonparametric truncated spline path model is the quadratic polynomial degree with 3 knot points. In addition, the significance test of the best truncated spline nonparametric path function estimate using jackknife resampling shows that all exogenous variables have a significant influence on the endogenous variables.
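A minimal sketch of the truncated spline basis underlying such models is given below; the data, knot locations, and degree are illustrative assumptions, not the study's poverty data.

```python
import numpy as np

def truncated_spline_basis(x, knots, degree):
    """Design matrix for a truncated polynomial spline:
    [1, x, ..., x^p, (x-k1)_+^p, ..., (x-kK)_+^p]."""
    x = np.asarray(x, dtype=float)
    cols = [x ** j for j in range(degree + 1)]
    cols += [np.maximum(x - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

# Illustrative data (not the study's poverty data)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = np.sin(x) + rng.normal(0, 0.2, 200)

X = truncated_spline_basis(x, knots=[3.3, 6.6], degree=2)   # quadratic, 2 knots
beta, *_ = np.linalg.lstsq(X, y, rcond=None)                # least-squares estimate
print(beta)
```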

Keywords: nonparametric path analysis, truncated spline, linear, quadratic, rural poverty, jackknife resampling

Procedia PDF Downloads 20
3610 Customer Acquisition through Time-Aware Marketing Campaign Analysis in Banking Industry

Authors: Harneet Walia, Morteza Zihayat

Abstract:

Customer acquisition has become one of the critical issues of any business in the 21st century, and a healthy customer base is the essential asset of the banking business. Term deposits act as a major source of cheap funds for banks to invest and benefit from interest rate arbitrage. To attract customers, the marketing campaigns at most financial institutions consist of multiple outbound telephone calls, with more than one contact per customer, which is a very time-consuming process. Therefore, customized direct marketing has become more critical than ever for attracting new clients. As customer acquisition is becoming more difficult to achieve, an intelligent and refined contact list is necessary to sell a product smartly. The aim of this research is to increase the effectiveness of campaigns by predicting which customers will most likely subscribe to a fixed deposit and by suggesting the most suitable month to reach out to them. We design a Time Aware Upsell Prediction Framework (TAUPF) using two different approaches, with the aim of finding the best approach and technique to build the prediction model. TAUPF is implemented using the Upsell Prediction Approach (UPA) and the Clustered Upsell Prediction Approach (CUPA). We also address the data imbalance problem by examining and comparing different methods of sampling (up-sampling and down-sampling). Our results show that building such a model is quite feasible and profitable for financial institutions. The TAUPF can easily be used in any industry, such as telecom, automobile, or tourism, where either approach (CUPA or UPA) holds valid. In our case, CUPA looks more reliable. As proven in our research, one of the most important challenges is to define measures that have enough predictive power, as the subscription to a fixed deposit depends on highly ambiguous situations and cannot be easily isolated. While we have shown the practicality of a time-aware upsell prediction model in which financial institutions can benefit from contacting customers in the specified month, further research needs to be done to understand the specific time of day. In addition, a further empirical/pilot study on real live customers needs to be conducted to prove the effectiveness of the model in the real world.
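The up-/down-sampling step mentioned above can be sketched as follows; the column names ('subscribed', 'contact_month') and the toy data are hypothetical, not the banking dataset used in the study.

```python
import pandas as pd
from sklearn.utils import resample

def rebalance(df, target="subscribed", how="up", random_state=42):
    """Up-sample the minority class or down-sample the majority class."""
    majority = df[df[target] == 0]
    minority = df[df[target] == 1]
    if how == "up":
        minority = resample(minority, replace=True,
                            n_samples=len(majority), random_state=random_state)
    else:
        majority = resample(majority, replace=False,
                            n_samples=len(minority), random_state=random_state)
    return pd.concat([majority, minority]).sample(frac=1, random_state=random_state)

# Toy example; 'subscribed' and 'contact_month' are hypothetical column names
df = pd.DataFrame({"subscribed": [0] * 90 + [1] * 10,
                   "contact_month": list(range(1, 13)) * 8 + [5, 6, 7, 8]})
print(rebalance(df, how="up")["subscribed"].value_counts())
```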

Keywords: customer acquisition, predictive analysis, targeted marketing, time-aware analysis

Procedia PDF Downloads 106
3609 Modeling, Analysis and Control of a Smart Composite Structure

Authors: Nader H. Ghareeb, Mohamed S. Gaith, Sayed M. Soleimani

Abstract:

In modern engineering, weight optimization is a priority during the design of structures. However, optimizing the weight can result in lower stiffness and less internal damping, causing the structure to become excessively prone to vibration. To overcome this problem, active or smart materials are implemented. The coupled electromechanical properties of smart materials, used in the form of piezoelectric ceramics in this work, make these materials well-suited for being implemented as distributed sensors and actuators to control the structural response. The smart structure proposed in this paper is composed of a cantilevered steel beam, an adhesive or bonding layer, and a piezoelectric actuator. The static deflection of the structure is derived as a function of the piezoelectric voltage, and the outcome is compared to theoretical and experimental results from the literature. The relation between the voltage and the piezoelectric moment at both ends of the actuator is also investigated, and a reduced finite element model of the smart structure is created and verified. Finally, a linear controller is implemented and its ability to attenuate the vibration due to the first natural frequency is demonstrated.

Keywords: active linear control, Lyapunov stability theorem, piezoelectricity, smart structure, static deflection

Procedia PDF Downloads 372
3608 Adomian’s Decomposition Method to Generalized Magneto-Thermoelasticity

Authors: Hamdy M. Youssef, Eman A. Al-Lehaibi

Abstract:

Due to many applications and problems in the fields of plasma physics, geophysics, and many other topics, the interaction between the strain field and the magnetic field has to be considered. Adomian introduced the decomposition method for solving linear and nonlinear functional equations. This method leads to accurate, computable, approximately convergent solutions of linear and nonlinear partial and ordinary differential equations, even equations with variable coefficients. This paper deals with a mathematical model of generalized thermoelasticity of a half-space conducting medium. A magnetic field of constant intensity acting normal to the bounding plane has been assumed. Adomian's decomposition method has been used to solve the model when the bounding plane is taken to be traction free and thermally loaded by harmonic heating. The numerical results for the temperature increment, the stress, the strain, the displacement, the induced magnetic field, and the electric field have been represented in figures. The magnetic field, the relaxation time, and the angular thermal load have significant effects on all the studied fields.
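To illustrate the decomposition method itself (not the magneto-thermoelastic model of the paper), the sketch below applies ADM with the standard Adomian-polynomial construction to the simple test problem u' + u² = 0, u(0) = 1, whose exact solution is 1/(1 + t).

```python
import sympy as sp

t, lam = sp.symbols('t lambda')
N = lambda u: u**2                 # nonlinear term in u' + u**2 = 0, u(0) = 1
n_terms = 6

u = [sp.Integer(1)]                # u0 from the initial condition
for n in range(n_terms - 1):
    partial_sum = sum(u[k] * lam**k for k in range(len(u)))
    # Adomian polynomial A_n = (1/n!) d^n/d(lam)^n N(sum u_k lam^k) at lam = 0
    A_n = sp.diff(N(partial_sum), lam, n).subs(lam, 0) / sp.factorial(n)
    u.append(sp.integrate(-A_n, (t, 0, t)))      # u_{n+1} = -integral_0^t A_n dtau

print(sp.expand(sum(u)))   # 1 - t + t**2 - t**3 + t**4 - t**5  ~  1/(1 + t)
```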

Keywords: Adomian’s decomposition method, magneto-thermoelasticity, finite conductivity, iteration method, thermal load

Procedia PDF Downloads 136
3607 Real-Time Quantitative Polymerase Chain Reaction Assay for the Detection of microRNAs Using Bi-Directional Extension Sequences

Authors: Kyung Jin Kim, Jiwon Kwak, Jae-Hoon Lee, Soo Suk Lee

Abstract:

MicroRNAs (miRNAs) are a class of endogenous, single-stranded, small, non-protein-coding RNA molecules, typically 20-25 nucleotides long. They are thought to regulate the expression of a broad range of other genes by binding to the 3'-untranslated regions (3'-UTRs) of specific mRNAs. The detection of miRNAs is very important for understanding the function of these molecules and for the diagnosis of a variety of human diseases. However, detection of miRNAs is very challenging because of their short length and the high sequence similarities within miRNA families. A simple-to-use, low-cost, and highly sensitive method for the detection of miRNAs is therefore desirable. In this study, we demonstrate a novel bi-directional extension (BDE) assay. In the first step, a specific linear RT primer is hybridized to 6-10 base pairs from the 3'-end of a target miRNA molecule and then reverse transcribed to generate a cDNA strand. After reverse transcription, the cDNA was hybridized at its 3'-end to the BDE sequence; this served as the PCR template. The template was amplified in a SYBR green-based quantitative real-time PCR. To prove the concept, we used human brain total RNA. It could be detected quantitatively over a range of seven orders of magnitude with excellent linearity and reproducibility. To evaluate the performance of the BDE assay, we contrasted the sensitivity and specificity of the BDE assay against a commercially available poly(A) tailing method, using let-7e miRNA extracted from A549 human epithelial lung cancer cells. The BDE assay displayed good performance compared with the poly(A) tailing method in terms of specificity and sensitivity; the CT values differed by 2.5, and the melting curve was sharper than that of the poly(A) tailing method. We have demonstrated an innovative, cost-effective BDE assay that allows improved sensitivity and specificity in the detection of miRNAs. The dynamic range of the SYBR green-based RT-qPCR for miR-145 could be represented quantitatively over a range of 7 orders of magnitude, from 0.1 pg to 1.0 μg of human brain total RNA. Finally, the BDE assay for the detection of miRNA species such as let-7e shows good performance compared with the poly(A) tailing method in terms of specificity and sensitivity. Thus, BDE provides a simple, low-cost, and highly sensitive assay for various miRNAs and should make significant contributions to research on miRNA biology and the application of disease diagnostics with miRNAs as targets.

Keywords: bi-directional extension (BDE), microRNA (miRNA), poly (A) tailing assay, reverse transcription, RT-qPCR

Procedia PDF Downloads 150
3606 ¹⁸F-FDG PET/CT Impact on Staging of Pancreatic Cancer

Authors: Jiri Kysucan, Dusan Klos, Katherine Vomackova, Pavel Koranda, Martin Lovecek, Cestmir Neoral, Roman Havlik

Abstract:

Aim: The prognosis of patients with pancreatic cancer is poor. The median survival after diagnosis is 3-11 months without surgical treatment and 13-20 months with surgical treatment, depending on the disease stage; 5-year survival is less than 5%. Radical surgical resection remains the only hope of curing the disease. Early diagnosis with valid establishment of tumor resectability is, therefore, the most important aim for patients with pancreatic cancer. The aim of this work is to evaluate the contribution and define the role of ¹⁸F-FDG PET/CT in preoperative staging. Material and Methods: In 195 patients (103 males, 92 females, median age 66.7 years, range 32-88 years) with a suspect pancreatic lesion, a hybrid ¹⁸F-FDG PET/CT was performed as part of the standard preoperative staging, in addition to the standard examination methods (ultrasonography, contrast spiral CT, endoscopic ultrasonography, endoscopic ultrasonographic biopsy). All PET/CT findings were subsequently compared with standard staging (CT, EUS, EUS FNA), with peroperative findings and definitive histology in the operated patients as reference standards. Interpretation defined the extent of the tumor according to the TNM classification. Limitations of resectability were local advancement (T4) and presence of distant metastases (M1). Results: PET/CT was performed in a total of 195 patients with a suspect pancreatic lesion. In 153 patients, pancreatic carcinoma was confirmed, and of these patients, 72 were not indicated for a radical surgical procedure due to local inoperability or generalization of the disease. The sensitivity of PET/CT in detecting the primary lesion was 92.2%, and specificity was 90.5%. A false negative finding occurred in 12 patients and a false positive finding in 4 cases; positive predictive value (PPV) was 97.2% and negative predictive value (NPV) 76.0%. In evaluating regional lymph nodes, sensitivity was 51.9%, specificity 58.3%, PPV 58.3%, and NPV 51.9%. In detecting distant metastases, PET/CT reached a sensitivity of 82.8%, specificity of 97.8%, PPV of 96.9%, and NPV of 87.0%. PET/CT found distant metastases in 12 patients which were not detected by standard methods. In 15 patients (15.6%) with potentially radically resectable findings, the procedure was contraindicated based on PET/CT findings, and the treatment strategy was changed. Conclusion: PET/CT is a highly sensitive and specific method useful in the preoperative staging of pancreatic cancer. It improves the selection of patients for radical surgical procedures who can benefit from them and decreases the number of incorrectly indicated operations.
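The reported accuracy measures for primary-lesion detection are consistent with the 2x2 table reconstructed from the abstract's own counts (153 confirmed carcinomas, 12 false negatives, 4 false positives among 195 patients); a small Python sketch of the calculation:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn)}

# Counts reconstructed from the abstract's figures for primary-lesion detection
print(diagnostic_metrics(tp=141, fp=4, fn=12, tn=38))
# -> sensitivity 92.2%, specificity 90.5%, PPV 97.2%, NPV 76.0%
```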

Keywords: cancer, PET/CT, staging, surgery

Procedia PDF Downloads 236
3605 Classifying Time Independent Plane Symmetric Spacetime through Noether's Approach

Authors: Nazish Iftikhar, Adil Jhangeer, Tayyaba Naz

Abstract:

The universe is expanding at an accelerated rate. Symmetries are useful in understanding the universe's behavior. Emmy Noether established the relation between symmetries and conservation laws; these symmetries are known as Noether symmetries, and each corresponds to a conserved quantity. Conservation laws play an important role in differential equations, and Noether symmetries are helpful in modified theories of gravity. In this work, time-independent plane symmetric spacetime was classified by Noether's theorem. By using Noether's theorem, a set of linear partial differential equations was obtained with A(r), B(r), and F(r) as unknown radial functions. The Lagrangian corresponding to the considered spacetime was used in the Noether equation to obtain the Noether operators. Different possibilities for the radial functions were considered. Firstly, all functions were taken to be the same, and they were considered in turn as a non-zero constant, linear, reciprocal, and exponential. Secondly, two functions were taken proportional to each other, keeping the third function different. The second case has four subcases, in which four different relationships between A(r), B(r), and F(r) were discussed. In all cases, we obtained nontrivial Noether operators, including the gauge term. The conserved quantities for each Noether operator were also presented.
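For reference, the Noether symmetry condition and the associated conserved quantity for a point-like Lagrangian L(s, x^i, x'^i) take the standard form below (stated here as background, not quoted from the paper):

```latex
% Noether symmetry condition for the generator X = \xi(s,x)\,\partial_s + \eta^i(s,x)\,\partial_{x^i}
% acting on a point-like Lagrangian L(s, x^i, x'^i), with gauge function A(s,x):
X^{[1]}L + (D_s\xi)\,L = D_s A,
\qquad
X^{[1]} = \xi\,\partial_s + \eta^i\,\partial_{x^i}
        + \bigl(D_s\eta^i - x'^i\,D_s\xi\bigr)\,\partial_{x'^i},
\qquad
D_s = \partial_s + x'^i\,\partial_{x^i}.

% Each Noether operator (\xi, \eta^i, A) yields the conserved quantity
I = \xi\,L + \bigl(\eta^i - \xi\,x'^i\bigr)\,\frac{\partial L}{\partial x'^i} - A.
```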

Keywords: Noether gauge symmetries, radial function, Noether operator, conserved quantities

Procedia PDF Downloads 215
3604 Analysis of Path Nonparametric Truncated Spline Maximum Cubic Order in Farmers Loyalty Modeling

Authors: Adji Achmad Rinaldo Fernandes

Abstract:

Path analysis tests cause-and-effect relationships between variables. Before conducting further tests in path analysis, the assumption of linearity must be met. If the shape of the relationship is not linear and the shape of the curve is unknown, a nonparametric approach is used, one of which is the truncated spline. The purpose of this study is to estimate the function and obtain the best model on the nonparametric truncated spline path of linear, quadratic, and cubic orders with 1 and 2 knot points, and to determine the significance of the best function estimator in modeling farmer loyalty through the jackknife resampling method. This study uses secondary data from questionnaires administered to 100 farmers in Sumbawa Regency who use subsidized SP-36 fertilizer products. Based on the results of the analysis, the best nonparametric truncated spline path model is the quadratic order with 2 knot points, with a coefficient of determination of 85.50%; the significance test of the best truncated spline nonparametric path estimator shows that all exogenous variables have a significant effect on the endogenous variables.
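A minimal sketch of the jackknife resampling t-statistic used for such significance tests is given below; the toy data and the slope estimator stand in for the actual path coefficients.

```python
import numpy as np

def jackknife_t(estimator, data):
    """Leave-one-out jackknife estimate, standard error and t statistic."""
    n = len(data)
    theta_full = estimator(data)
    loo = np.array([estimator(np.delete(data, i, axis=0)) for i in range(n)])
    pseudo = n * theta_full - (n - 1) * loo          # pseudo-values
    theta_jack = pseudo.mean()
    se = pseudo.std(ddof=1) / np.sqrt(n)
    return theta_jack, se, theta_jack / se

# Toy example: significance of a path (regression) coefficient
rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.6 * x + rng.normal(scale=0.5, size=100)
data = np.column_stack([x, y])
slope = lambda d: np.polyfit(d[:, 0], d[:, 1], 1)[0]
print(jackknife_t(slope, data))
```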

Keywords: nonparametric path analysis, farmer loyalty, jackknife resampling, truncated spline

Procedia PDF Downloads 28
3603 Support Services in Open and Distance Education: An Integrated Model of Open Universities

Authors: Evrim Genc Kumtepe, Elif Toprak, Aylin Ozturk, Gamze Tuna, Hakan Kilinc, Irem Aydin Menderis

Abstract:

Support services are very significant elements for all educational institutions in general; however, for distance learners, these services are more essential than for their traditional (face-to-face) counterparts. One of the most important reasons for this is that learners and instructors do not share the same physical environment and that distance learning settings generally require intrapersonal interactions rather than interpersonal ones. Some learners in distance learning programs feel isolated. Furthermore, some fail to feel a sense of belonging to the institution because of a lack of self-management skills, low motivation, and the need to be socialized, so they are more likely to fail or drop out of an online class. In order to overcome all these problems, support services have emerged as a critical element for an effective and sustainable distance education system. Within the context of distance education support services, it is natural to include technology-based and web-based services and the related materials. Moreover, institutions in the education sector are expected to use information and communication technologies effectively in order to be successful in educational activities and programs. In terms of the sustainability of the system, an institution should provide distance education services through ICT-enabled processes to support all stakeholders in the system, particularly distance learners. In this study, it is envisaged to develop a model based on the current support services literature in the field of open and distance learning and the applications of distance higher education institutions. Specifically, the content analysis technique is used to evaluate the existing literature on distance education support services, the information published on websites, and the applications of distance higher education institutions across the world. A total of 60 institutions met the inclusion criteria, which were language option (English) and availability of materials on the websites. Six field experts contributed to the brainstorming process to develop and extract codes for the coding scheme. During the coding process, these preset and emergent codes were used to conduct the analyses. Two coders independently reviewed and coded each assigned website to ensure that all coders interpreted the data the same way and to establish inter-coder reliability. Once each web page was included in the descriptive and relational analysis, a model of support services was developed by examining the generated codes and themes. It is believed that such a model would serve as a quality guide for future institutions, as well as current ones.

Keywords: support services, open education, distance learning, support model

Procedia PDF Downloads 184
3602 Stability Analysis and Controller Design of Further Development of Miniaturized Mössbauer Spectrometer II for Space Applications with Focus on the Extended Lyapunov Method – Part I –

Authors: Mohammad Beyki, Justus Pawlak, Robert Patzke, Franz Renz

Abstract:

In the context of planetary exploration, the MIMOS II (miniaturized Mössbauer spectrometer) serves as a proven and reliable measuring instrument. The transmission behaviour of the electronics for Mössbauer spectroscopy is newly developed and optimized. For this purpose, the overall electronics is split into three parts. This elaboration deals exclusively with the first part of the signal chain for the evaluation of photons in experiments with gamma radiation. In parallel to the analysis of the electronics, a new method for the stability consideration of linear and non-linear systems is presented: the extended method of Lyapunov's stability criteria. The design helps to weigh advantages and disadvantages against other simulated circuits in order to optimize the MIMOS II for terrestrial and extraterrestrial measurements. Finally, after the stability analysis, the controller design according to Ackermann is performed, achieving the best possible optimization of the output variable through a skillful pole assignment.
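The pole-assignment step "according to Ackermann" can be sketched as follows; the second-order plant is a placeholder, not the MIMOS II signal-chain electronics.

```python
import numpy as np

def ackermann_gain(A, B, desired_poles):
    """State-feedback gain K via Ackermann's formula for a SISO system x' = Ax + Bu."""
    n = A.shape[0]
    # Controllability matrix [B, AB, ..., A^(n-1) B]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # Desired characteristic polynomial evaluated at A
    coeffs = np.poly(desired_poles)                 # [1, a1, ..., an]
    phi_A = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e_n = np.zeros((1, n)); e_n[0, -1] = 1.0
    return e_n @ np.linalg.inv(C) @ phi_A

# Illustrative 2nd-order plant (placeholder values, not the MIMOS II electronics)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
K = ackermann_gain(A, B, desired_poles=[-5.0, -6.0])
print(K, np.linalg.eigvals(A - B @ K))             # eigenvalues at the desired poles
```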

Keywords: Mössbauer spectroscopy, electronic signal amplifier, light processing technology, photocurrent, trans-impedance amplifier, extended Lyapunov method

Procedia PDF Downloads 79
3601 Measurement Errors and Misclassifications in Covariates in Logistic Regression: Bayesian Adjustment of Main and Interaction Effects and the Sample Size Implications

Authors: Shahadut Hossain

Abstract:

Measurement errors in continuous covariates and/or misclassifications in categorical covariates are common in epidemiological studies. Regression analysis ignoring such mismeasurements seriously biases the estimated main and interaction effects of covariates on the outcome of interest. Thus, adjustments for such mismeasurements are necessary. In this research, we propose a Bayesian parametric framework for eliminating the deleterious impacts of covariate mismeasurements in logistic regression. The proposed adjustment method is unified and thus can be applied to any generalized linear or non-linear regression model. Furthermore, adjustment for covariate mismeasurements requires validation data, usually in the form of either gold-standard measurements or replicates of the mismeasured covariates on a subset of the study population. Initial investigation shows that the adequacy of such adjustment depends on the sizes of the main and validation samples, especially when the prevalences of the categorical covariates are low. Thus, we investigate the impact of the main and validation sample sizes on the adjusted estimates and provide general guidelines about these sample sizes based on simulation studies.
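A generic Bayesian formulation of this kind of adjustment, for one continuous covariate x observed with classical measurement error as w, can be written as follows (a standard template, not necessarily the authors' exact parametrization):

```latex
% Outcome model with main and interaction effects (z measured without error):
y_i \mid x_i, z_i \sim \mathrm{Bernoulli}(p_i), \qquad
\operatorname{logit}(p_i) = \beta_0 + \beta_x x_i + \beta_z z_i + \beta_{xz}\, x_i z_i,
\\[4pt]
% Classical measurement-error and exposure models, with validation data informing \sigma_u^2:
w_i \mid x_i \sim \mathcal{N}(x_i, \sigma_u^2), \qquad
x_i \sim \mathcal{N}(\mu_x, \sigma_x^2),
\\[4pt]
\beta \sim \mathcal{N}(0, \tau^2 I), \qquad
\sigma_u^2,\ \sigma_x^2 \sim \text{weakly informative priors}.
```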

Keywords: measurement errors, misclassification, mismeasurement, validation sample, Bayesian adjustment

Procedia PDF Downloads 396
3600 Loading Factor Performance of a Centrifugal Compressor Impeller: Specific Features and Way of Modeling

Authors: K. Soldatova, Y. Galerkin

Abstract:

A loading factor performance curve is necessary for modeling the gas-dynamic performance curve of a centrifugal compressor. Measured loading factors are a linear function of the flow coefficient at the impeller exit, and the performance does not depend on the compressibility criterion. To simulate loading factor performances, the authors introduce two parameters: the loading factor at zero flow rate and the angle between the ordinate and the performance line. The calculated loading factor performances for non-viscous flow are linear too and close to the experimental performances. Loading factor performances were studied for several dozen impellers with different blade exit angles, blade thicknesses and numbers, ratios of blade exit/inlet height, and two different types of blade mean line configuration. Some trends of influence are evident: the influence of blade thickness is comparatively small, and the influence of the geometry parameters is greater for impellers with bigger blade exit angles, etc. Approximating equations for both parameters are suggested. The next phase of the work will be the simulation of experimental performances using the suggested approximating equations as a base.
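A minimal sketch of fitting the two suggested parameters to a measured (linear) loading factor performance is given below; the data points are placeholders, and the angle is computed here from the flow-coefficient axis, which may differ from the authors' definition of the angle between the ordinate and the performance line.

```python
import numpy as np

# Illustrative measured points (placeholders): loading factor psi_T vs. exit flow coefficient phi_2
phi2 = np.array([0.20, 0.25, 0.30, 0.35, 0.40])
psi_T = np.array([0.62, 0.58, 0.55, 0.51, 0.47])

slope, psi_T0 = np.polyfit(phi2, psi_T, 1)       # psi_T ~ psi_T0 + slope * phi2
angle_deg = np.degrees(np.arctan(-slope))        # inclination of the performance line
print(f"loading factor at zero flow rate = {psi_T0:.3f}, line angle = {angle_deg:.1f} deg")
```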

Keywords: loading factor performance, centrifugal compressor, impeller, modeling

Procedia PDF Downloads 334
3599 A Soft Error Rates (SER) Evaluation Method of Combinational Logic Circuit Based on Linear Energy Transfers

Authors: Man Li, Wanting Zhou, Lei Li

Abstract:

Communication stability is the primary concern of communication satellites. Communication satellites are easily affected by particle radiation, which generates single event effects (SEE) and leads to soft errors (SE) in combinational logic circuits. Existing research on the soft error rates (SER) of combinational logic circuits is mostly based on the assumption that the logic gates being bombarded have the same pulse width. However, in an actual radiation environment, the pulse widths of the logic gates being bombarded differ due to different linear energy transfers (LET). In order to improve the accuracy of the SER evaluation model, this paper proposes a soft error rate evaluation method based on LET. The authors analyze the influence of LET on the pulse width of combinational logic and establish a pulse width model based on LET. Based on this model, the error rate of the ISCAS'85 test circuit is calculated. The effectiveness of the model is proved by comparing it with previous experiments.

Keywords: communication satellite, pulse width, soft error rates, LET

Procedia PDF Downloads 152
3598 Output-Feedback Control Design for a General Class of Systems Subject to Sampling and Uncertainties

Authors: Tomas Menard

Abstract:

The synthesis of output-feedback control laws has been investigated by many researchers since the last century. While many results exist for the case of Linear Time Invariant systems whose measurements are continuously available, control laws are nowadays usually implemented on micro-controllers, so the measurements are discrete-time by nature. This fact has to be taken into account explicitly in order to obtain a satisfactory behavior of the closed-loop system. One considers here a general class of systems corresponding to an observability normal form, subject to uncertainties in the dynamics and to sampling of the output. Indeed, in practice, the modeling of the system is never perfect, which results in unknown uncertainties in the dynamics of the model. We propose here an output-feedback algorithm which is based on a linear state feedback and a continuous-discrete time observer. The main feature of the proposed control law is that only discrete-time measurements of the output are needed. Furthermore, it is formally proven that the state of the closed-loop system converges exponentially toward the origin despite the unknown uncertainties. Finally, the performance of this control scheme is illustrated with simulations.

Keywords: dynamical systems, output feedback control law, sampling, uncertain systems

Procedia PDF Downloads 271
3597 Student Loan Debt among Students with Disabilities

Authors: Kaycee Bills

Abstract:

This study determines whether students with disabilities have higher student loan debt payments than other student populations. The hypothesis was that students with disabilities would have significantly higher student loan debt payments than other students due to the length of time they spend in school. Using the Baccalaureate and Beyond Study Wave 2015/017 dataset, quantitative methods were employed. These data analysis methods included linear regression and a correlation matrix. Due to the exploratory nature of the study, the significance levels for the overall model and each variable were set at .05. The correlation matrix demonstrated that students with certain types of disabilities are more likely to fall into higher student loan payment brackets than students without disabilities. These results also varied among the different types of disabilities. The result of the overall linear regression model was statistically significant (p = .04). Despite the overall model being statistically significant, the majority of the significance values for the different types of disabilities were null. However, several other variables had statistically significant results, such as veterans, people of minority races, and people who attended private schools. Implications for how this impacts the economy, capitalism, and the financial wellbeing of various students are discussed.

Keywords: disability, student loan debt, higher education, social work

Procedia PDF Downloads 154
3596 Modelling Spatial Dynamics of Terrorism

Authors: André Python

Abstract:

To this day, terrorism persists as a worldwide threat, exemplified by the deadly attacks of January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. In order to increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that are able to capture the complex spatial dynamics of terrorism occurring at a local scale. Although empirical research carried out at the country level has confirmed theories explaining the diffusion processes of terrorism across space and time, scholars have failed to assess these diffusion theories on a local scale. Moreover, since scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models that are accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism's diffusion on a local scale and provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised in the form of Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through an integrated nested Laplace approximation, a recent fitting approach that computes fast and accurate estimates of posterior marginals. Hence, for each location in the world, the model provides a probability of encountering a lethal terrorist attack and measures of volatility, which inform on the model's predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former process describes an expansion from areas with a high concentration of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors that operate on a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.

Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling

Procedia PDF Downloads 337
3595 Developing a New Relationship between Undrained Shear Strength and Over-Consolidation Ratio

Authors: Wael M Albadri, Hassnen M Jafer, Ehab H Sfoog

Abstract:

The relationship between undrained shear strength (Su) and over-consolidation ratio (OCR) of clay soil (marine clay) is very important in the field of geotechnical engineering for estimating the settlement behaviour of clay and for preparing small-scale physical modelling tests. In this study, the relationship between the shear strength and OCR parameters was determined using the laboratory vane shear apparatus and a fully automatic consolidation apparatus. The main objective was to establish a non-linear correlation formula between shear strength and OCR and to compare it with previous studies. To achieve this objective, three points were chosen to obtain 18 undisturbed samples, which were collected at depths increasing from 1.0 m to 3.5 m in 0.5 m increments. Clay samples were prepared under undrained conditions for both tests. It was found that the OCR and shear strength are inversely proportional at similar depths and under the same undrained conditions. A good correlation was obtained from the relationships, where the R² values were very close to 1.0 using polynomial equations. The comparison between the experimental results and previous equations from other researchers produced a non-linear correlation with a pattern similar to that of this study.
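A minimal sketch of fitting and scoring such a non-linear (polynomial) Su-OCR correlation is given below; the data points are placeholders, not the paper's vane shear measurements.

```python
import numpy as np

# Placeholder vane-shear data (not the paper's measurements): Su in kPa vs. OCR
ocr = np.array([1.2, 1.5, 2.0, 2.8, 3.6, 4.5])
su  = np.array([34.0, 30.5, 26.0, 22.5, 20.0, 18.5])

coeffs = np.polyfit(ocr, su, 2)                  # quadratic (non-linear) correlation
su_hat = np.polyval(coeffs, ocr)
ss_res = np.sum((su - su_hat) ** 2)
ss_tot = np.sum((su - su.mean()) ** 2)
print("Su(OCR) coefficients:", coeffs, " R^2 =", 1 - ss_res / ss_tot)
```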

Keywords: shear strength, over-consolidation ratio, vane shear test, clayey soil

Procedia PDF Downloads 259
3594 The Use of Stochastic Gradient Boosting Method for Multi-Model Combination of Rainfall-Runoff Models

Authors: Phanida Phukoetphim, Asaad Y. Shamseldin

Abstract:

In this study, the novel Stochastic Gradient Boosting (SGB) combination method is addressed for producing daily river flows from four different rainfall-runoff models of the Ohinemuri catchment, New Zealand. The selected rainfall-runoff models are two empirical black-box models, the linear perturbation model and the linear varying gain factor model, and two conceptual models, the soil moisture accounting and routing model and the Nedbør-Afstrømnings model. The simple average combination method and the weighted average combination method were used as benchmarks for comparing the results of the novel SGB combination method. The models and combination results are evaluated using statistical and graphical criteria. The overall results of this study show that the use of the combination technique can certainly improve the simulated river flows of the four selected models for the Ohinemuri catchment, New Zealand. The results also indicate that the novel SGB combination method is capable of accurate prediction when used to combine the simulated river flows.
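A sketch of the combination step is given below, using scikit-learn's gradient boosting with subsampling (the "stochastic" element); the synthetic flows merely stand in for the four models' simulated daily flows and the observed flows of the Ohinemuri catchment.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in: columns = outputs of the four rainfall-runoff models,
# target = observed daily flow (real data would come from the Ohinemuri catchment)
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=10.0, size=2000)
sims = np.column_stack([obs * f + rng.normal(0, 3, 2000) for f in (0.8, 0.9, 1.1, 1.2)])

X_tr, X_te, y_tr, y_te = train_test_split(sims, obs, test_size=0.3, random_state=0)

sgb = GradientBoostingRegressor(subsample=0.5,      # subsampling -> stochastic gradient boosting
                                n_estimators=300, learning_rate=0.05, max_depth=3)
sgb.fit(X_tr, y_tr)
print("SGB combination R^2:", sgb.score(X_te, y_te))
print("simple average R^2 (benchmark):",
      1 - np.sum((y_te - X_te.mean(axis=1))**2) / np.sum((y_te - y_te.mean())**2))
```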

Keywords: multi-model combination, rainfall-runoff modeling, stochastic gradient boosting, bioinformatics

Procedia PDF Downloads 321
3593 Early Predictive Signs for Kasai Procedure Success

Authors: Medan Isaeva, Anna Degtyareva

Abstract:

Context: Biliary atresia is a common reason for liver transplants in children, and the Kasai procedure can potentially be successful in avoiding the need for transplantation. However, it is important to identify factors that influence surgical outcomes in order to optimize treatment and improve patient outcomes. Research aim: The aim of this study was to develop prognostic models to assess the outcomes of the Kasai procedure in children with biliary atresia. Methodology: This retrospective study analyzed data from 166 children with biliary atresia who underwent the Kasai procedure between 2002 and 2021. The effectiveness of the operation was assessed based on specific criteria, including post-operative stool color, jaundice reduction, and bilirubin levels. The study involved a comparative analysis of various parameters, such as gestational age, birth weight, age at operation, physical development, liver and spleen sizes, and laboratory values including bilirubin, ALT, AST, and others, measured pre- and post-operation. Ultrasonographic evaluations were also conducted pre-operation, assessing the hepatobiliary system and related quantitative parameters. The study was carried out by two experienced specialists in pediatric hepatology. Comparative analysis and multifactorial logistic regression were used as the primary statistical methods. Findings: The study identified several statistically significant predictors of a successful Kasai procedure, including the presence of the gallbladder and levels of cholesterol and direct bilirubin post-operation. A detectable gallbladder was associated with a higher probability of surgical success, while elevated post-operative cholesterol and direct bilirubin levels were indicative of a reduced chance of positive outcomes. Theoretical importance: The findings of this study contribute to the optimization of treatment strategies for children with biliary atresia undergoing the Kasai procedure. By identifying early predictive signs of success, clinicians can modify treatment plans and manage patient care more effectively and proactively. Data collection and analysis procedures: Data for this analysis were obtained from the health records of patients who received the Kasai procedure. Comparative analysis and multifactorial logistic regression were employed to analyze the data and identify significant predictors. Question addressed: The study addressed the question of identifying predictive factors for the success of the Kasai procedure in children with biliary atresia. Conclusion: The developed prognostic models serve as valuable tools for early detection of patients who are less likely to benefit from the Kasai procedure. This enables clinicians to modify treatment plans and manage patient care more effectively and proactively. Potential limitations of the study: The study has several limitations. Its retrospective nature may introduce biases and inconsistencies in data collection. Being single centered, the results might not be generalizable to wider populations due to variations in surgical and postoperative practices. Also, other potential influencing factors beyond the clinical, laboratory, and ultrasonographic parameters considered in this study were not explored, which could affect the outcomes of the Kasai operation. Future studies could benefit from including a broader range of factors.

Keywords: biliary atresia, Kasai operation, prognostic model, native liver survival

Procedia PDF Downloads 36
3592 Influence of the Financial Crisis on the Month and the Trading Month Effects: Evidence from the Athens Stock Exchange

Authors: Aristeidis Samitas, Evangelos Vasileiou

Abstract:

The aim of this study is to examine the month and the trading month effects under changing financial trends. We choose the Greek stock market to test our assumption because it has clear and long-term periods of financial growth and recession. Daily financial data from the Athens Exchange General Index for the period 2002-2012 are considered. The paper employs several linear and non-linear models; the TGARCH asymmetry model best fits this sample, and for this reason we mainly present the TGARCH results. Empirical results show that changing economic and financial conditions influence the calendar effects. In particular, the trading month effect changes completely in each fortnight according to the financial trend. On the other hand, in Greece the January effect exists during growth periods, although it does not exist when the financial trend changes. The findings are helpful to anybody who invests in and deals with the Greek stock market. Moreover, they may pave the way for an alternative approach to calendar anomalies research, so they may be useful to investors who take these anomalies into account when drawing up their investment strategy.
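A threshold/GJR-type GARCH of the kind referred to as TGARCH can be sketched with the Python arch package as follows; the synthetic returns stand in for the Athens Exchange General Index series, and the January comparison is only a crude illustration of a calendar-effect check, not the paper's specification.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Synthetic daily returns as a stand-in for the Athens Exchange General Index (2002-2012)
idx = pd.bdate_range("2002-01-01", "2012-12-31")
rng = np.random.default_rng(0)
returns = pd.Series(rng.standard_t(df=5, size=len(idx)) * 0.8, index=idx)

# Asymmetric (GJR/threshold) GARCH(1,1): the o=1 term captures the leverage effect
am = arch_model(returns, mean="Constant", vol="GARCH", p=1, o=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.summary())

# Crude calendar check: mean return in January vs. the rest of the year
jan = returns.index.month == 1
print("mean return January vs rest:", returns[jan].mean(), returns[~jan].mean())
```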

Keywords: month effect, trading month effect, economic cycles, crisis

Procedia PDF Downloads 405
3591 Dietary Patterns and Hearing Loss in Older People

Authors: N. E. Gallagher, C. E. Neville, N. Lyner, J. Yarnell, C. C. Patterson, J. E. Gallacher, Y. Ben-Shlomo, A. Fehily, J. V. Woodside

Abstract:

Hearing loss is highly prevalent in older people and can reduce quality of life substantially. Emerging research suggests that potentially modifiable risk factors, including risk factors previously related to cardiovascular disease risk, may be associated with a decreased or increased incidence of hearing loss. This has prompted investigation into the possibility that certain nutrients, foods or dietary patterns may also be associated with the incidence of hearing loss. The aim of this study was to determine any associations between dietary patterns and hearing loss in men enrolled in the Caerphilly study. The Caerphilly prospective cohort study began in 1979-1983 with the recruitment of 2512 men aged 45-59 years. Dietary data were collected using a self-administered, semi-quantitative, 56-item food frequency questionnaire (FFQ) at baseline (1979-1983), and a 7-day weighed food intake (WI) in a 30% sub-sample, while pure-tone unaided audiometric threshold was assessed at 0.5, 1, 2 and 4 kHz between 1984 and 1988. Principal components analysis (PCA) was carried out to determine a posteriori dietary patterns, and multivariate linear and logistic regression models were used to examine associations with hearing level (pure tone average (PTA) of frequencies 0.5, 1, 2 and 4 kHz in decibels (dB)) for linear regression and with hearing loss (PTA > 25 dB) for logistic regression. Three dietary patterns were determined using PCA on the FFQ data: Traditional, Healthy, and High sugar/Alcohol avoider. After adjustment for potential confounding factors, both linear and logistic regression analyses showed a significant inverse association between the Healthy pattern and hearing loss (P<0.001), and linear regression analysis showed a significant association between the High sugar/Alcohol avoider pattern and hearing loss (P=0.04). Three similar dietary patterns were determined using PCA on the WI data: Traditional, Healthy, and High sugar/Alcohol avoider. After adjustment for potential confounding factors, logistic regression analyses showed a significant inverse association between the Healthy pattern and hearing loss (P=0.02) and a significant association between the Traditional pattern and hearing loss (P=0.04). A Healthy dietary pattern was found to be significantly inversely associated with hearing loss in middle-aged men in the Caerphilly study. Furthermore, a High sugar/Alcohol avoider pattern (FFQ) and a Traditional pattern (WI) were associated with poorer hearing levels. Consequently, the role of dietary factors in hearing loss remains to be fully established and warrants further investigation.

Keywords: ageing, diet, dietary patterns, hearing loss

Procedia PDF Downloads 218
3590 Non-Linear Static Analysis of Screwed Moment Connections in Cold-Formed Steel Frames

Authors: Jikhil Joseph, Satish Kumar S R.

Abstract:

Cold-formed steel frames are preferable for framed construction due to their low seismic weight, which results in low seismic forces; on the other hand, significant lateral deflections are expected under seismic/wind loading. The factors affecting the lateral stiffness of steel frames are the stiffness of the connections, beams and columns, so increasing the stiffness of the beams and columns and making the connections rigid will enhance the lateral stiffness. The present study is focused on structural elements made of rectangular hollow sections and fastened with screwed in-plane moment connections for building frames. The self-drilling screws can easily be drilled on either side of the connection area with the help of gusset plates. The strength of the screwed connections can be made 1.2 times that of the connected elements. However, achieving high stiffness in the connections is also a challenging job. Hence, in addition to the beam and column stiffnesses, the connection stiffness is also going to be a governing parameter for the lateral deflections of the frames. SAP2000 non-linear static analysis is planned to study the seismic behavior of the steel frames. The SAP model will consist of a nonlinear spring model for the connection, to account for the semi-rigid connections, and nonlinear hinges will be assigned to the beam and column sections according to FEMA 273 guidelines. Reliable spring and hinge parameters will be assigned based on an experimental and analytical database. The non-linear static analysis is mainly focused on the identification of the various hinge formations and the estimation of the lateral deflection, and these will serve as inputs for direct displacement-based seismic design. The research outputs of this study are modelling techniques and suitable design guidelines for the performance-based seismic design of cold-formed steel frames.

Keywords: buckling, cold formed steel, nonlinear static analysis, screwed connections

Procedia PDF Downloads 160
3589 Modelling Fluidization by Data-Based Recurrence Computational Fluid Dynamics

Authors: Varun Dongre, Stefan Pirker, Stefan Heinrich

Abstract:

Over the last decades, the numerical modelling of fluidized bed processes has become feasible even for industrial processes. Commonly, continuous two-fluid models are applied to describe large-scale fluidization. In order to allow for coarse grids, novel two-fluid models account for unresolved sub-grid heterogeneities. However, computational efforts remain high, in the order of several hours of compute-time for a few seconds of real-time, thus preventing the representation of long-term phenomena such as heating or particle conversion processes. In order to overcome this limitation, data-based recurrence computational fluid dynamics (rCFD) has been put forward in recent years. rCFD can be regarded as a data-based method that relies on the numerical predictions of a conventional short-term simulation. These data are stored in a database and then used by rCFD to efficiently time-extrapolate the flow behavior at high spatial resolution. This study compares the numerical predictions of rCFD simulations with those of corresponding full CFD reference simulations for lab-scale and pilot-scale fluidized beds. In assessing the predictive capabilities of rCFD simulations, we focus on solid mixing and secondary gas holdup. We observed that predictions made by rCFD simulations are highly sensitive to numerical parameters such as the diffusivity associated with face swaps. We achieved a computational speed-up of four orders of magnitude (10,000 times faster than a classical TFM simulation), eventually allowing for real-time simulations of fluidized beds. In the next step, we apply the checkerboarding technique by introducing gas tracers subjected to convection and diffusion. We then analyze the concentration profiles by observing the mixing and transport of the gas tracers, gaining insights into their convective and diffusive patterns, and moving further towards heat and mass transfer methods. Finally, we run rCFD simulations and calibrate them with numerical and physical parameters by comparison with conventional two-fluid model (full CFD) simulations. As a result, this study gives a clear indication of the applicability, predictive capabilities, and existing limitations of rCFD in the realm of fluidization modelling.

Keywords: multiphase flow, recurrence CFD, two-fluid model, industrial processes

Procedia PDF Downloads 55
3588 Analysis of Artificial Hip Joint Using Finite Element Method

Authors: Syed Zameer, Mohamed Haneef

Abstract:

The hip joint plays a very important role in human beings, as it carries the whole-body forces generated by various activities. These loads are repetitive and fluctuating, depending on activities such as standing, sitting, jogging, stair climbing, etc., and may lead to failure of the hip joint. Hip joint modification and replacement are common in elderly as well as younger persons. In this research study, static and fatigue analyses of a hip joint model were carried out using the finite element software ANSYS. The stress distribution obtained from the static analysis, the material properties, and the S-N curve data of fabricated ultra-high molecular weight polyethylene / 50 wt% short E-glass fibre + 40 wt% TiO2 polymer matrix composite specimens were used to estimate the fatigue life of the hip joint using a stiffness degradation model for polymer matrix composites. The stress distribution obtained from the static analysis was found to be within the acceptable range. The damage factor calculated from the Palmgren-Miner linear damage rule is less than one, which indicates that the component is safe under the design loads.
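The damage summation behind the Palmgren-Miner rule can be sketched as follows; the S-N curve and loading blocks are illustrative placeholders, not the composite data used in the study.

```python
def miner_damage(stress_blocks, sn_curve):
    """Cumulative damage D = sum(n_i / N_i) from the Palmgren-Miner linear rule.
    stress_blocks: list of (stress_amplitude, applied_cycles)
    sn_curve: callable giving cycles-to-failure N(S)."""
    return sum(n / sn_curve(s) for s, n in stress_blocks)

# Placeholder Basquin-type S-N curve N = (S / 400)**(-10) and loading spectrum
sn = lambda s: (s / 400.0) ** (-10.0)
blocks = [(100.0, 1e4), (80.0, 5e4), (60.0, 1e5)]   # (MPa, cycles) - illustrative only

D = miner_damage(blocks, sn)
print(f"damage sum D = {D:.3f}  ->  {'safe (D < 1)' if D < 1 else 'failure predicted'}")
```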

Keywords: hip joint, polymer matrix composite, static analysis, fatigue analysis, stress life approach

Procedia PDF Downloads 342
3587 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling

Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari

Abstract:

A new definition of a latent variable associated with a dataset makes it possible to propose variants of the PLS2 regression and the multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS respectively because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading weights vector equals 1. Formally, t=Zw with ‖w‖=1. Denoting by Z' the transpose of Z, we define herein, a latent variable by t=ZZ’q with the constraint that the auxiliary variable q has a norm equal to 1. This new definition of a latent variable entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w=Z’q is constrained to be a linear combination of the rows of Z. More importantly, t could be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear to the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u=YY’q to which we associate the variable t= XX’YY’q. Rd-PLS consists in seeking q (and therefore u and t) so that the covariance between t and u is maximum. The solution to this problem is straightforward and consists in setting q to the eigenvector of YY’XX’YY’ associated with the largest eigenvalue. For the determination of higher order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u=YY’q, we consider its ‘projection’ on the space generated by the variables of each block Xk (k=1, ..., K) namely, tk= XkXk'YY’q. Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with tk (k=1, ..., K). The solution to this problem is given by q, eigenvector of YY’XX’YY’, where X is the dataset obtained by horizontally merging datasets Xk (k=1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and Xk with respect to the variable t= XX’YY’q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. Methods are illustrated on the basis of case studies and performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.
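A minimal numpy sketch of one Rd-PLS component, following the construction described above (q as the leading eigenvector of YY'XX'YY', u = YY'q, t = XX'YY'q, then deflation with respect to t):

```python
import numpy as np

def rd_pls_component(X, Y):
    """One Rd-PLS component: q = leading eigenvector of Y Y' X X' Y Y',
    u = Y Y' q, t = X X' Y Y' q."""
    M = Y @ Y.T @ X @ X.T @ Y @ Y.T
    eigval, eigvec = np.linalg.eigh((M + M.T) / 2)   # symmetrize for numerical safety
    q = eigvec[:, -1]                                # eigenvector of the largest eigenvalue
    u = Y @ (Y.T @ q)
    t = X @ (X.T @ u)                                # = X X' Y Y' q
    return t, u, q

def deflate(Z, t):
    """Remove from Z the part explained by the latent variable t."""
    t = t / np.linalg.norm(t)
    return Z - np.outer(t, t @ Z)

# Toy centred data (n individuals, px / py variables)
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 6)); X -= X.mean(0)
Y = 0.5 * X[:, :3] + rng.normal(scale=0.3, size=(30, 3)); Y -= Y.mean(0)

t1, u1, _ = rd_pls_component(X, Y)
X2, Y2 = deflate(X, t1), deflate(Y, t1)              # repeat for higher-order components
print("cov(t1, u1) =", np.cov(t1, u1)[0, 1])
```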

Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis

Procedia PDF Downloads 126
3586 Non-parametric Linear Technique for Measuring the Efficiency of Winter Road Maintenance in the Arctic Area

Authors: Mahshid Hatamzad, Geanette Polanco

Abstract:

Improving the performance of Winter Road Maintenance (WRM) can increase traffic safety and reduce costs as well as environmental impacts. This study evaluates the efficiency of a WRM technique, namely salting, in the Arctic area by using Data Envelopment Analysis (DEA), a non-parametric linear method for measuring the efficiencies of decision-making units (DMUs) that handles multiple inputs and multiple outputs at the same time when their associated weights are not known. Here, roads are considered as the DMUs for which the efficiency must be determined. The three input variables considered are traffic flow, road area and WRM cost. The two output variables included are the level of safety on the roads and the environmental impacts resulting from WRM, the latter also being treated as an uncontrollable factor in the second scenario. The results rank the DMUs from the most efficient WRM to the least efficient one, and this information provides decision makers with technical support and suggested improvements for inefficient WRM, in order to achieve cost-effective WRM and safe road transportation during wintertime in Arctic areas.
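A minimal sketch of the input-oriented CCR envelopment model behind such a DEA evaluation is given below; the road data are illustrative placeholders, not the study's Arctic dataset.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (envelopment form).
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns theta in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                       # minimise theta; variables = [theta, lambdas]
    # inputs:  sum_j lambda_j * x_ji <= theta * x_oi
    A_in = np.hstack([-X[[o]].T, X.T])
    b_in = np.zeros(m)
    # outputs: sum_j lambda_j * y_jr >= y_or
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# Toy data: rows = road sections (DMUs); inputs = [traffic flow, road area, WRM cost],
# outputs = [safety score]; values are illustrative, not from the study.
X = np.array([[10.0, 5.0, 7.0], [8.0, 6.0, 5.0], [12.0, 4.0, 9.0], [9.0, 5.0, 6.0]])
Y = np.array([[8.0], [7.0], [9.0], [8.5]])
print([round(dea_ccr_input(X, Y, o), 3) for o in range(len(X))])
```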

Keywords: environmental impacts, DEA, risk and safety, WRM

Procedia PDF Downloads 106
3585 Glucose Monitoring System Using Machine Learning Algorithms

Authors: Sangeeta Palekar, Neeraj Rangwani, Akash Poddar, Jayu Kalambe

Abstract:

Biomedical analysis is an indispensable procedure for identifying health-related diseases like diabetes. Regularly monitoring the glucose level in our body helps us identify hyperglycemia and hypoglycemia, which can cause severe medical problems like nerve damage or kidney disease. This paper presents a method for predicting the glucose concentration in blood samples using image processing and machine learning algorithms. The glucose solution is prepared by the glucose oxidase (GOD) and peroxidase (POD) method. An experimental database is generated based on the colorimetric technique. The image of the glucose solution is captured by the Raspberry Pi camera and analyzed using image processing by extracting the RGB, HSV, and LUX color space values. Regression algorithms such as multiple linear regression, decision tree, random forest, and XGBoost were used to predict the unknown glucose concentration. The multiple linear regression algorithm predicts the results with 97% accuracy. The image processing and machine learning-based approach reduces the hardware complexity of existing platforms.
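A minimal sketch of the multiple linear regression step on extracted color features is given below; the synthetic RGB values stand in for features extracted from the Raspberry Pi images.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the colorimetric database: mean R, G, B of each solution image
# plus the known glucose concentration (mg/dL); real features would be the RGB/HSV/LUX
# values extracted from the Raspberry Pi images.
rng = np.random.default_rng(0)
conc = rng.uniform(50, 400, 120)
rgb = np.column_stack([255 - 0.3 * conc, 200 - 0.2 * conc, 150 + 0.1 * conc])
rgb += rng.normal(0, 5, rgb.shape)

X_tr, X_te, y_tr, y_te = train_test_split(rgb, conc, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)            # multiple linear regression
print("R^2 on held-out samples:", r2_score(y_te, model.predict(X_te)))
```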

Keywords: artificial intelligence glucose detection, glucose oxidase, peroxidase, image processing, machine learning

Procedia PDF Downloads 181