Search results for: block linear multistep methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18335


17705 Measuring the Effectiveness of Response Inhibition with Regard to Motor Complexity: Evidence from the Stroop Effect

Authors: Germán Gálvez-García, Marta Lavin, Javiera Peña, Javier Albayay, Claudio Bascour, Jesus Fernandez-Gomez, Alicia Pérez-Gálvez

Abstract:

We studied the effectiveness of response inhibition in movements with different degrees of motor complexity when they were executed in isolation and alternately. Sixteen participants performed the Stroop task, which was used as a measure of response inhibition. Participants responded by lifting the index finger and by reaching the screen with the same finger. Both actions were performed separately and alternately in different experimental blocks. Repeated-measures ANOVAs were used to compare reaction time, movement time, kinematic errors and movement errors across conditions (experimental block, movement, and congruency). Delta plots were constructed to perform distributional analyses of response inhibition and accuracy rate. The effectiveness of response inhibition did not differ when the movements were performed in separate blocks. It did differ, however, when the movements were performed alternately in the same experimental block, with inhibition being more effective for the lifting action. This could be due to competition for the available resources in a more complex scenario, which also demands adopting a strategy to avoid errors.

Keywords: response inhibition, motor complexity, Stroop task, delta plots

Procedia PDF Downloads 385
17704 Development of a Novel Antibacterial to Block Growth of Pseudomonas Aeruginosa and Prevent Biofilm Formation

Authors: Clara Franch de la Cal, Christopher J Morris, Michael McArthur

Abstract:

Cystic fibrosis (CF) is an autosomal recessive genetic disorder characterized by abnormal transport of chloride and sodium across the lung epithelium, leading to thick, viscous secretions. CF patients suffer from repeated bacterial pulmonary infections, with Pseudomonas aeruginosa (PA) eliciting the greatest inflammatory response and causing an irreversible loss of lung function that determines morbidity and mortality. The cell wall of PA is a permeability barrier to many antibacterials, and the rise of Multi-Drug-Resistant (MDR) strains is eroding the efficacy of the few remaining clinical options. In addition, when PA infection becomes established, it forms an antibiotic-resistant biofilm, embedded in which are slow-growing cells that are refractory to drug treatment, making the development of new antibacterials a major challenge. This work describes the development of a new type of nanoparticulate oligonucleotide antibacterial capable of tackling PA infections, including MDR strains. It is being developed to both block growth and prevent biofilm formation. These oligonucleotide therapeutics, Transcription Factor Decoys (TFD), act on novel genomic targets by capturing key regulatory proteins to block essential bacterial genes and defeat infection. They have been successfully transfected into a wide range of pathogenic bacteria, both in vitro and in vivo, using a proprietary delivery technology. The surfactant used self-assembles with TFD to form a nanoparticle that is stable in biological fluids, protects the TFD from degradation and preferentially transfects prokaryotic membranes. Key challenges are to adapt the nanoparticle so that it is active against PA in the context of biofilms and to formulate it for administration by inhalation. This would allow the drug to be delivered to the respiratory tract, thereby achieving drug concentrations sufficient to eradicate the pathogenic organisms at the site of infection.

Keywords: antibacterials, transcriptional factor decoys (TFDs), pseudomonas aeruginosa

Procedia PDF Downloads 275
17703 CFD Analysis of an Aft Sweep Wing in Subsonic Flow and Making Analogy with Roskam Methods

Authors: Ehsan Sakhaei, Ali Taherabadi

Abstract:

In this study, an aft sweep wing with specific characteristics was analyzed with a CFD method in the Fluent software. The wing's aerodynamic coefficients were calculated at different rake angles, and the slope of the wing lift curve with respect to rake angle was obtained. The wing section was selected from the NACA 6-series airfoils. The sweep angle of the wing is 15 degrees, the aspect ratio 8, and the taper ratio 0.4. The wing was designed and modeled in the CATIA software, meshed in the Gambit software, and its three-dimensional analysis was performed in Fluent. The CFD method used here was based on a pressure-based algorithm: the SIMPLE technique was used for solving the Navier-Stokes equations, and the Spalart-Allmaras model was utilized to simulate the three-dimensional wing in air. The Roskam method is one of the most commonly used methods for determining aerodynamic parameters in the field of airplane design. In this study, besides the CFD analysis, the Advanced Aircraft Analysis software was used to calculate the aerodynamic coefficients by the Roskam method. The CFD results were compared with the data obtained from the Roskam method, and the fidelity of the relations was evaluated. The comparison showed that in the linear region of the lift curve there is only a minor difference between the aerodynamic parameters obtained from CFD and the relations presented by Roskam.
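In the linear region discussed above, the finite-wing lift-curve slope can be cross-checked with a classical lifting-line estimate. The sketch below is a simplified stand-in for the Roskam relations (which are more detailed); the 2D section slope of 2π per radian and the span-efficiency factor are assumptions, and sweep effects are ignored:

```python
import math

def lift_curve_slope(a0: float, aspect_ratio: float, e: float = 0.95) -> float:
    """Lifting-line estimate of the finite-wing lift-curve slope (per radian).

    a0: 2D section lift-curve slope; e: span-efficiency factor.
    Sweep effects are ignored in this simplified estimate.
    """
    return a0 / (1.0 + a0 / (math.pi * e * aspect_ratio))

# Wing from the abstract: aspect ratio 8; thin-airfoil section slope ~ 2*pi
a = lift_curve_slope(2.0 * math.pi, 8.0)
```

For an aspect ratio of 8 this gives roughly 5 per radian, noticeably below the 2D value of 2π, which is the kind of finite-wing correction that the Roskam relations capture in more detail.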

Keywords: aft sweep wing, CFD method, fluent, Roskam, Spalart-Allmaras model

Procedia PDF Downloads 497
17702 Process Data-Driven Representation of Abnormalities for Efficient Process Control

Authors: Hyun-Woo Cho

Abstract:

Unexpected operational events or abnormalities of industrial processes have a serious impact on the quality of the final product of interest. In terms of statistical process control, fault detection and diagnosis is one of the essential tasks needed to run a process safely. In this work, a nonlinear representation of process measurement data is presented and evaluated using a simulated process. The effect of using different representation methods on diagnosis performance is tested in terms of computational efficiency and data handling. The results show that the nonlinear representation technique produced more reliable diagnosis results and outperformed linear methods. The use of a data-filtering step improved computational speed and diagnosis performance for the test data sets. The presented scheme differs from existing ones in that it attempts to extract the fault pattern in the reduced space, not in the original process variable space. Thus, this scheme helps to reduce the sensitivity of empirical models to noise.
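The idea of extracting a fault pattern in a reduced space rather than in the original variable space can be illustrated with a linear PCA sketch (the paper's representation is nonlinear; all data here are synthetic and the two-component model is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal operation" data: 2 latent factors seen through 5 sensors
W = rng.normal(size=(2, 5))
X_train = rng.normal(size=(200, 2)) @ W + 0.05 * rng.normal(size=(200, 5))

# Fit a 2-component PCA (the reduced space) on mean-centred training data
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
P = Vt[:2].T                       # loadings: map to/from the reduced space

def spe(x):
    """Squared prediction error: squared distance from the model subspace."""
    d = x - mu
    r = d - d @ P @ P.T
    return float(r @ r)

normal = rng.normal(size=2) @ W + 0.05 * rng.normal(size=5)
faulty = normal + np.array([0.0, 0.0, 3.0, 0.0, 0.0])   # sensor-bias fault
```

A sample lying far from the subspace learned on normal data is flagged as abnormal; the paper's nonlinear representation plays the role of this linear projection.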

Keywords: fault diagnosis, nonlinear technique, process data, reduced spaces

Procedia PDF Downloads 238
17701 2-Dimensional Kinematic Analysis on Sprint Start with Sprinting Performance of Novice Athletes

Authors: Satpal Yadav, Biswajit Basumatary, Arvind S. Sajwan, Ranjan Chakravarty

Abstract:

The purpose of the study was to assess the effect of selected 2D kinematic variables at the sprint start on the sprinting performance of novice athletes. Six athletes (3 national and 3 state level) of the Sports Authority of India, Guwahati, were selected for this study. The sprinters' means (M) and standard deviations (SD) were: age 17.44 (1.55) years, height 1.74 m (0.84 m), weight 62.25 (4.55) kg, arm length 65.00 (3.72) cm, and leg length 96.35 (2.71) cm. The Biokin-2D motion analysis system V4.5 was used for acquiring two-dimensional kinematic data on the sprint start and sprinting performance. For the kinematic analysis, a standard motion camera (a Sony handheld camera with a frame rate of 60 frames per second) was used. The photographic sequence was taken under controlled conditions. The camera was placed 12 m away from the athletes and fixed at a height of 1.2 m. It was found that national- and state-level athletes differ significantly in their knee trajectory, ankle trajectory, knee displacement, ankle displacement, knee linear velocity, ankle linear velocity, and ankle linear acceleration, whereas no significant difference was found between national- and state-level athletes in the linear acceleration of the knee joint at the sprint start. For all statistical tests, the level of significance was set at p < 0.05.
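Kinematic variables such as linear velocity and acceleration are derived from the digitized joint positions by finite differences at the camera's frame rate. A minimal sketch (the position samples below are hypothetical, not the study's data):

```python
import numpy as np

FPS = 60.0            # camera frame rate used in the study
dt = 1.0 / FPS

# Hypothetical digitized ankle x-positions (metres) over six frames
x = np.array([0.00, 0.01, 0.03, 0.06, 0.10, 0.15])

velocity = np.gradient(x, dt)            # central differences where possible
acceleration = np.gradient(velocity, dt)
```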

Keywords: 2D kinematic analysis, sprinting performance, novice athletes, sprint start

Procedia PDF Downloads 313
17700 An Experience Report on Course Teaching in Information Systems

Authors: Carlos Oliveira

Abstract:

This paper is a critique of the traditional model of teaching and presents alternative teaching methods, different from the traditional lecture. These methods are accompanied by reports of the experience of applying them in a class. It was concluded that in the lecture the student has a low learning rate, and that other methods should be used to create a more engaging learning environment for the student, contributing to (or facilitating) the learning process. However, the teacher should not use a single method, but rather a range of different methods, to ensure the learning experience does not become repetitive and fatiguing for the student.

Keywords: educational practices, experience report, IT in education, teaching methods

Procedia PDF Downloads 383
17699 Reliable Consensus Problem for Multi-Agent Systems with Sampled-Data

Authors: S. H. Lee, M. J. Park, O. M. Kwon

Abstract:

In this paper, the reliable consensus of multi-agent systems with sampled data is investigated. By using a suitable Lyapunov-Krasovskii functional and techniques such as the Wirtinger inequality, the Schur complement and the Kronecker product, the results for these systems are obtained by solving a set of Linear Matrix Inequalities (LMIs). One numerical example is included to show the effectiveness of the proposed criteria.
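A minimal sketch of the sampled-data consensus setting (not the paper's LMI criteria): the Kronecker product assembles the network dynamics from a graph Laplacian, and the sampled protocol drives all agents to the average of their initial states. The graph topology and sampling period are assumptions:

```python
import numpy as np

# Laplacian of a 3-agent path graph (an assumed topology for illustration)
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

h = 0.1                                  # sampling period
n = 1                                    # state dimension of each agent
# Sampled-data consensus protocol: x[k+1] = (I - h (L ⊗ I_n)) x[k]
Phi = np.eye(3 * n) - h * np.kron(L, np.eye(n))

x = np.array([1.0, 0.0, -2.0])           # initial states
for _ in range(500):
    x = Phi @ x                          # all agents converge to the average
```

Because the Laplacian is symmetric with zero row sums, the average of the states is invariant and the agents converge to it; the paper's LMI conditions certify this kind of convergence under sampling and faults.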

Keywords: multi-agent, linear matrix inequalities (LMIs), kronecker product, sampled-data, Lyapunov method

Procedia PDF Downloads 519
17698 Wear Behavior of Commercial Aluminium Engine Block and Piston under Dry Sliding Condition

Authors: Md. Salim Kaiser

Abstract:

In the present work, the effect of load and sliding distance on the tribological performance of a commercially used aluminium-silicon engine block and piston was evaluated at ambient conditions with 80% humidity under dry sliding, using a pin-on-disc setup with two different loads of 5 N and 20 N (yielding applied pressures of 0.30 MPa and 1.4 MPa, respectively), at a sliding velocity of 0.29 m/s and with the sliding distance varying from 260 m to 4200 m. Factors and conditions that had a significant effect were identified. The results showed that the load and the sliding distance affect the wear rate of both alloys, and that the wear rate increased with increasing load. The wear rate also increases almost linearly at low loads, rises to a maximum, and then attains a plateau with increasing sliding distance. For both applied loads, the piston alloy showed the better performance due to its higher Ni and Mg content. The worn surface and wear debris were characterized by optical microscopy, SEM and an EDX analyzer. The worn surface showed shallow grooves at low loads, while the groove width and depth increased as the load increased. Oxidative wear was found to be the predominant mechanism in the dry sliding of Al-Si alloys at low loads.
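Gravimetric wear analysis of the kind used here converts a measured mass loss into a volumetric wear rate; a minimal sketch (the mass-loss value and density below are illustrative assumptions, not the study's data):

```python
def wear_rate(mass_loss_g: float, density_g_cm3: float, distance_m: float) -> float:
    """Volumetric wear rate (mm^3 per metre) from gravimetric measurements."""
    volume_mm3 = mass_loss_g / density_g_cm3 * 1000.0   # cm^3 -> mm^3
    return volume_mm3 / distance_m

# Hypothetical values: 12 mg mass loss for an Al-Si alloy (~2.7 g/cm^3)
# over the longest sliding distance in the study (4200 m)
rate = wear_rate(0.012, 2.7, 4200.0)
```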

Keywords: wear, friction, gravimetric analysis, aluminium-silicon alloys, SEM, EDX

Procedia PDF Downloads 244
17697 Mortar Positioning Effects on Uniaxial Compression Behavior in Hollow Concrete Block Masonry

Authors: José Álvarez Pérez, Ramón García Cedeño, Gerardo Fajardo-San Miguel, Jorge H. Chávez Gómez, Franco A. Carpio Santamaría, Milena Mesa Lavista

Abstract:

The uniaxial compressive strength and modulus of elasticity in hollow concrete block masonry (HCBM) represent key mechanical properties for structural design considerations. These properties are obtained through experimental tests conducted on prisms or wallettes and depend on various factors, with the HCB contributing significantly to overall strength. One influential factor in the compressive behaviour of masonry is the thickness and method of mortar placement. Mexican regulations stipulate mortar placement over the entire net area (full-shell) for strength computation based on the gross area. However, in professional practice, there's a growing trend to place mortar solely on the lateral faces. Conversely, the United States of America standard dictates mortar placement and computation over the net area of HCB. The Canadian standard specifies mortar placement solely on the lateral face (Face-Shell-Bedding), where computation necessitates the use of the effective load area, corresponding to the mortar's placement area. This research aims to evaluate the influence of different mortar placement methods on the axial compression behaviour of HCBM. To achieve this, an experimental campaign was conducted, including: (1) 10 HCB specimens with mortar on the entire net area, (2) 10 HCB specimens with mortar placed on the lateral faces, (3) 10 prisms of 2-course HCB under axial compression with mortar in full-shell, (4) 10 prisms of 2-course HCB under axial compression with mortar in face-shell-bedding, (5) 10 prisms of 3-course HCB under axial compression with mortar in full-shell, (6) 10 prisms of 3-course HCB under axial compression with mortar in face-shell-bedding, (7) 10 prisms of 4-course HCB under axial compression with mortar in full-shell, and, (8) 10 prisms of 4-course HCB under axial compression with mortar in face-shell-bedding. 
A combination of sulphur and fly ash in a 2:1 ratio was used for the capping material, meeting the average compressive strength requirement of over 35 MPa as per NMX-C-036 standards. Additionally, a mortar with a strength of over 17 MPa was utilized for the prisms. The results indicate that prisms with mortar placed over the full-shell exhibit higher strength compared to those with mortar over the face-shell-bedding. However, the elastic modulus was lower for prisms with mortar placement over the full-shell compared to face-shell bedding.

Keywords: masonry, hollow concrete blocks, mortar placement, prisms tests

Procedia PDF Downloads 49
17696 Decomposition of Third-Order Discrete-Time Linear Time-Varying Systems into Its Second- and First-Order Pairs

Authors: Mohamed Hassan Abdullahi

Abstract:

Decomposition is used as a synthesis tool in several physical systems. It can also be used for tearing and restructuring in large-scale system analysis. On the other hand, the commutativity of series-connected systems has attracted the interest of researchers, and its advantages have been emphasized in the literature. This presentation looks into the necessary conditions for decomposing any third-order discrete-time linear time-varying system into a commutative pair of first- and second-order systems. Additional requirements are derived in the case of nonzero initial conditions. MATLAB simulations are used to verify the findings. The work is unique and is being published for the first time. It is important from the standpoint of synthesis and/or design, because many design techniques in engineering rely on tearing and reconstruction, the process of putting together simple components to create a finished product. Furthermore, it is demonstrated that, regarding sensitivity to initial conditions, some combinations may be better than others. The results of this work can be extended to the decomposition of fourth-order discrete-time linear time-varying systems into lower-order commutative pairs, either as two second-order commutative subsystems or as one first-order and one third-order commutative subsystem.
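The notion of a commutative pair can be checked numerically by cascading two subsystems in both orders and comparing outputs. The sketch below uses time-invariant first- and second-order subsystems with zero initial conditions, for which the cascade always commutes; the paper's contribution is the conditions under which time-varying pairs commute. The coefficients are arbitrary assumptions:

```python
import numpy as np

def simulate(b, a, u):
    """Simulate y[k] = sum_i b[i] u[k-i] - sum_{j>=1} a[j] y[k-j], with a[0] = 1
    and zero initial conditions (a causal linear difference equation)."""
    y = np.zeros(len(u))
    for k in range(len(u)):
        acc = sum(b[i] * u[k - i] for i in range(len(b)) if k - i >= 0)
        acc -= sum(a[j] * y[k - j] for j in range(1, len(a)) if k - j >= 0)
        y[k] = acc
    return y

# Assumed stable first- and second-order subsystems (time-invariant)
b1, a1 = [1.0], [1.0, -0.5]              # y[k] = 0.5 y[k-1] + u[k]
b2, a2 = [1.0, 0.2], [1.0, -0.3, 0.1]

rng = np.random.default_rng(1)
u = rng.normal(size=50)
y_ab = simulate(b2, a2, simulate(b1, a1, u))   # first-order then second-order
y_ba = simulate(b1, a1, simulate(b2, a2, u))   # reverse order
```

For time-varying coefficients, the two orderings generally produce different outputs, which is why explicit commutativity conditions are needed.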

Keywords: commutativity, decomposition, discrete time-varying systems, systems

Procedia PDF Downloads 97
17695 Inverse Scattering of Two-Dimensional Objects Using an Enhancement Method

Authors: A.R. Eskandari, M.R. Eskandari

Abstract:

A 2D complete identification algorithm for dielectric and multiple objects immersed in air is presented. The employed technique consists of initially retrieving the shape and position of the scattering object using a linear sampling method and then determining the electric permittivity and conductivity of the scatterer using adjoint sensitivity analysis. This inversion algorithm results in high computational speed and efficiency, and it can be generalized for any scatterer structure. Also, this method is robust with respect to noise. The numerical results clearly show that this hybrid approach provides accurate reconstructions of various objects.

Keywords: inverse scattering, microwave imaging, two-dimensional objects, Linear Sampling Method (LSM)

Procedia PDF Downloads 375
17694 Effects of Wind Load on the Tank Structures with Various Shapes and Aspect Ratios

Authors: Doo Byong Bae, Jae Jun Yoo, Il Gyu Park, Choi Seowon, Oh Chang Kook

Abstract:

There are several wind load provisions to evaluate the wind response of tank structures, such as API, Eurocode, etc. The assessment of wind action applying these provisions is made by performing finite element analysis using both linear bifurcation analysis and geometrically nonlinear analysis. By comparing the pressure patterns obtained from the analysis with the results of wind tunnel tests, the most appropriate wind load criteria will be recommended.

Keywords: wind load, finite element analysis, linear bifurcation analysis, geometrically nonlinear analysis

Procedia PDF Downloads 619
17693 Theoretical Study of Acetylation of P-Methylaniline Catalyzed by Cu²⁺ Ions

Authors: Silvana Caglieri

Abstract:

A theoretical study of the acetylation of p-methylaniline catalyzed by Cu²⁺ ions, based on analysis of the reaction intermediate, was carried out. The study of the acetylation of amines is of great interest owing to the utility of its reaction products; it is one of the most frequently used transformations in organic synthesis, as it provides an efficient and inexpensive means of protecting amino groups in a multistep synthetic process. Acetylation of an amine is a nucleophilic substitution reaction. This reaction can be catalyzed by a Lewis acid, such as a metallic ion. In the reaction mechanism, the metallic ion forms a complex with the oxygen of the acetic anhydride carbonyl, facilitating its polarization and the subsequent addition of the amine to form a tetrahedral intermediate, the rate-determining step of the reaction. Experimental work agrees that this reaction takes place with the formation of a tetrahedral intermediate. In the present theoretical work, the structure and energy of the tetrahedral intermediate of the reaction catalyzed by Cu²⁺ ions were investigated. The geometries of all species involved in the acetylation were constructed and identified. All geometry optimizations were performed at the DFT/B3LYP level of theory and with the MP2 method, adopting the 6-31+G* basis sets. Energies were calculated using the Mechanics-UFF method. Following the same procedure, the geometric parameters and energy of the reaction intermediate were identified. The calculations give an energy of 61.35 kcal/mol for the tetrahedral intermediate, and the activation energy for the reaction was 15.55 kcal/mol.

Keywords: amides, amines, DFT, MP2

Procedia PDF Downloads 271
17692 Refined Procedures for Second Order Asymptotic Theory

Authors: Gubhinder Kundhi, Paul Rilstone

Abstract:

Refined procedures for higher-order asymptotic theory for non-linear models are developed. These include a new method for deriving stochastic expansions of arbitrary order, new methods for evaluating the moments of polynomials of sample averages, and a new method for deriving the approximate moments of the stochastic expansions; an application of these techniques to obtaining improved inferences for the weak-instruments problem is considered. It is well established that Instrumental Variable (IV) estimators in the presence of weak instruments can be poorly behaved and, in particular, can be quite biased in finite samples. In our application, finite-sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher-order analytical corrections in these expansions. In a Monte Carlo experiment, the performance of these expansions is compared to the first-order approximation and to other methods commonly used in finite samples, such as the bootstrap.
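The finite-sample bias of IV estimators under weak instruments, which the expansions above are designed to correct, shows up readily in a small Monte Carlo sketch (the data-generating values below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0            # true structural coefficient
pi_1 = 0.05           # weak first-stage coefficient
n, reps = 200, 2000
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])             # endogeneity via correlated errors
chol = np.linalg.cholesky(cov)

est = []
for _ in range(reps):
    z = rng.normal(size=n)                          # instrument
    e = rng.normal(size=(n, 2)) @ chol.T
    x = pi_1 * z + e[:, 0]                          # endogenous regressor
    y = beta * x + e[:, 1]
    est.append((z @ y) / (z @ x))                   # just-identified IV estimate

median_bias = np.median(est) - beta                 # median bias toward OLS
```

With a weak first stage, the IV estimate is pulled toward the (inconsistent) OLS value, and its sampling distribution is far from normal, which is precisely the setting where Edgeworth and saddlepoint corrections matter.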

Keywords: edgeworth expansions, higher order asymptotics, saddlepoint expansions, weak instruments

Procedia PDF Downloads 270
17691 Linear Complementarity Based Approach for Unilateral Frictional Contact between Wheel and Beam

Authors: Muskaan Sethi, Arnab Banerjee, Bappaditya Manna

Abstract:

The present paper investigates suitable contact modelling for a wheel rolling over a flexible beam. A Linear Complementarity Problem (LCP) based approach has been adopted to simulate the contact dynamics of a rigid wheel traversing a flexible, simply supported Euler-Bernoulli beam. The adopted methodology is suitable for incorporating the effect of the frictional force acting at the wheel-beam interface. Moreover, the possibility of a gap opening between the two bodies has also been considered. The present method is based on a unilateral contact assumption, which assumes that no penetration occurs when the two bodies come into contact. This assumption helps to predict the contact between wheels and beams in a more practical sense. The proposed methodology is validated against previously published results and is found to be in good agreement. Further, the method is applied to simulate the contact between wheels and beams for various railway configurations, and different parametric studies are conducted to study the wheel-beam contact dynamics more thoroughly.
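A linear complementarity problem of the kind used for unilateral contact (no penetration: gap w ≥ 0, contact force z ≥ 0, and z·w = 0) can be solved with a simple projected Gauss-Seidel iteration. This is a minimal sketch on an assumed 2×2 system, not the paper's wheel-beam model:

```python
import numpy as np

def solve_lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 such that
    w = M z + q >= 0 and z . w = 0 (converges for, e.g., symmetric PD M)."""
    z = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            r = q[i] + M[i] @ z - M[i, i] * z[i]    # w_i excluding the z_i term
            z[i] = max(0.0, -r / M[i, i])
    return z

# Assumed example: z plays the role of contact forces, w of gaps
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, 1.0])
z = solve_lcp_pgs(M, q)
w = M @ z + q
```

At the solution, each index is either in contact (positive force, zero gap) or separated (zero force, positive gap), which is exactly the unilateral alternative described above.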

Keywords: contact dynamics, linear complementarity problem, railway dynamics, unilateral contact

Procedia PDF Downloads 93
17690 Evaluation of Uniformity for Gafchromic Sheets for Film Dosimetry

Authors: Fayzan Ahmed, Saad Bin Saeed, Abdul Qadir Jangda

Abstract:

Gafchromic™ sheets are extensively used for the QA of intensity-modulated radiation therapy and other in-vivo dosimetry. Intra-sheet non-uniformity of the scanner as well as the film causes undesirable fluctuations, which are reflected in the dosimetry. The aim of this study is to define a systematic and robust method to investigate the intra-sheet uniformity of unexposed Gafchromic sheets and of the region of interest (ROI) of the scanner. Sheets of lot no. A05151201 were scanned before and after the expiry period with an EPSON™ XL10000 scanner in transmission mode, landscape orientation and 72 dpi resolution. An ROI of 8 × 10 inches, equal to the sheet dimensions, in the center of the scanner was used to acquire images with full transmission, blocked transmission and with the sheets in place. 500 virtual grids, created in MATLAB®, were imported as a macro into ImageJ (1.49m, Wayne Rasband) to analyze the images. In order to remove edge effects, the outer 86 grids were excluded from the analysis. The standard deviations of the blocked transmission and full transmission are 0.38% and 0.66%, confirming the high uniformity of the scanner. Expired and non-expired sheets have standard deviations of 2.18% and 1.29%, showing that uniformity decreases after expiry. The results are promising and indicate a good potential of this method to be used as a uniformity check for the scanner and for unexposed Gafchromic sheets.
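The virtual-grid analysis can be sketched as follows: tile the ROI into 500 cells (a 20 × 25 grid, whose perimeter contains exactly 86 cells), drop the perimeter cells to remove edge effects, and report the relative standard deviation of the interior cell means. The simulated scan values below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated 72 dpi transmission scan of an unexposed sheet (values assumed)
scan = 40000.0 + 200.0 * rng.normal(size=(576, 720))

def intra_sheet_nonuniformity(img, rows=20, cols=25):
    """Relative standard deviation (%) of the interior virtual-grid cell means.
    A rows x cols grid gives 500 cells; dropping the perimeter removes 86."""
    h, w = img.shape
    cells = np.array([img[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols].mean()
                      for r in range(1, rows - 1)
                      for c in range(1, cols - 1)])
    return 100.0 * cells.std() / cells.mean()

nu = intra_sheet_nonuniformity(scan)
```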

Keywords: IMRT, film dosimetry, virtual grids, uniformity

Procedia PDF Downloads 482
17689 Conceptional Design of a Hyperloop Capsule with Linear Induction Propulsion System

Authors: Ahmed E. Hodaib, Samar F. Abdel Fattah

Abstract:

High-speed transportation is a growing concern. To develop high-speed rail and to increase high-speed efficiencies, the idea of the Hyperloop was introduced. The challenge is to overcome the difficulties of managing friction and air resistance, which become substantial as vehicles approach high speeds. In this paper, we present the methodologies of the capsule design that received a design concept innovation award at the SpaceX competition in January 2016. MATLAB scripts are written for the levitation and propulsion calculations and iterations. Computational Fluid Dynamics (CFD) is used to simulate the air flow around the capsule, considering the effect of the axial-flow air compressor and the levitation cushion on the air flow. The design procedures of a single-sided linear induction motor are analyzed in detail, and its geometric and magnetic parameters are determined. A structural design is introduced, and the Finite Element Method (FEM) is used to analyze the stresses in different parts. The configuration and arrangement of the components are illustrated. Moreover, comments on manufacturing are made.

Keywords: high-speed transportation, hyperloop, railways transportation, single-sided linear induction Motor (SLIM)

Procedia PDF Downloads 268
17688 Seismic Response and Sensitivity Analysis of Circular Shallow Tunnels

Authors: Siti Khadijah Che Osmi, Mohammed Ahmad Syed

Abstract:

Underground tunnels are among the most popular public facilities for various applications such as transportation, water transfer and network utilities. Experience from past earthquakes reveals that underground tunnels are also vulnerable components and may be damaged to varying degrees depending on the level of ground shaking and induced phenomena. In this paper, a numerical analysis is conducted to evaluate the sensitivity of two types of circular shallow tunnel lining models to wide-ranging changes in the geotechnical design parameters. A critical analysis is presented of the current methods of analysis, structural typology, ground motion characteristics, the effect of soil conditions, and the associated uncertainties on tunnel integrity. The response of the tunnel is evaluated through 2D non-linear finite element analysis, which critically assesses the impact of increasing levels of seismic load. The findings from this study offer significant information for improving methods to assess the vulnerability of underground structures.

Keywords: geotechnical design parameter, seismic response, sensitivity analysis, shallow tunnel

Procedia PDF Downloads 432
17687 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications related to the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed genuinely productive and efficient methods for the investigation of cognitive and communicative phenomena, of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA the position of the scholar is crucial from the point of view of exemplifying his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special, well-defined scientific method for researching particular types of language phenomena: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it demonstrated that the cognitive and communicative levels can be mapped in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 243
17686 Neuron Efficiency in Fluid Dynamics and Prediction of Groundwater Reservoirs' Properties Using Pattern Recognition

Authors: J. K. Adedeji, S. T. Ijatuyi

Abstract:

The application of a neural network using pattern recognition to study fluid dynamics and predict the properties of groundwater reservoirs is presented in this research. Conventional manual geophysical survey methods have failed in basement environments; hence the need for intelligent computing, such as the predictions of a neural network, is inevitable. A non-linear neural network with an XOR (exclusive OR) output in an 8-bit configuration has been used in this research to predict the nature of groundwater reservoirs and the fluid dynamics of a typical basement crystalline rock. The control variables are the apparent resistivity of the weathered layer (p1), the apparent resistivity of the fractured layer (p2), and the depth (h), while the dependent variable is the flow parameter (F = λ). The algorithm used in training the neural network is back-propagation, coded in the C++ language with 300 epoch runs. The neural network was able to map out the flow channels and detect how they behave to form viable storage within the strata. The neural network model showed that an important variable, gr (gravitational resistance), can be deduced from the elevation and the apparent resistivity pa. The model results from SPSS showed that the coefficients a, b and c are statistically significant, with reduced standard error, at the 5% level.
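An XOR mapping of the sort used at the network's output can be learned by a small back-propagation network. The sketch below is a minimal numpy illustration (the abstract's implementation is in C++; the layer width, step size and iteration count here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)        # XOR truth table

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)         # hidden layer (8 units)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)         # output layer
sig = lambda v: 1.0 / (1.0 + np.exp(-v))

losses = []
for _ in range(3000):                 # full-batch back-propagation, step size 1
    h = sig(X @ W1 + b1)
    y = sig(h @ W2 + b2)
    losses.append(float(((y - t) ** 2).mean()))
    dy = (y - t) * y * (1 - y)        # output-layer delta (MSE loss)
    dh = (dy @ W2.T) * h * (1 - h)    # hidden-layer delta
    W2 -= h.T @ dy; b2 -= dy.sum(axis=0)
    W1 -= X.T @ dh; b1 -= dh.sum(axis=0)
```

Because XOR is not linearly separable, a hidden layer is required, which is why the abstract emphasizes a non-linear network.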

Keywords: gravitational resistance, neural network, non-linear, pattern recognition

Procedia PDF Downloads 205
17685 Convergence Analysis of Training Two-Hidden-Layer Partially Over-Parameterized ReLU Networks via Gradient Descent

Authors: Zhifeng Kong

Abstract:

Over-parameterized neural networks have attracted a great deal of attention in recent deep learning theory research, as they challenge the classic perspective on over-fitting when the model has excessive parameters, and they have gained empirical success in various settings. While a number of theoretical works have been presented to demystify the properties of such models, their convergence properties are still far from being thoroughly understood. In this work, we study the convergence properties of training two-hidden-layer, partially over-parameterized, fully connected networks with the Rectified Linear Unit (ReLU) activation via gradient descent. To our knowledge, this is the first theoretical work to address the convergence properties of deep over-parameterized networks without the equally-wide-hidden-layer assumption and other unrealistic assumptions. We provide a probabilistic lower bound on the widths of the hidden layers and prove a linear convergence rate for gradient descent. We also conducted experiments on synthetic and real-world datasets to validate our theory.

Keywords: over-parameterization, rectified linear units ReLU, convergence, gradient descent, neural networks

Procedia PDF Downloads 132
17684 Adaptive Kalman Filter for Fault Diagnosis of Linear Parameter-Varying Systems

Authors: Rajamani Doraiswami, Lahouari Cheded

Abstract:

Fault diagnosis of Linear Parameter-Varying (LPV) systems using an adaptive Kalman filter is proposed. The LPV model comprises scheduling parameters and emulator parameters. The scheduling parameters are chosen such that they are capable of tracking variations in the system model resulting from changes in the operating regimes. The emulator parameters, on the other hand, simulate variations in the subsystems during the identification phase and have a negligible effect during the operational phase. The nominal model and the influence vectors, which are the gradients of the feature vector with respect to the emulator parameters, are identified off-line from a number of emulator-parameter-perturbed experiments. A Kalman filter is designed using the identified nominal model. As the system varies, the Kalman filter model is adapted using the scheduling variables. The residual is employed for fault diagnosis. The proposed scheme is successfully evaluated on a simulated system as well as on a physical process-control system.
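The residual-based diagnosis step can be illustrated with a scalar Kalman filter: the innovation sequence stays small under nominal operation and grows when a fault alters the measurements. The first-order system and the injected sensor bias below are illustrative stand-ins, not the paper's LPV model or its adaptation scheme.

```python
import random

random.seed(2)

# Scalar system: x_k = A*x_{k-1} + w,  y_k = x_k + v  (illustrative parameters)
A, Q, R = 0.9, 0.01, 0.04

def kalman_residuals(measurements):
    """Run a scalar Kalman filter and return the innovation (residual) sequence."""
    x, p = 0.0, 1.0
    residuals = []
    for y in measurements:
        # Predict
        x, p = A * x, A * A * p + Q
        # Innovation, gain, and update
        r = y - x
        k = p / (p + R)
        x, p = x + k * r, (1 - k) * p
        residuals.append(r)
    return residuals

# Simulate: nominal behaviour, then a sensor-bias fault injected at k = 100
state, ys = 0.0, []
for k in range(200):
    state = A * state + random.gauss(0, Q ** 0.5)
    bias = 1.0 if k >= 100 else 0.0
    ys.append(state + bias + random.gauss(0, R ** 0.5))

res = kalman_residuals(ys)
# Diagnosis signal: mean absolute residual before vs. after the fault
nominal = sum(abs(r) for r in res[:100]) / 100
faulty = sum(abs(r) for r in res[100:]) / 100
```

In a practical scheme the residual would be compared against a statistically chosen threshold; here the before/after contrast makes the point.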

Keywords: identification, linear parameter-varying systems, least-squares estimation, fault diagnosis, Kalman filter, emulators

Procedia PDF Downloads 489
17683 A Study of User Awareness and Attitudes Towards Civil-ID Authentication in Oman’s Electronic Services

Authors: Raya Al Khayari, Rasha Al Jassim, Muna Al Balushi, Fatma Al Moqbali, Said El Hajjar

Abstract:

This study utilizes linear regression analysis to investigate the correlation between user account passwords and the probability of civil ID exposure, offering statistical insights into civil ID security. The study employs multiple linear regression (MLR) analysis to further investigate the elements that influence consumers’ views of civil ID security. This aims to increase awareness and improve preventive measures. The results obtained from the MLR analysis provide a thorough comprehension and can guide specific educational and awareness campaigns aimed at promoting improved security procedures. In summary, the study’s results offer significant insights for improving existing security measures and developing more efficient tactics to reduce risks related to civil ID security in Oman. By identifying key factors that impact consumers’ perceptions, organizations can tailor their strategies to address vulnerabilities effectively. Additionally, the findings can inform policymakers on potential regulatory changes to enhance civil ID security in the country.
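As a minimal illustration of the multiple linear regression (MLR) machinery the study relies on, the sketch below fits y = b0 + b1·x1 + b2·x2 by ordinary least squares via the normal equations. The toy data are invented for the example; the study's variables and dataset are not reproduced here.

```python
def solve(a, b):
    """Solve the linear system a·x = b by Gauss-Jordan elimination with pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(n):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [v - f * w for v, w in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

def mlr(xs, ys):
    """Ordinary least squares via the normal equations (X'X) b = X'y."""
    rows = [[1.0] + list(x) for x in xs]
    p = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(p)]
    return solve(xtx, xty)

# Toy data generated exactly from y = 1 + 2*x1 - 3*x2, so OLS recovers it
xs = [(x1, x2) for x1 in range(4) for x2 in range(4)]
ys = [1 + 2 * x1 - 3 * x2 for x1, x2 in xs]
coef = mlr(xs, ys)   # approximately [1.0, 2.0, -3.0]
```

Statistical packages (as the study's MLR analysis would use) additionally report standard errors and p-values; this sketch only recovers the coefficients.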

Keywords: civil-id disclosure, awareness, linear regression, multiple regression

Procedia PDF Downloads 45
17682 NR/PEO Block Copolymer: A Chelating Exchanger for Metal Ions

Authors: M. S. Mrudula, M. R. Gopinathan Nair

Abstract:

In order to utilize natural rubber for developing new green polymeric materials for specialty applications, we have prepared natural rubber and polyethylene oxide based polymeric networks by the two-shot method. The polymeric networks thus formed have been used as chelating exchangers for metal-ion binding. Chelating exchangers are, in general, coordinating copolymers containing one or more electron-donor atoms such as N, S, O, and P that can form coordinate bonds with metals. Hydrogels are water-swollen networks of hydrophilic homopolymers or copolymers. They have attracted great interest because different chelating groups can readily be incorporated into the polymeric networks. Such polymeric hydrogels are promising materials in the field of hydrometallurgical applications and water purification due to their chemical stability. The current study discusses the swelling response of the polymeric networks as a function of time, temperature, pH, and [NaCl], together with sorption studies. Equilibrium swelling has been observed to depend on both the structural aspects of the polymers and environmental factors. Metal-ion sorption shows that these polymeric networks can be used for the removal, separation, and enrichment of metal ions from aqueous solutions and can play an important role in the environmental remediation of municipal and industrial wastewater.
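The swelling response mentioned above is commonly quantified by the equilibrium swelling ratio. A minimal sketch using the standard gravimetric definition follows; the weights are illustrative, not data from the study.

```python
# Equilibrium swelling ratio, a standard measure in swelling studies:
# S (%) = 100 * (W_swollen - W_dry) / W_dry
def swelling_ratio(w_swollen, w_dry):
    return 100.0 * (w_swollen - w_dry) / w_dry

# Illustrative sample weights in grams (not data from the study)
s = swelling_ratio(w_swollen=2.75, w_dry=1.10)   # -> 150.0 %
```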

Keywords: block copolymer, adsorption, chelating exchanger, swelling study, polymer, metal complexes

Procedia PDF Downloads 331
17681 Copolymers of Epsilon-Caprolactam Received via Anionic Polymerization in the Presence of Polypropylene Glycol Based Polymeric Activators

Authors: Krasimira N. Zhilkova, Mariya K. Kyulavska, Roza P. Mateva

Abstract:

The anionic polymerization of ε-caprolactam (CL) with bifunctional activators has been extensively studied as an effective and beneficial method of improving the chemical resistance, impact resistance, elasticity, and other mechanical properties of polyamide 6 (PA6). In the presence of activators or macroactivators (MAs), also called polymeric activators (PACs), the anionic polymerization of lactams proceeds rapidly in the temperature range of 130-180 °C, well below the melting point of PA6 (220 °C), thus permitting the direct manufacturing of the copolymer product together with the desired modifications of the polyamide properties. Copolymers of PA6 with an elastic polypropylene glycol (PPG) middle block in the main chain were successfully synthesized via activated anionic ring-opening polymerization (ROP) of CL. Using novel PACs based on PPG polyols of different molecular weights, the anionic ROP of CL was realized and investigated in the presence of a basic initiator, the sodium salt of CL (NaCL). The PACs were synthesized as N-carbamoyllactam derivatives of hydroxyl-terminated PPG functionalized with isophorone diisocyanate [IPh, 5-isocyanato-1-(isocyanatomethyl)-1,3,3-trimethylcyclohexane] and then end-blocked with CL units via an addition reaction. The block copolymers were analyzed and confirmed by 1H-NMR and FT-IR spectroscopy. The influence of the CL/PACs ratio in the feed, the length of the PPG segments, and the polymerization conditions on the kinetics of the anionic ROP, on the average molecular weight, and on the structure of the obtained block copolymers was investigated. The structure and phase behaviour of the copolymers were explored with differential scanning calorimetry, wide-angle X-ray diffraction, thermogravimetric analysis, and dynamic mechanical thermal analysis. The dependence of the crystallinity on the PPG content incorporated into the copolymer main backbone was estimated. Additionally, the mechanical properties of the obtained copolymers were studied by the notched impact test.
From the investigation performed in this study, it can be concluded that using PPG-based PACs under the chosen ROP conditions leads to well-defined PA6-b-PPG-b-PA6 copolymers with improved impact resistance.
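The DSC-based crystallinity estimate is typically computed from the measured melting enthalpy, corrected for the PA6 weight fraction of the copolymer. The sketch below uses the commonly cited reference enthalpy of roughly 230 J/g for 100% crystalline PA6; that value and the measured inputs are assumptions for illustration, not data from the study.

```python
# Degree of crystallinity from DSC, corrected for the PA6 weight fraction:
# X_c (%) = 100 * dH_m / (dH_m0 * w_PA6)
# dH_m0 ~ 230 J/g is a commonly cited literature value for 100% crystalline
# PA6; dH_m and w_PA6 below are illustrative, not measurements from the study.
def crystallinity(dh_m, w_pa6, dh_m0=230.0):
    return 100.0 * dh_m / (dh_m0 * w_pa6)

xc = crystallinity(dh_m=48.3, w_pa6=0.70)   # -> 30.0 %
```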

Keywords: anionic ring opening polymerization, caprolactam, polyamide copolymers, polypropylene glycol

Procedia PDF Downloads 400
17680 Cars Redistribution Optimization Problem in the Free-Float Car-Sharing

Authors: Amine Ait-Ouahmed, Didier Josselin, Fen Zhou

Abstract:

Free-float car-sharing is a one-way car-sharing service in which cars are available anytime and anywhere in the streets, so that no dedicated stations are needed: after driving a car, you can park it anywhere. This car-sharing system creates an imbalanced car distribution in the cities, which can be corrected by staff agents redistributing cars. In this paper, we aim to solve the car-reservation and agent-traveling problem so that the number of successful car reservations is maximized. Besides, we also aim to minimize the distance traveled by agents for car redistribution. To this end, we present a mixed integer linear programming formulation for the car-sharing problem.
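On a toy instance, the trade-off between maximizing successful reservations and minimizing redistribution distance can be illustrated by brute force. The paper's mixed integer linear program is not reproduced here; the positions, walking threshold, and one-car-per-reservation assignment model below are invented for the example.

```python
from itertools import permutations

# Toy instance: positions on a line (illustrative, not the paper's model)
cars = [0.0, 2.0, 9.0]
reservations = [1.0, 8.5, 20.0]   # pickup positions requested by users
MAX_WALK = 1.5  # a reservation succeeds if its car is within this distance

def best_assignment(cars, reqs):
    """Brute-force the car-to-reservation assignment that maximizes successful
    reservations, breaking ties by minimal total relocation distance."""
    best = None
    for perm in permutations(cars):
        served, dist = 0, 0.0
        for c, r in zip(perm, reqs):
            d = abs(c - r)
            if d <= MAX_WALK:
                served += 1
                dist += d
        cand = (served, -dist)          # more served first, then less distance
        if best is None or cand > best:
            best = cand
    return best[0], -best[1]

served, dist = best_assignment(cars, reservations)   # 2 reservations served
```

A MILP solver replaces this enumeration at realistic scale; the two-part objective (served count, then distance) mirrors the paper's twin goals.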

Keywords: one-way car-sharing, vehicle redistribution, car reservation, linear programming

Procedia PDF Downloads 336
17679 In- and Out-Of-Sample Performance of Non-Symmetric Models in International Price Differential Forecasting in a Commodity Country Framework

Authors: Nicola Rubino

Abstract:

This paper presents an analysis of the nominal exchange rate movements of a group of commodity-exporting countries relative to the US dollar. Using a series of Unrestricted Self-Exciting Threshold Autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US dollar CPI. Out-of-sample forecast accuracy is evaluated by calculating mean absolute error measures on the basis of 253-month rolling-window forecasts, and the comparison is extended to three additional models, namely a logistic smooth transition regression (LSTAR), an additive non-linear autoregressive model (AAR), and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR non-linearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five of the sixteen countries considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have rather poor relative forecasting performance, especially when compared to alternative non-linear specifications. Finally, by analyzing the implied half-lives of the estimated coefficients, our results confirm the presence, in the spirit of arbitrage-band adjustment, of band convergence with inner unit-root behaviour in five of the sixteen countries analyzed.
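The rolling-window evaluation of one-step-ahead forecasts by mean absolute error can be sketched as follows. The simulated threshold series, the no-intercept AR(1) fit, and the window width are illustrative stand-ins for the paper's SETAR models and CPI differential data.

```python
import random

random.seed(3)

# Simulate a series with SETAR-like behaviour: a unit root inside the band
# (|y| < 0.5) and mean reversion outside it (illustrative parameters).
y = [0.0]
for _ in range(300):
    prev = y[-1]
    phi = 1.0 if abs(prev) < 0.5 else 0.7
    y.append(phi * prev + random.gauss(0, 0.2))

def ar1_fit(window):
    """OLS slope for y_t = phi * y_{t-1} (no intercept, for brevity)."""
    num = sum(a * b for a, b in zip(window[1:], window[:-1]))
    den = sum(a * a for a in window[:-1])
    return num / den

def rolling_mae(series, width=100):
    """Mean absolute error of one-step-ahead forecasts, refitting the AR(1)
    model on each rolling window before forecasting the next observation."""
    errs = []
    for t in range(width, len(series) - 1):
        phi = ar1_fit(series[t - width:t + 1])
        errs.append(abs(series[t + 1] - phi * series[t]))
    return sum(errs) / len(errs)

mae = rolling_mae(y)
```

The paper compares such MAE figures across SETAR, LSTAR, AAR, and NNET specifications; this sketch shows only the linear benchmark's evaluation loop.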

Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points

Procedia PDF Downloads 272
17678 Polymeric Micelles Based on Block Copolymer α-Tocopherol Succinate-g-Carboxymethyl Chitosan for Tamoxifen Delivery

Authors: Sunil K. Jena, Sanjaya K. Samal, Mahesh Chand, Abhay T. Sangamwar

Abstract:

Tamoxifen (TMX) and its analogues are approved as a first-line therapy for the treatment of estrogen receptor-positive tumors. However, the clinical development of TMX has been hampered by its low bioavailability and severe hepatotoxicity. Herein, we attempt to design a new drug delivery vehicle that could enhance the pharmacokinetic performance of TMX. Initially, high-molecular-weight carboxymethyl chitosan was hydrolyzed to low-molecular-weight carboxymethyl chitosan (LMW CMC) with hydrogen peroxide under the catalysis of phosphotungstic acid. Amphiphilic block copolymers of LMW CMC were synthesized via an amidation reaction between the carboxyl group of α-tocopherol succinate (TS) and an amine group of LMW CMC. These amphiphilic block copolymers self-assembled into nanosized core-shell micelles in aqueous medium. The critical micelle concentration (CMC) decreased with increasing substitution of TS on LMW CMC, ranging from 1.58 × 10⁻⁶ to 7.94 × 10⁻⁸ g/mL. A maximum TMX loading of 8.08 ± 0.98% was achieved with Cmc-TS4.5 (TMX/Cmc-TS4.5 at a 1:8 weight ratio). Both blank and TMX-loaded polymeric micelles (TMX-PM) of Cmc-TS4.5 exhibit a spherical shape with particle sizes below 200 nm. TMX-PM was found to be stable under gastrointestinal conditions, releasing only 44.5% of the total drug content within the first 72 h in simulated gastric fluid (SGF), pH 1.2. The presence of pepsin did not significantly increase TMX release in SGF, pH 1.2 (only about 46.2% was released within the first 72 h), suggesting its inability to cleave the peptide bond. In contrast, the release of TMX from TMX-PM4.5 in SIF, pH 6.8 (without pancreatin) was slow and sustained: only about 10.43% of the total drug content was released within the first 30 min, and about 12.41% within the first 72 h. The presence of pancreatin in SIF, pH 6.8 improved drug release: about 28.09% of the incorporated TMX was released in 72 h.
A cytotoxicity study demonstrated that TMX-PM exhibited time-delayed cytotoxicity in human MCF-7 breast cancer cells. Pharmacokinetic studies in Sprague-Dawley rats revealed a remarkable (1.87-fold) increase in oral bioavailability, with significant (p < 0.0001) enhancement of the AUC0-72 h, t1/2, and MRT of TMX-PM4.5 over those of the TMX suspension. Thus, the results suggest that Cmc-TS micelles are a promising carrier for TMX delivery.
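The reported 1.87-fold increase corresponds to the standard relative-bioavailability ratio of the AUC values. A trivial sketch follows; the AUC numbers are invented to reproduce the reported ratio and are not data from the study.

```python
# Relative oral bioavailability from non-compartmental AUC values:
# F_rel = AUC(test formulation) / AUC(reference formulation)
# The AUC values below are illustrative, chosen only to reproduce the
# reported 1.87-fold increase; they are not data from the study.
def relative_bioavailability(auc_test, auc_ref):
    return auc_test / auc_ref

f_rel = relative_bioavailability(auc_test=935.0, auc_ref=500.0)   # -> 1.87
```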

Keywords: carboxymethyl chitosan, d-α-tocopherol succinate, pharmacokinetic, polymeric micelles, tamoxifen

Procedia PDF Downloads 319
17677 A Combined Error Control with Forward Euler Method for Dynamical Systems

Authors: R. Vigneswaran, S. Thilakanathan

Abstract:

Variable time-stepping algorithms for solving dynamical systems perform poorly in long-time computations that pass close to a fixed point. To overcome this difficulty, several authors have considered phase-space error controls for the numerical simulation of dynamical systems. In one generalized phase-space error control, a step-size selection scheme was proposed that allows this error control to be incorporated into the standard adaptive algorithm as an extra constraint at negligible extra computational cost. For this generalized error control, the forward Euler method applied to a linear system whose coefficient matrix has real, negative eigenvalues has already been analyzed. In this paper, this result is extended to linear systems whose coefficient matrices have complex eigenvalues with negative real parts. Some theoretical results are obtained, and numerical experiments are carried out to support them.
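A minimal sketch of adaptive forward Euler on a linear system whose coefficient matrix has complex eigenvalues with negative real parts (here −0.5 ± i) follows. The crude local-error estimate and the halve/grow step rule are illustrative; they are not the generalized phase-space error control analyzed in the paper.

```python
# x' = A x with eigenvalues -0.5 ± i (spiral decay toward the fixed point 0)
A = [[-0.5, 1.0], [-1.0, -0.5]]

def f(x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def norm(v):
    return (v[0] ** 2 + v[1] ** 2) ** 0.5

def euler_adaptive(x, t_end=20.0, h=0.1, h_max=0.5, tol=1e-3):
    """Forward Euler with a simple adaptive step: estimate the local error from
    the change in f over one step, halve the step on rejection, grow it when
    the estimate is comfortably below tolerance (capped at h_max)."""
    t = 0.0
    while t < t_end:
        hs = min(h, t_end - t)
        fx = f(x)
        x_trial = [x[0] + hs * fx[0], x[1] + hs * fx[1]]
        df = [a - b for a, b in zip(f(x_trial), fx)]
        err = 0.5 * hs * norm(df)     # crude local truncation error estimate
        if err > tol and hs > 1e-6:
            h = hs * 0.5              # reject: halve the step
            continue
        x, t = x_trial, t + hs        # accept the step
        if err < tol / 4:
            h = min(hs * 1.5, h_max)  # grow the step cautiously
    return x

x_final = euler_adaptive([1.0, 0.0])
```

The cap h_max keeps the step inside the forward Euler stability region for these eigenvalues, so the numerical trajectory decays toward the fixed point as the true solution does.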

Keywords: adaptivity, fixed point, long time simulations, stability, linear system

Procedia PDF Downloads 305
17676 A Linear Programming Approach to Assist Roster Construction Under a Salary Cap

Authors: Alex Contarino

Abstract:

Professional sports leagues often have a "free agency" period, during which teams may sign players with expiring contracts. To promote parity, many leagues operate under a salary cap that limits the amount teams can spend on players' salaries in a given year. Similarly, in fantasy sports leagues, salary-cap drafts are a popular method for selecting players. In order to sign a free agent in either setting, teams must bid against one another for the player's services while ensuring that the sum of their players' salaries stays below the salary cap. This paper models the bidding process for a free agent as a constrained optimization problem that can be solved using linear programming. The objective is to determine the largest bid that a team should offer the player, subject to the constraint that the value of signing the player must exceed the value of using the salary-cap space elsewhere. Iteratively solving this optimization problem for each available free agent provides teams with an effective framework for maximizing the talent on their rosters. The utility of this approach is demonstrated for team-sport roster construction and fantasy-sport drafts, using recent data sets from both settings.
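Under a stylized value model, the bid rule described above has a closed-form solution, which makes the LP's logic easy to see. All quantities below (a player's value in "wins", the value of alternative cap use per salary dollar, remaining cap space) are invented for illustration; the paper's actual formulation is not reproduced.

```python
# Stylized version of the bid rule: offer the largest bid b such that the
# player's value exceeds the value of spending the same cap space elsewhere
# (v_alt per salary dollar), and b fits under the remaining cap space.
# The LP  max b  s.t.  player_value >= v_alt * b,  0 <= b <= cap_space
# reduces to the closed form below. All numbers are illustrative.
def max_bid(player_value, v_alt, cap_space):
    return min(cap_space, player_value / v_alt)

# e.g. a player worth 12 "wins", with alternative cap use worth 0.8 wins
# per $1M and $10M of cap space remaining:
bid = max_bid(player_value=12.0, v_alt=0.8, cap_space=10.0)   # -> 10.0 ($M)
```

With multiple free agents and roster slots, this single-variable rule becomes a genuine LP solved iteratively per available player, as the abstract describes.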

Keywords: linear programming, optimization, roster management, salary cap

Procedia PDF Downloads 104