Search results for: classical conditioning
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1281

231 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms

Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano

Abstract:

In this article, we deal with a variant of the classical course timetabling problem that has practical applications in many areas of education; in particular, we are interested in high school remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies: a student might be under-prepared in an entire course, or only in part of it. The limited availability of funds, time, and teachers often requires schools to choose which courses and/or which teaching units to activate. Schools therefore need to model the training offer and the related timetabling with the goal of ensuring the highest possible teaching quality while meeting the financial, time, and resource constraints mentioned above. Moreover, some prerequisites between the teaching units must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the many peculiar constraints inevitably increase the complexity of the mathematical model, so a general-purpose solver can handle only small instances, while solving real-life-sized instances requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach in which a fast constructive procedure is used to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%.
The heuristic algorithm, on the other hand, is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases improves on the objective function value obtained by the MIP model, by between 18% and 66%.
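The flavor of a fast constructive procedure for this kind of selection problem can be sketched as follows. This is an illustrative greedy heuristic, not the authors' algorithm; the unit names, costs, quality scores, and the quality-per-cost rule are all hypothetical.

```python
# Greedy sketch: activate teaching units under a budget while
# respecting prerequisites between units. Hypothetical data and rule.
def select_units(units, budget):
    """units: dict name -> (cost, quality, set of prerequisite names)."""
    active, spent = set(), 0.0
    # Consider units in decreasing quality-per-cost order.
    for name in sorted(units, key=lambda u: units[u][1] / units[u][0], reverse=True):
        cost, _, prereqs = units[name]
        # Activate only if prerequisites are already active and budget allows.
        if prereqs <= active and spent + cost <= budget:
            active.add(name)
            spent += cost
    return active, spent

units = {
    "algebra_1": (2.0, 5.0, set()),
    "algebra_2": (2.0, 4.0, {"algebra_1"}),
    "grammar_1": (3.0, 6.0, set()),
    "essay":     (4.0, 3.0, {"grammar_1"}),
}
chosen, cost = select_units(units, budget=7.0)
```

A real implementation would additionally have to assign the activated units to time slots and teachers, which is where the MIP model of the abstract comes in.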

Keywords: heuristic, MIP model, remedial course, school, timetabling

Procedia PDF Downloads 584
230 Fully Coupled Porous Media Model

Authors: Nia Mair Fry, Matthew Profit, Chenfeng Li

Abstract:

This work focuses on the development and implementation of a fully implicit-implicit, coupled mechanical deformation and porous flow finite element software tool. The fully implicit software accurately reproduces classical analytical solutions such as the Terzaghi consolidation problem, as well as analytical solutions less well known in the literature, such as Gibson's sedimentation-rate problem and Coussy's wellbore-stability problems for poroelastic rocks. The mechanical volume strains are transferred to the porous-flow governing equation within an implicit framework. This overcomes several current industrial issues arising from the common practice of using explicit solvers for the mechanical governing equations and implicit solvers only on the porous-flow side, which can lead to instability and non-convergence in the coupled system and to results with an appreciable degree of error. A fully monolithic implicit-implicit coupled porous media code solves the seepage and mechanical equations in one matrix system under a unified time-stepping scheme, which makes the problem definition much easier. An explicit solver also requires additional input, such as a damping coefficient and a mass-scaling factor, which a fully implicit solution circumvents. Further, accuracy is improved because the pore-fluid pressure solution does not depend on predictor-corrector methods, though at the potential cost of reduced stability. To test this fully monolithic porous media code, the fully implicit coupled scheme is compared against an existing staggered explicit-implicit coupled scheme across a range of geotechnical problems.
These cases include 1) Biot coefficient calculation, 2) consolidation theory with the Terzaghi analytical solution, 3) sedimentation theory with the Gibson analytical solution, and 4) Coussy wellbore poroelastic analytical solutions.
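The Terzaghi benchmark named above has a simple closed form that any such code can be checked against: the average degree of consolidation U as a series in the dimensionless time factor Tv. A minimal sketch of that analytical solution:

```python
import math

# Terzaghi's one-dimensional consolidation theory: average degree of
# consolidation U(Tv) = 1 - sum over m of (2/M^2) exp(-M^2 Tv),
# with M = pi (2m + 1) / 2.
def degree_of_consolidation(Tv, terms=100):
    s = 0.0
    for m in range(terms):
        M = math.pi * (2 * m + 1) / 2.0
        s += (2.0 / M**2) * math.exp(-M**2 * Tv)
    return 1.0 - s

# Classical reference values: U ~ 50% at Tv ~ 0.197, U ~ 90% at Tv ~ 0.848.
u50 = degree_of_consolidation(0.197)
u90 = degree_of_consolidation(0.848)
```

Agreement of a coupled code's pore-pressure dissipation curve with this series is the standard first validation step for the consolidation case.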

Keywords: coupled, implicit, monolithic, porous media

Procedia PDF Downloads 110
229 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination, and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writing. Although previous studies have made great efforts to devise various methods, their performance, especially in terms of accuracy, falls short, and room for improvement remains wide open. The proposed technique employs optimal-codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks: beginning and ending. Unlike most classical codebook-based approaches, which segment the writing into graphemes, this study fragments particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to split the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create a beginning cluster and, similarly, the ending strokes are grouped to create an ending cluster. These two clusters lead to two codebooks (beginning and ending), built by choosing the center of each group of similar fragments. The writings under study are then represented by the probability of occurrence of codebook patterns, and this probability distribution is used to characterize each writer. Two writings are compared by computing the distance between their respective probability distributions. Evaluations were carried out on the standard ICFHR dataset of 206 writers, using the beginning and ending codebooks separately.
The ending codebook achieved the highest identification rate, 98.23%, which is the best result so far on the ICFHR dataset.
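The representation-and-comparison step described above can be sketched in a few lines. The abstract does not specify which distance measure is used, so city-block (Manhattan) distance is shown here as one common illustrative choice; the fragment assignments are hypothetical.

```python
# Sketch: represent a writing by the probability of occurrence of
# codebook patterns, then compare two writings by a distance between
# their distributions. Data and distance choice are illustrative.
from collections import Counter

def occurrence_distribution(fragment_labels, codebook_size):
    """Map a writing's fragment-to-codebook assignments to a probability vector."""
    counts = Counter(fragment_labels)
    total = len(fragment_labels)
    return [counts.get(k, 0) / total for k in range(codebook_size)]

def distance(p, q):
    # City-block distance between two probability vectors.
    return sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical fragment assignments for two writings over a 4-pattern codebook.
writer_a = occurrence_distribution([0, 0, 1, 2, 2, 2], 4)
writer_b = occurrence_distribution([0, 1, 1, 3], 4)
d = distance(writer_a, writer_b)
```

Identification then amounts to assigning a query writing to the enrolled writer whose distribution is nearest under this distance.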

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 488
228 A Comparative Electroencephalogram Study: Children with Autism Spectrum Disorder and Healthy Children Evaluate Classical Music in Different Ways

Authors: Galina Portnova, Kseniya Gladun

Abstract:

Our EEG experiment involved 27 children with ASD (mean age 6.13 years; mean CARS score 32.41) and 25 healthy children (mean age 6.35 years). Six types of musical stimulation were presented, including compositions by Gluck, Javier-Naida, Kenny G, Chopin, and other classical pieces. Children with autism showed an orientation reaction to the music and gave behavioral responses to the different types of music; some of them were able to rate the stimulation on scales. The participants were instructed to remain calm. Brain electrical activity was recorded using a 19-channel EEG recording device, 'Encephalan' (Taganrog, Russia). EEG epochs lasting 150 s were analyzed using the EEGLab plugin for MATLAB (MathWorks Inc.). For EEG analysis we used the Fast Fourier Transform (FFT) and analyzed the peak alpha frequency (PAF), the correlation dimension D2, and the stability of rhythms. To express the dynamics of desynchronization of the different rhythms, we calculated the envelope of the EEG signal with the Hilbert transform, using both the whole frequency range and a set of narrowband filters. Our data showed that healthy children exhibited similar EEG spectral changes during musical stimulation and described similar feelings induced by the musical fragments. The exception was the 'Chopin. Prelude' fragment (no. 6), which induced different subjective feelings, behavioral reactions, and EEG spectral changes in children with ASD compared to healthy children. The correlation dimension D2 was significantly lower in children with ASD than in healthy children during musical stimulation. The Hilbert envelope frequency was reduced in both groups during musical compositions 1, 3, 5, and 6 compared to the background. During musical fragments 2 and 4 ('terrible'), a lower Hilbert envelope frequency was observed only in children with ASD and correlated with the severity of the disease.
The alpha peak frequency during these compositions was lower than background in healthy children and, conversely, higher in children with ASD.
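The envelope computation mentioned in the abstract rests on the analytic signal: zero the negative frequencies of the spectrum, double the positive ones, transform back, and take the magnitude. A minimal sketch follows, using a plain O(n²) DFT for clarity; in practice an FFT-based routine such as scipy.signal.hilbert would be used on real EEG data.

```python
import cmath, math

def dft(x, sign=-1):
    # Plain discrete Fourier transform (sign=-1 forward, sign=+1 for inverse core).
    n = len(x)
    return [sum(x[t] * cmath.exp(sign * 2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def envelope(x):
    n = len(x)
    X = dft(x)
    # Build the analytic signal's spectrum: keep DC and Nyquist,
    # double positive frequencies, zero negative frequencies.
    for k in range(1, n // 2):
        X[k] *= 2.0
    for k in range(n // 2 + 1, n):
        X[k] = 0.0
    analytic = [v / n for v in dft(X, sign=+1)]  # inverse DFT
    return [abs(v) for v in analytic]

# A pure cosine with a whole number of cycles has a flat envelope equal
# to its amplitude.
sig = [math.cos(2 * math.pi * 8 * t / 64) for t in range(64)]
env = envelope(sig)
```

Applying this after narrowband filtering, as the study does, yields the instantaneous amplitude of each rhythm, from which an envelope frequency can be estimated.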

Keywords: electroencephalogram (EEG), emotional perception, ASD, musical perception, childhood Autism rating scale (CARS)

Procedia PDF Downloads 258
227 Geopolymerization Methods for Clay Soils Treatment

Authors: Baba Hassane Ahmed Hisseini, Abdelkrim Bennabi, Rabah Hamzaoui, Lamis Makki, Gaetan Blanck

Abstract:

Most clay soils are known as problematic soils because their water content varies greatly over time: they are subject to shrinkage and swelling, causing stability problems for civil engineering structures. They are often excavated and placed in storage areas, giving rise to the opening of new quarries. This practice has become obsolete; environmental protection is leading us to think differently and is opening the way to new research on improving the performance of such clay soils so that they can be reused in construction. The solidification and stabilization technique improves the properties of poor-quality soils, transforming them into materials with performance suitable for new use in civil engineering rather than excavating them and storing them in discharge areas. In our case, the geopolymerization method is used for poor clay soils classified as high-plasticity soil, class A4 according to the French standard NF P11-300, for which classical treatment methods with cement or lime are not efficient. Our work concerns a clay soil treatment study using raw materials as additives for solidification and stabilization. The geopolymers are synthesized from aluminosilicate materials such as fly ash, metakaolin, or blast furnace slag, activated by an alkaline solution based on sodium hydroxide (NaOH), sodium silicate (Na2SiO3), or a mixture of the two. In this study, we present the evolution of the mechanical properties of the clay soil (type A4) under geopolymerization treatment. Various mix designs of aluminosilicate materials and alkaline solutions were tested at different percentages and different curing times of 1, 7, and 28 days. The compressive strength of the treated soil reached up to three times that of the untreated clayey soil, and this improvement is associated with a geopolymerization mechanism.
The highest compressive strength was found with metakaolin at 28 days.

Keywords: treatment and valorization of clay-soil, solidification and stabilization, alkali-activation of co-product, geopolymerization

Procedia PDF Downloads 136
226 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost-zero-carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient, and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near-real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load distributed from the heat generation plant to the connected buildings through the heat pipe network. Two case studies are considered: one for Vransko, Slovenia and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks, specifically recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. We first develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the temperature values predicted by the weather forecasting models.
The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
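The LSTM cell at the heart of such forecasting models can be sketched at its smallest scale: one time step of a single cell, in which gates decide what is forgotten, stored, and emitted. The scalar weights below are arbitrary illustration values; real models learn vector-valued parameters from the temperature and demand series.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step for a single scalar cell (illustrative weights)."""
    f = sigmoid(W["f_x"] * x + W["f_h"] * h_prev + W["f_b"])    # forget gate
    i = sigmoid(W["i_x"] * x + W["i_h"] * h_prev + W["i_b"])    # input gate
    g = math.tanh(W["g_x"] * x + W["g_h"] * h_prev + W["g_b"])  # candidate value
    o = sigmoid(W["o_x"] * x + W["o_h"] * h_prev + W["o_b"])    # output gate
    c = f * c_prev + i * g   # new cell state (long-term memory)
    h = o * math.tanh(c)     # new hidden state (short-term output)
    return h, c

# All weights set to 0.5 purely for illustration.
W = {k: 0.5 for k in ["f_x", "f_h", "f_b", "i_x", "i_h", "i_b",
                      "g_x", "g_h", "g_b", "o_x", "o_h", "o_b"]}
h, c = 0.0, 0.0
for x in [0.1, 0.4, 0.2]:  # e.g. a short run of normalized outdoor temperatures
    h, c = lstm_step(x, h, c, W)
```

The persistent cell state c is what lets the network carry information across many time steps, which is why LSTMs suit the 24-hour-ahead temperature and demand horizons used in the paper.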

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 114
225 Large-Scale Screening for Membrane Protein Interactions Involved in Platelet-Monocyte Interactions

Authors: Yi Sun, George Ed Rainger, Steve P. Watson

Abstract:

Background: Beyond their classical roles in haemostasis and thrombosis, platelets are important in the initiation and development of various thrombo-inflammatory diseases. In atherosclerosis and deep vein thrombosis, for example, platelets bridge monocytes with the endothelium and form heterotypic aggregates with monocytes in the circulation. This can alter monocyte phenotype by inducing their activation and stimulating adhesion and migration. These interactions involve cell-surface receptor-ligand pairs on both cells. The known list is likely incomplete, as new interactions of importance to platelet biology continue to be discovered, as illustrated by our discovery of PEAR-1 binding to FcεR1α. Results: We have developed a highly sensitive avidity-based assay to identify novel extracellular interactions among 126 recombinantly expressed platelet cell-surface and secreted proteins involved in platelet aggregation. In this study, we will use this method to identify novel platelet-monocyte interactions, aiming to find ligands for orphan receptors and novel partners of well-known proteins. Identified interactions will be studied in preliminary functional assays to demonstrate relevance to the inflammatory processes supporting atherogenesis. Conclusions: Platelet-monocyte interactions are essential for the development of thrombo-inflammatory disease. Until relatively recently, available technologies limited us to studying individual protein interactions one at a time. This study proposes, for the first time, a systematic, large-scale approach to the cell-surface platelet-monocyte interactions using a reliable screening method we have developed. If successful, it is likely to identify previously unknown ligands for important receptors, which will be investigated in detail, and to provide the field with a list of novel interactions.
This should stimulate studies on alternative therapeutic strategies for vascular inflammatory disorders such as atherosclerosis, DVT, and sepsis, and other clinically important inflammatory conditions.

Keywords: membrane proteins, large-scale screening, platelets, recombinant expression

Procedia PDF Downloads 124
224 Experimental and Theoretical Characterization of Supramolecular Complexes between 7-(Diethylamino)Quinoline-2(1H)-One and Cucurbit[7]uril

Authors: Kevin A. Droguett, Edwin G. Pérez, Denis Fuentealba, Margarita E. Aliaga, Angélica M. Fierro

Abstract:

Supramolecular chemistry is a field of growing interest. In particular, the formation of host-guest complexes between macrocycles and dyes is highly attractive due to potential applications such as drug delivery, catalytic processes, and sensing. Among the many dyes in the literature, the quinolinone derivatives have good optical properties and chemical and thermal stability, making them suitable for developing fluorescent probes. Among macrocycles, the cucurbiturils are a water-soluble family with a hydrophobic cavity and two identical carbonyl portals. Thermodynamic analysis of such supramolecular systems helps in understanding the affinity between host and guest, their interactions, and the main stabilization energy of the complex. In this work, two 7-(diethylamino)quinolin-2(1H)-one derivatives (QD1-2) and their interaction with cucurbit[7]uril (CB[7]) were studied from an experimental and in-silico point of view. Experimentally, the complexes showed a 1:1 stoichiometry by HRMS-ESI and isothermal titration calorimetry (ITC). Inclusion of the derivatives in the macrocycle leads to an increase in fluorescence intensity, and the pKa value of QD1-2 exhibits almost no variation after complex formation. The thermodynamics of the inclusion complexes, investigated by ITC, demonstrates a non-classical hydrophobic effect with a minimal contribution from the entropy term and binding constants on the order of 10^6 for both ligands. Additionally, molecular dynamics studies were carried out for 300 ns in an explicit solvent at NTP conditions. Our findings show that the complex remains stable during the simulation (RMSD ~1 Å) and that hydrogen bonds contribute to the stabilization of the systems.
Finally, thermodynamic parameters from MMPBSA calculations were obtained to generate computational insights for comparison with the experimental results.
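A binding constant on the order of 10^6, as measured here by ITC, fixes the Gibbs free energy of association through dG = -RT ln K. A quick magnitude check (K, and hence dG, are illustrative order-of-magnitude values, not the measured ones):

```python
import math

R = 8.314      # gas constant, J / (mol K)
T = 298.15     # K, assuming roughly room-temperature ITC conditions
K = 1.0e6      # association constant, order of magnitude from the abstract

dG = -R * T * math.log(K)   # Gibbs free energy of binding, J/mol
dG_kJ = dG / 1000.0         # roughly -34 kJ/mol
```

The "non-classical hydrophobic effect" mentioned in the abstract means this favorable dG is dominated by the enthalpic term rather than by entropy, which is exactly what the small entropic contribution from the ITC fit indicates.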

Keywords: host-guest complexes, molecular dynamics, quinolin-2(1H)-one derivatives dyes, thermodynamics

Procedia PDF Downloads 63
223 Antioxidant Ayurvedic Rasayana Herbs Concept to Disease Management

Authors: Mohammed Khalil Ur Rahman, Khanita Aammatullh

Abstract:

Rasayana is one of the eight clinical specialities of classical Ayurveda. The disease-preventive and health-promotive approach of Ayurveda, which considers the whole body, mind, and spirit while dealing with the maintenance of health, the promotion of health, and the treatment of ailments, is holistic and finds increasing acceptance in many regions of the world. Ancient Ayurvedic physicians developed certain dietary and therapeutic measures to arrest or delay ageing and to rejuvenate the whole functional dynamics of the body system. This revitalization and rejuvenation is known as 'Rasayana chikitsa' (rejuvenation therapy). Traditionally, Rasayana drugs are used against a plethora of seemingly diverse disorders with no pathophysiological connections according to modern medicine. Though this group of plants generally possesses strong antioxidant activity, only a few have been investigated in detail. About 100 disorders, including rheumatoid arthritis, hemorrhagic shock, CVS disorders, cystic fibrosis, metabolic disorders, neurodegenerative diseases, gastrointestinal ulcerogenesis, and AIDS, have been reported to be mediated by reactive oxygen species. In this review, the role of free radicals in these diseases is briefly reviewed, and 'Rasayana' plants with potent antioxidant activity are reviewed for their traditional uses and mechanisms of antioxidant action. Fifteen such plants are dealt with in detail, and some additional, less-studied plants are also reviewed briefly. The Rasayanas are rejuvenators and nutritional supplements and possess strong antioxidant activity. They also act antagonistically on the oxidative stressors that give rise to the formation of different free radicals.
Ocimum sanctum, Tinospora cordifolia, Emblica officinalis, Convolvulus pluricaulis, Centella asiatica, Bacopa monniera, Withania somnifera, Triphala rasayana, Chyawanprash, and Brahma rasayana are very important rasayanas described in Ayurveda and supported by recent research.

Keywords: rasayana, antioxidant activity, Bacopa monniera, Withania somnifera Triphala, chyawanprash

Procedia PDF Downloads 242
222 Research and Innovations in Music Teacher Training Programme in Hungary

Authors: Monika Benedek

Abstract:

Improvisation is an integral part of music education programmes worldwide, since teachers recognize that improvisation helps broaden stylistic knowledge, develops creativity and various musical skills, in particular aural skills, and also motivates the learning of music theory. In Hungary, where the Kodály concept is a core element of music teacher education, improvisation has been a relatively neglected subject in both primary school and classical music school curricula. Improvisation was therefore an important theme of a year-long research project carried out at the Liszt Academy of Music in Budapest. The project aimed to develop the music teacher training programme and, among other goals, focused on testing how improvisation could be used as a teaching tool to improve students' musical reading and writing skills and creative musical skills. Teacher-researchers first tested various teaching approaches to improvisation with numerous teaching modules in music lessons at public schools and music schools. Data were collected from videos of lessons and from teachers' reflective notes. After the data were analysed and the teaching modules developed, all modules were tested again in a pilot course of 30 contact lessons for music teachers. Teachers gave written feedback on the pilot programme, tested two modules of their choice in their own teaching, and wrote reflective comments about their experiences in applying the improvisation teaching modules. The overall results indicated that improvisation can be an innovative approach to teaching various musical subjects, in particular solfege, music theory, and instrument, in either individual or group instruction. Improvisation, especially with the application of relative solmisation and singing, appeared to be a beneficial tool for developing various musicianship skills of students and teachers, in particular aural skills, musical reading and writing skills, and creative musical skills.
Furthermore, improvisation seemed to be a motivating tool for learning music theory, creating a bridge between various musical styles. This paper reports on the results of the research project.

Keywords: improvisation, Kodály concept, music school, public school, teacher training

Procedia PDF Downloads 116
221 Ancient Iran Water Technologies

Authors: Akbar Khodavirdizadeh, Ali Nemati Babaylou, Hassan Moomivand

Abstract:

Techniques for gaining access to water have been among the factors shaping human civilizations in the ancient world. Making surface water and groundwater accessible on the ground has required ingenious techniques throughout human history. In this study, while examining the water technique of ancient Iran based on qanats, the water supply systems of other regions of the ancient world were also studied and compared. Six regions were studied with respect to their water supply systems: ancient Greece (Archaic, 750-480 BC, and Classical, 480-323 BC), Urartu at Tuspa (850-600 BC), Petra (106-168 BC), ancient Rome (265 BC), the ancient United States (1450 BC), and ancient Iran. Past water technologies in these areas include water transmission systems in early urban centers, water structures for water control, bridges for water transfer, waterways, rainfall storage, pipes of various materials (ceramic, lead, wood, and stone) for water transfer, flood control, water reservoirs, dams, channels, wells, and qanats. The central plateau of Iran is an arid and desert region. Archaeological, geomorphological, and paleontological studies of the central Iranian plateau show that without the use of qanats, urban civilization in this region would have been difficult and even impossible. The Zarch qanat is the most important qanat in the Yazd region: a plain qanat with a gallery length of 80 km, a mother well 85 m deep, and 2,115 well shafts. The main purpose of building the Zarch qanat was to reach the groundwater source and transfer the water to the surface.
Its structure and its technique for transferring water from the groundwater source to the surface distinguish it greatly from other water techniques of the ancient world. The results show that the study of water technologies in antiquity is very important for understanding the history of humanity's use of hydraulic techniques.

Keywords: ancient water technologies, groundwaters, qanat, human history, Ancient Iran

Procedia PDF Downloads 87
220 On Cold Roll Bonding of Polymeric Films

Authors: Nikhil Padhye

Abstract:

Recently, a new phenomenon has been reported for bonding polymeric films in the solid state at ambient temperatures well below the glass transition temperature of the polymer. This is achieved by bulk plastic compression of polymeric films held in contact. Here we analyze the process of cold-rolling of polymeric films via finite element simulations and illustrate a flexible and modular experimental rolling apparatus that can achieve bonding of polymeric films through cold-rolling. First, the classical theory of rolling a rigid-plastic thin strip is used to estimate deformation fields such as strain rates, velocities, and loads in rolling the polymeric films at the specified feed rates and desired levels of thickness reduction. The predicted slow strain rates, particularly at ambient temperatures during rolling, and the moderate levels of plastic deformation (at which the Bauschinger effect can be neglected for the particular class of polymeric materials studied here) greatly simplify the task of material modeling and allow us to deploy a computationally efficient, yet accurate, finite-deformation, rate-independent elastic-plastic material model (including isotropic hardening) for analyzing the rolling of these polymeric films. The interfacial behavior between the roller and polymer surfaces is modeled with Coulomb friction, consistent with the rate-independent behavior. The finite-deformation elastic-plastic material behavior, based on (i) the additive decomposition of the stretching tensor (D = De + Dp, i.e., a hypoelastic formulation) with incrementally objective time integration and (ii) the multiplicative decomposition of the deformation gradient (F = FeFp) into elastic and plastic parts, is programmed and run for cold-rolling within ABAQUS/Explicit. Predictions from the two formulations, hypoelastic and multiplicative decomposition, closely match.
We find that no specialized hyperelastic/viscoplastic model is required to describe the behavior of the polymeric films under the conditions described here, thereby speeding up the computation.
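The two kinematic descriptions compared in the abstract can be written compactly in standard notation (superscripts e and p denote the elastic and plastic parts):

```latex
% Hypoelastic formulation: additive split of the stretching tensor
\mathbf{D} = \mathbf{D}^{e} + \mathbf{D}^{p}
% Finite-strain formulation: multiplicative split of the deformation gradient
\mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{p}
```

The close agreement reported between the two formulations is what justifies using the cheaper rate-form model for this rolling problem.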

Keywords: polymer plasticity, bonding, deformation induced mobility, rolling

Procedia PDF Downloads 160
219 Design and Development of an Innovative MR Damper Based on Intelligent Active Suspension Control of a Malaysia's Model Vehicle

Authors: L. Wei Sheng, M. T. Noor Syazwanee, C. J. Carolyna, M. Amiruddin, M. Pauziah

Abstract:

This paper examines alternatives to the classical passive suspension system, revised to improve comfort and handling performance. An active magnetorheological (MR) suspension system is proposed to explore active suspension, given its freedom to independently specify the characteristics of load carrying, handling, and ride quality. A Malaysian quarter-car model with two degrees of freedom (2DOF) is designed and constructed to simulate the actions of an active vehicle suspension system. The structure of a conventional twin-tube shock absorber is modified both internally and externally to accommodate the active suspension system. The peripheral structure of the shock absorber is altered to enable assembly and disassembly of the damper through a non-permanent joint, and the stress in the designed joint is simulated using finite element analysis. The internal part, where an electrified 24 AWG copper coil is wound, is simulated using Finite Element Method Magnetics to obtain the magnetic flux density inside the MR damper. The primary purpose of this approach is to reduce the vibration transmitted from road surface irregularities while maintaining solid manoeuvrability. The aim of this research is to develop an intelligent control system for a continuously damped automotive suspension. Ride quality is improved by reducing the vertical body acceleration experienced by the car body under disturbances from speed bumps and random road roughness. Findings from this research are expected to enhance ride quality and thus prevent the deteriorating effects of vibration on the vehicle's condition as well as the passengers' well-being.
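The quarter-car 2DOF model behind such a test rig is standard: a sprung mass (body) on a spring-damper over an unsprung mass (wheel) on a tyre spring. A minimal simulation sketch follows; all parameter values are hypothetical, not the rig's, and a semi-active MR damper would modulate the damping term cs in real time.

```python
# Passive quarter-car 2DOF model, integrated with semi-implicit Euler.
# Hypothetical parameters; flat road input (zr = 0).
ms, mu = 250.0, 40.0          # sprung / unsprung masses (kg)
ks, kt = 16000.0, 160000.0    # suspension and tyre stiffnesses (N/m)
cs = 1500.0                   # suspension damping coefficient (N s/m)

xs, xu, vs, vu = 0.05, 0.0, 0.0, 0.0   # body starts displaced 5 cm
dt = 1e-4
for _ in range(int(3.0 / dt)):          # simulate 3 s
    fs = ks * (xu - xs) + cs * (vu - vs)   # suspension force on the body
    ft = kt * (0.0 - xu)                   # tyre force on the wheel
    vs += dt * fs / ms                     # update velocities first...
    vu += dt * (-fs + ft) / mu
    xs += dt * vs                          # ...then positions (semi-implicit)
    xu += dt * vu
```

With damping present, the initial body displacement decays away within a few seconds; an MR-based controller aims to shape exactly this decay (and the associated body acceleration) under road disturbances.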

Keywords: active suspension, FEA, magneto rheological damper, Malaysian quarter car model, vibration control

Procedia PDF Downloads 188
218 Total Plaque Area in Chronic Renal Failure

Authors: Hernán A. Perez, Luis J. Armando, Néstor H. García

Abstract:

Background and aims: Cardiovascular disease rates are very high in patients with chronic renal failure (CRF), but the underlying mechanisms are incompletely understood. Traditional cardiovascular risk factors do not explain the increased risk, and observational studies have found paradoxical or absent associations between classical risk factors and mortality in dialysis patients. Large randomized controlled trials, the 4D Study, AURORA, and ALERT, found that statin therapy in CRF does not reduce cardiovascular events. These results may reflect the 'accelerated atherosclerosis' observed in these patients. The objective of this study was to investigate whether carotid total plaque area (TPA), a measure of carotid plaque burden, increases at progressively lower creatinine clearance in patients with CRF. We studied a cohort of patients with CRF not on dialysis, reasoning that risk factor associations might be more easily discerned before end-stage renal disease. Methods: The Blossom DMO Argentina ethics committee approved the study, and informed consent was obtained from each participant. We performed a cohort study in 412 patients with Stage 1, 2, and 3 CRF. Clinical and laboratory data were obtained. TPA was determined using bilateral carotid ultrasonography, and the Modification of Diet in Renal Disease estimation formula was used to determine renal function. ANOVA was used when appropriate. Results: The Stage 1 CRF group (n=16, 43±2 yo) had a blood pressure of 123±2/78±2 mmHg, BMI 30±1, LDL cholesterol 145±10 mg/dl, HbA1c 5.8±0.4%, and the lowest TPA, 25.8±6.9 mm². Stage 2 CRF patients (n=231, 50±1 yo) had a blood pressure of 132±1/81±1 mmHg, LDL cholesterol 125±2 mg/dl, HbA1c 6±0.1%, and TPA 48±10 mm² (p<0.05 vs. CRF stage 1), while Stage 3 CRF patients (n=165, 59±1 yo) had a blood pressure of 134±1/81±1 mmHg, LDL cholesterol 125±3 mg/dl, HbA1c 6±0.1%, and TPA 71±6 mm² (p<0.05 vs. CRF stages 1 and 2).
Conclusion: Our data indicate that TPA increases as renal function deteriorates and is not related to LDL cholesterol or triglyceride levels. We suggest that mechanisms other than the classical ones are responsible for the observed excess of cardiovascular disease in CKD patients, and that determination of total plaque area should be used to measure the effects of antiatherosclerotic therapy.
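The group comparison reported above can be illustrated with a one-way ANOVA. The sketch below uses synthetic TPA values centred on the reported group means, with assumed spreads (the abstract reports standard errors, not raw data), so it is an illustration of the test, not a reproduction of the study:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical TPA samples (mm^2) per CKD stage, centred on the reported
# means (25.8, 48, 71) with assumed standard deviations; NOT the study's data.
tpa_s1 = rng.normal(25.8, 25.0, 16)    # Stage 1, n = 16
tpa_s2 = rng.normal(48.0, 30.0, 231)   # Stage 2, n = 231
tpa_s3 = rng.normal(71.0, 30.0, 165)   # Stage 3, n = 165

f_stat, p_value = f_oneway(tpa_s1, tpa_s2, tpa_s3)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
```

A significant p-value would then justify the pairwise stage comparisons quoted in the results.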

Keywords: hypertension, chronic renal failure, atherosclerosis, cholesterol

Procedia PDF Downloads 248
217 A Review on Development of Pedicle Screws and Characterization of Biomaterials for Fixation in Lumbar Spine

Authors: Shri Dubey, Jamal Ghorieshi

Abstract:

Instability of the lumbar spine is caused by various factors, including degenerative discs, herniated discs, traumatic injuries, and other disorders. Pedicle screws are widely used as the main fixation device to construct rigid linkages of vertebrae and provide a fully functional, stable spine. Various technologies and methods have been used to restore stabilization; however, loosening of pedicle screws is the main cause of concern for neurosurgeons. This can happen because of poor bone quality, as in osteoporosis, as well as the type of pedicle screw used. The compatibility and stability of pedicle screws with bone depend on design (thread design, length, and diameter) and material. Grip length and pullout strength affect the motion and stability of the spine as it goes through different phases such as extension, flexion, and rotation. The pullout strength of augmented pedicle screws is increased in both primary and salvage procedures, by 119% (p = 0.001) and 162% (p = 0.01), respectively. Self-centering pedicle screws inserted at different trajectories (0°, 10°, 20°, and 30°) show the same pullout strength as insertion in a straight-ahead trajectory. Pedicle screws with an outer cylindrical and inner conical shape show the highest pullout strength in Grade 5 and Grade 15 foams (synthetic bone), and an outer cylindrical, inner conical shape with a V-shaped thread exhibits the highest pullout strength in all foam grades. The maximum observed pullout strength occurs in the axial pullout configuration at 0°. For Grade 15 (240 kg/m³) foam, there is a decline in pullout strength, and the largest decrease in pullout strength is reported for Grade 10 (160 kg/m³) foam. A maximum pullout strength of 2176 N was observed with 0.32 g/cm³ Sawbones across all densities. The Type 1 pedicle screw shows the best fixation owing to its smaller conical core diameter and smaller thread pitch (Screw 2: 2 mm; Screws 1 and 3: 3 mm).

Keywords: polymethylmethacrylate, PMMA, classical pedicle screws, CPS, expandable poly-ether-ether-ketone shell, EPEEKS, translaminar facet screw, TLFS, poly-ether-ether-ketone, PEEK, transfacetopedicular screw, TFPS

Procedia PDF Downloads 135
216 A Kierkegaardian Reading of Iqbal's Poetry as a Communicative Act

Authors: Sevcan Ozturk

Abstract:

The overall aim of this paper is to present a Kierkegaardian approach to Iqbal's use of literature as a form of communication. Despite belonging to different historical, cultural, and religious backgrounds, the philosophical approaches of Soren Kierkegaard, 'the father of existentialism,' and Muhammad Iqbal, 'the spiritual father of Pakistan,' present certain parallels. Both Kierkegaard and Iqbal take human existence as the starting point for their reflections, emphasise the subject of becoming a genuine religious personality, and develop a notion of the self. In doing so, they adopt parallel methods, employ literary techniques and poetic forms, and use their literary works as a form of communication. The problem is that Iqbal does not provide a clear account of his method as Kierkegaard does in his works. As a result, Iqbal's literary approach appears to be a collection of contradictions, mainly because, although he writes most of his works in poetic form, he condemns all kinds of art, including poetry. Moreover, while attacking Islamic mysticism, he at the same time uses classical literary forms and a number of traditional mystical, poetic symbols. This paper will argue that the contradictions found in Iqbal's approach are actually a significant part of his way of communicating with his reader. It is the contention of this paper that, with the help of the parallels between the literary and philosophical theories of Kierkegaard and Iqbal, applying Kierkegaard's method to Iqbal's use of poetry as a communicative act makes it possible to dispel the seeming ambiguities in Iqbal's literary approach. The application of Kierkegaard's theory to Iqbal's literary method will first include an analysis of the main principles of Kierkegaard's own literary technique of 'indirect communication,' a crucial term of his existentialist philosophy.
Second, the clash between what Iqbal says about art and poetry and what he does will be highlighted in the light of the Kierkegaardian theory of indirect communication. It will be argued that Iqbal's literary technique can be considered a form of 'indirect communication,' and that reading his technique in this way helps dispel the contradictions in his approach. It is hoped that this paper will cultivate a dialogue between those who work in the fields of comparative philosophy, Kierkegaard studies, existentialism, contemporary Islamic thought, Iqbal studies, and literary criticism.

Keywords: comparative philosophy, existentialism, indirect communication, intercultural philosophy, literary communication, Muhammad Iqbal, Soren Kierkegaard

Procedia PDF Downloads 295
215 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements; since this is impossible, points at regular intervals are measured to characterize the surface, and a DTM of the Earth is generated from them. Hitherto, classical measurement techniques and photogrammetry have seen widespread use in DTM construction; at present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, owing to its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications, creating 3D point clouds from numerous point measurements. More recently, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets using a random algorithm, representing 75, 50, 25, and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method.
The results show that the random data reduction method can be used to reduce image-based point cloud datasets to the 50% density level while still maintaining the quality of the DTM.
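The reduce-and-compare workflow can be sketched as follows. The terrain is synthetic, and SciPy's linear interpolation stands in for the Kriging interpolator actually used in the study:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(42)
# Synthetic "point cloud": 5000 (x, y) points with a smooth elevation field.
pts = rng.uniform(0, 100, (5000, 2))
z = 10 * np.sin(pts[:, 0] / 20) + 0.1 * pts[:, 1]

# Reference DTM interpolated from the full (100%) data set.
gx, gy = np.meshgrid(np.linspace(5, 95, 40), np.linspace(5, 95, 40))
full_dtm = griddata(pts, z, (gx, gy), method="linear")

# Randomly reduce to 75/50/25/5% and compare each DTM against the reference.
for frac in (0.75, 0.50, 0.25, 0.05):
    keep = rng.choice(len(pts), int(frac * len(pts)), replace=False)
    dtm = griddata(pts[keep], z[keep], (gx, gy), method="linear")
    rmse = np.sqrt(np.nanmean((dtm - full_dtm) ** 2))
    print(f"{frac:.0%} of points: RMSE = {rmse:.3f} m")
```

The study's conclusion corresponds to the RMSE staying acceptably small down to the 50% subset.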

Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging

Procedia PDF Downloads 126
214 Mental Wellbeing Using Music Intervention: A Case Study of Therapeutic Role of Music, From Both Psychological and Neurocognitive Perspectives

Authors: Medha Basu, Kumardeb Banerjee, Dipak Ghosh

Abstract:

After the massive blow of the COVID-19 pandemic, several health hazards have been reported all over the world. Serious cases of Major Depressive Disorder (MDD) are reported in about 15% of the global population, making depression one of the leading mental health diseases, as reported by the World Health Organization. Various psychological and pharmacological treatment techniques are regularly being reported. Music, a globally accepted mode of entertainment, is often used as a therapeutic measure to treat various health conditions. We have tried to understand how Indian classical music can affect the overall well-being of the human brain. A case study is reported here in which a flute rendition was chosen from a detailed audience response survey, and the effects of that clip on the human brain were studied from both psychological and neural perspectives. Drawing on internationally accepted depression-rating scales, two questionnaires were designed to understand both the prolonged and the immediate effect of music on various emotional states of human lives. Thereafter, from EEG experiments on 5 participants using the same clip, the parameter 'ALAY', frontal alpha asymmetry (the difference in alpha power between the right and left frontal hemispheres), was calculated. The work of Richard Davidson shows that an increase in the ALAY value indicates a decrease in depressive symptoms. Using the non-linear technique of MFDFA in the EEG analysis, we also calculated frontal asymmetry using the complexity values of alpha waves in both hemispheres. The results show a positive correlation between the psychological survey and the EEG findings, revealing the prominent role of music on the human brain in decreasing mental unrest and increasing overall well-being.
With this study, we aim to propose a scientific foundation for music therapy, especially from a neurocognitive perspective, with appropriate neural biomarkers to understand the positive and remedial effects of music on the human brain.
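The ALAY parameter described above is conventionally computed as the difference of log alpha power between the right and left frontal channels. A minimal sketch on synthetic two-channel data (the channel roles, sampling rate, and 8-13 Hz band limits are illustrative assumptions, not taken from the study):

```python
import numpy as np
from scipy.signal import welch

fs = 256  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# Hypothetical left/right frontal traces: a 10 Hz alpha component plus noise,
# with more alpha amplitude on the right channel.
left = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
right = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)

def alpha_power(x):
    """Total Welch PSD power in the 8-13 Hz alpha band."""
    f, pxx = welch(x, fs, nperseg=2 * fs)
    band = (f >= 8) & (f <= 13)
    return pxx[band].sum()

# Frontal alpha asymmetry: ln(right alpha power) - ln(left alpha power)
alay = np.log(alpha_power(right)) - np.log(alpha_power(left))
print(f"ALAY = {alay:.3f}")
```

Here the right channel carries more alpha power, so ALAY is positive; in the Davidson framework cited in the abstract, a larger ALAY is read as fewer depressive symptoms.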

Keywords: music therapy, EEG, psychological survey, frontal alpha asymmetry, wellbeing

Procedia PDF Downloads 6
213 Exploring Regularity Results in the Context of Extremely Degenerate Elliptic Equations

Authors: Zahid Ullah, Atlas Khan

Abstract:

This research endeavors to explore the regularity properties associated with a specific class of equations, namely extremely degenerate elliptic equations. These equations hold significance in understanding complex physical systems like porous media flow, with applications spanning various branches of mathematics. The focus is on unraveling and analyzing regularity results to gain insights into the smoothness of solutions for these highly degenerate equations. Elliptic equations, fundamental in expressing and understanding diverse physical phenomena through partial differential equations (PDEs), are particularly adept at modeling steady-state and equilibrium behaviors. However, within the realm of elliptic equations, the subset of extremely degenerate cases presents a level of complexity that challenges traditional analytical methods, necessitating a deeper exploration of mathematical theory. While elliptic equations are celebrated for their versatility in capturing smooth and continuous behaviors across different disciplines, the introduction of degeneracy adds a layer of intricacy. Extremely degenerate elliptic equations are characterized by coefficients approaching singular behavior, posing non-trivial challenges in establishing classical solutions. Still, the exploration of extremely degenerate cases remains uncharted territory, requiring a profound understanding of mathematical structures and their implications. The motivation behind this research lies in addressing gaps in the current understanding of regularity properties within solutions to extremely degenerate elliptic equations. The study of extreme degeneracy is prompted by its prevalence in real-world applications, where physical phenomena often exhibit characteristics defying conventional mathematical modeling. Whether examining porous media flow or highly anisotropic materials, comprehending the regularity of solutions becomes crucial. 
Through this research, the aim is to contribute not only to the theoretical foundations of mathematics but also to the practical applicability of mathematical models in diverse scientific fields.

Keywords: elliptic equations, extremely degenerate, regularity results, partial differential equations, mathematical modeling, porous media flow

Procedia PDF Downloads 35
212 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are among the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which items are selected by maximum-information criteria (or by selection from the posterior) and ability is estimated with maximum-likelihood (ML) or maximum a posteriori (MAP) estimators. This study aims at combining classical and Bayesian approaches to IRT to create a dataset that is then fed to a neural network, which automates the process of ability estimation, and at comparing the result to traditional CAT models designed using IRT. The study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementation. On creating the model and comparing, it is found that the neural network based model performs 7-10% worse than the IRT model for score estimation. Although it performs worse than the IRT model, the neural network model can be used beneficially in back-ends to reduce time complexity: the IRT model has to re-calculate the ability every time it receives a request, whereas the prediction from an existing trained neural network regressor can be obtained in a single step. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets beyond the normal IRT feature set and use a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features, such as test type, could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and could be used to learn functions expressed as models that are not trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments.
This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher-quality testing.
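The ML ability estimation that the neural network is meant to replace can be sketched with a two-parameter logistic (2PL) IRT model and a grid-search maximum-likelihood estimate; the item parameters and responses below are simulated, not drawn from the study:

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: P(correct | ability theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(0)
n_items = 30
a = rng.uniform(0.8, 2.0, n_items)   # item discriminations (assumed range)
b = rng.normal(0, 1, n_items)        # item difficulties (assumed range)
true_theta = 0.7
resp = rng.random(n_items) < p_correct(true_theta, a, b)  # simulated answers

# Maximum-likelihood ability estimate by grid search over theta.
grid = np.linspace(-4, 4, 801)
loglik = np.array([
    np.sum(np.where(resp,
                    np.log(p_correct(g, a, b)),
                    np.log(1 - p_correct(g, a, b))))
    for g in grid
])
theta_ml = grid[np.argmax(loglik)]
print(f"true theta = {true_theta}, ML estimate = {theta_ml:.2f}")
```

A regressor trained on (response pattern, theta) pairs replaces this per-request likelihood maximisation with a single forward pass, which is the back-end speed-up the abstract describes.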

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 157
211 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 -anechoic-, 1, 2, and 3 s, approximately), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios and recording distances, as well as under reverberant conditions with variations of recording distance. LNCC showed performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, thus compensating for the degradation of the music signal and improving the accuracy of the chord recognition system.
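As background, a chroma representation folds spectral energy into the 12 pitch classes. The Deep Chroma Extractor learns this mapping with a DNN; a naive FFT-based version (a simplified sketch, not the paper's method or the LNQT filters) looks like this:

```python
import numpy as np

def naive_chroma(signal, sr, n_fft=4096):
    """Fold one FFT frame's magnitude spectrum into 12 pitch classes (C = 0)."""
    spec = np.abs(np.fft.rfft(signal[:n_fft] * np.hanning(n_fft)))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    chroma = np.zeros(12)
    for f, mag in zip(freqs[1:], spec[1:]):        # skip the DC bin
        midi = 69 + 12 * np.log2(f / 440.0)        # A4 (440 Hz) -> MIDI 69
        chroma[int(round(midi)) % 12] += mag
    return chroma / chroma.max()

sr = 8000
t = np.arange(0, 1, 1.0 / sr)
c = naive_chroma(np.sin(2 * np.pi * 440 * t), sr)  # a pure A4 tone
```

For a 440 Hz tone, the strongest bin is pitch class 9 (A). The LNQT filters proposed in the paper refine this binning to quarter-tone resolution and add local normalization for robustness to reverberation.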

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 204
210 Integration of Agile Philosophy and Scrum Framework to Missile System Design Processes

Authors: Misra Ayse Adsiz, Selim Selvi

Abstract:

In today's world, technology is competing with time. In order to catch up with the world's companies and adapt quickly to change, it is necessary to speed up processes and keep pace with the rate of technological change. Missile system design processes handled with classical methods fall behind in this race, because customer requirements are not clear and demands change again and again during the design process. Therefore, a methodology suitable for the dynamics of missile system design has been investigated, and the processes used to keep up with the era are examined. When commonly used design processes are analyzed, it is seen that none of them is dynamic enough for today's conditions, so a hybrid design process is established. After a detailed review of the existing processes, it was decided to focus on the Scrum framework and agile philosophy. Scrum is a process framework focused on developing software and handling change management with rapid methods; agile philosophy, in addition, is intended to respond quickly to change. In this study, the aim is to integrate the Scrum framework and agile philosophy, the most appropriate approaches for rapid production and adaptation to change, into the missile system design process. With this approach, the design team involved in the system design process stays in communication with the customer and follows an iterative approach to change management. These methods, currently used in the software industry, have been integrated with the product design process. A team is created for the system design process, and the Scrum roles are realized with the customer included: a Scrum team consists of the product owner, the development team, and the Scrum master. Scrum events, which are short, purposeful, and time-limited, are organized to serve coordination rather than long meetings.
Instead of the classical system design methods used in product development studies, a missile design is carried out with this blended method. With the help of this design approach, it becomes easier to anticipate changing customer demands, produce quick solutions to those demands, and combat uncertainties in the product development process. With the feedback of the customer included in the process, the work proceeds towards marketing, design, and financial optimization.

Keywords: agile, design, missile, scrum

Procedia PDF Downloads 147
209 Energy Efficiency of Secondary Refrigeration with Phase Change Materials and Impact on Greenhouse Gases Emissions

Authors: Michel Pons, Anthony Delahaye, Laurence Fournaison

Abstract:

Secondary refrigeration consists of splitting large-size direct-cooling units into volume-limited primary cooling units complemented by secondary loops for transporting and distributing cold. Such a design reduces refrigerant leaks, which are a source of greenhouse gases emitted into the atmosphere. However, inserting the secondary circuit between the primary unit and the 'users' heat exchangers (UHX) increases the energy consumption of the whole process, which induces an indirect emission of greenhouse gases. It is thus important to check whether that efficiency loss is sufficiently limited for the change to be globally beneficial to the environment. Among the likely secondary fluids, phase change slurries offer several advantages: they transport latent heat, they stabilize the heat exchange temperature, and the former evaporators can still be used as UHXs. The temperature level can also be adapted to the desired cooling application. Herein, the slurry {ice in mono-propylene-glycol solution} (melting temperature Tₘ of 6°C) is considered for food preservation, and the slurry {mixed hydrate of CO₂ + tetra-n-butyl-phosphonium-bromide in aqueous solution of this salt + CO₂} (melting temperature Tₘ of 13°C) is considered for air conditioning. For the sake of thermodynamic consistency, the analysis encompasses the whole process, primary cooling unit plus secondary slurry loop, and the various properties of the slurries, including their non-Newtonian viscosity. The design of the whole process is optimized according to the properties of the chosen slurry and under explicit constraints. As a first constraint, all the units must deliver the same cooling power to the user. The other constraints concern the heat exchange areas, which are prescribed, and the flow conditions, which must prevent deposition and agglomeration of the solid particles transported in the slurry. Minimization of the total energy consumption leads to the optimal design.
In addition, the results are analyzed in terms of exergy losses, which highlights the couplings between the primary unit and the secondary loop. One important difference between the ice slurry and the mixed-hydrate one is the presence of gaseous carbon dioxide in the latter case. When the mixed-hydrate crystals melt in the UHX, CO₂ vapor is generated at a rate that depends on the phase change kinetics, significantly modifying the flow in the UHX and its heat and mass transfer properties. This effect has never been investigated before. Lastly, inserting the secondary loop between the primary unit and the users increases the temperature difference between the refrigerated space and the evaporator. This results in a loss of global energy efficiency, and therefore in increased energy consumption. The analysis shows that this loss of efficiency is not critical in the first case (Tₘ = 6°C), while the second case leads to more ambiguous results, partially because of the higher melting temperature. The consequences in terms of greenhouse gas emissions are also analyzed.
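The efficiency penalty caused by the extra temperature difference can be illustrated with an idealized Carnot comparison. The 5 K additional pinch attributed to the secondary loop and the 35 °C condensing temperature below are assumptions made for the sketch, not values from the paper:

```python
# Idealized Carnot comparison: inserting a secondary loop forces the primary
# evaporator to run colder than the refrigerated space by an extra pinch.
def carnot_cop(t_evap_c, t_cond_c):
    """Ideal (Carnot) cooling COP for evaporator/condenser temperatures in C."""
    t_evap, t_cond = t_evap_c + 273.15, t_cond_c + 273.15
    return t_evap / (t_cond - t_evap)

T_COND = 35.0    # assumed condensing temperature (C)
DT_LOOP = 5.0    # assumed extra pinch introduced by the slurry loop (K)

direct = carnot_cop(6.0, T_COND)             # evaporator at the 6 C melting point
indirect = carnot_cop(6.0 - DT_LOOP, T_COND) # evaporator 5 K colder
penalty = 1 - indirect / direct
print(f"direct COP = {direct:.2f}, indirect COP = {indirect:.2f}, "
      f"penalty = {penalty:.1%}")
```

Even this crude bound shows why the paper checks whether the efficiency loss stays small enough for the leak-reduction benefit to dominate.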

Keywords: exergy, hydrates, optimization, phase change material, thermodynamics

Procedia PDF Downloads 109
208 Impact of Expressive Writing on Creativity

Authors: Małgorzata Osowiecka

Abstract:

Negative emotions are usually seen as an inhibitor of creativity. On the other hand, it is worth noting that negative emotions may be good for our functioning: they enhance cognitive resources and improve evaluative processes. Moreover, maintaining a negative emotional state allows for cognitive reinterpretation of emotional stimuli, which is good for creativity, especially cognitive flexibility. Writing a diary, or writing about difficult emotional experiences in general, can be a way not only to improve mental health but also to enhance creative behavior. By translating difficult emotions to the verbal level and giving them 'a name' or 'a label', we can gain easier access to both the emotional content of an experience and its semantic content, without the need to speak out loud. Expressive writing improves academic results and the efficiency of working memory. The classical method of writing about emotions consists of a long-term process of describing negative experiences; the present research demonstrates the efficiency of this process over a shorter period, a single writing session, on a sample of school children. Participants performed a writing task on one of two topics: emotions connected with a negative experience (expressive writing) or content unconnected with a negative emotional state (writing about one's typical day). Creativity was measured by Guilford's Alternative Uses Task. Results showed that writing about negative emotions produces a higher level of divergent thinking on all three parameters: fluency, flexibility, and originality. After the writing task, the mood of the expressive writing participants remained more negative than the mood of the controls. Taking an expressive action after a difficult emotional experience can thus support functioning, as observed in the enhancement of divergent thinking.
Writing about emotions connected with a negative experience makes one more creative than writing about something unrelated to difficult emotional moments. The research suggests that young people should not demonize negative emotions: sometimes, properly applied, negative emotions can be the basis of creation. Preparation was supported by a Young Scientist University grant titled 'Dynamics of emotions in the creative process' from the Polish Ministry of Science and Higher Education.

Keywords: creativity, divergent thinking, emotions, expressive writing

Procedia PDF Downloads 164
207 Discerning Divergent Nodes in Social Networks

Authors: Mehran Asadi, Afrand Agah

Abstract:

In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report; the data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for evaluation and performance comparison of classification methods. In classification, the main objective is to categorize items and assign them to groups based on their properties and similarities. In data mining, recursive partitioning is used to probe the structure of a data set, which allows us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data; of course, we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods, and the method performed on this dataset is decision trees, with k-fold cross-validation to prune the tree.
A decision tree is a non-parametric classification method, which uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
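Although the study works in R, the same select-tree-complexity-by-cross-validation idea can be sketched in Python with scikit-learn, using a synthetic dataset in place of the social network data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the node-attribute table (not the UCI dataset).
X, y = make_classification(n_samples=600, n_features=8, n_informative=4,
                           random_state=0)

# "Prune" by picking the depth with the best 5-fold cross-validated accuracy.
scores = {}
for depth in (2, 4, 8, None):
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0)
    scores[depth] = cross_val_score(clf, X, y, cv=5).mean()

best_depth = max(scores, key=scores.get)
print({d: round(s, 3) for d, s in scores.items()}, "-> best depth:", best_depth)
```

Limiting the depth plays the role of pruning here; an unrestricted tree (`max_depth=None`) typically overfits, which is exactly what the cross-validation step guards against.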

Keywords: online social networks, data mining, social cloud computing, interaction and collaboration

Procedia PDF Downloads 124
206 Therapeutic Drug Monitoring by Dried Blood Spot and LC-MS/MS: Novel Application to Carbamazepine and Its Metabolite in Paediatric Population

Authors: Giancarlo La Marca, Engy Shokry, Fabio Villanelli

Abstract:

Epilepsy is one of the most common neurological disorders, with an estimated prevalence of 50 million people worldwide; twenty-five percent of the epilepsy population are children under the age of 15 years. For antiepileptic drugs (AEDs), there is a poor correlation between plasma concentration and dose, especially in children, which has been attributed to greater pharmacokinetic variability than in adults. Hence, therapeutic drug monitoring (TDM) is recommended to control toxicity while drug exposure is maintained. Carbamazepine (CBZ) is a first-line AED and the drug of first choice in trigeminal neuralgia. CBZ is metabolised in the liver into carbamazepine-10,11-epoxide (CBZE), its major metabolite, which is equipotent. This creates the need for an assay able to monitor the levels of both CBZ and CBZE. The aim of the present study was to develop and validate an LC-MS/MS method for simultaneous quantification of CBZ and CBZE in dried blood spots (DBS). The DBS technique overcomes many of the logistical problems, ethical issues, and technical challenges faced by classical plasma sampling, and LC-MS/MS has been regarded as superior to immunoassays and HPLC/UV methods owing to its better specificity and sensitivity and its lack of interference or matrix effects. Our method combines the advantages of the DBS technique and LC-MS/MS in clinical practice. Extraction was done using methanol-water-formic acid (80:20:0.1, v/v/v). Chromatographic elution was achieved using a linear gradient with a mobile phase consisting of acetonitrile-water-0.1% formic acid at a flow rate of 0.50 mL/min. The method was linear over the ranges 1-40 mg/L and 0.25-20 mg/L for CBZ and CBZE, respectively, with limits of quantification of 1.00 mg/L and 0.25 mg/L. Intra-day and inter-day assay precisions were found to be less than 6.5% and 11.8%, respectively.
An evaluation of the DBS technique was performed, including the effect of the extraction solvent, spot homogeneity, and stability in DBS. Results from a comparison with the plasma assay are also presented. The novelty of the present work lies in being the first to quantify CBZ and its metabolite from a single 3.2 mm DBS disc from a finger-prick sample (3.3-3.4 µl of blood) by LC-MS/MS in a 10-minute chromatographic run.
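Quantification over a validated linear range such as 1-40 mg/L reduces to fitting a calibration line and inverting it for unknowns. The sketch below uses hypothetical calibration standards and a made-up response factor, not the study's measurements:

```python
import numpy as np

# Hypothetical CBZ calibration standards across the validated 1-40 mg/L range.
conc = np.array([1.0, 5.0, 10.0, 20.0, 40.0])   # mg/L
area_ratio = 0.052 * conc + 0.003               # simulated analyte/IS peak-area ratio

# Least-squares calibration line: ratio = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area_ratio, 1)

def back_calc(ratio):
    """Back-calculate a concentration (mg/L) from a measured peak-area ratio."""
    return (ratio - intercept) / slope

r = np.corrcoef(conc, area_ratio)[0, 1]
print(f"slope = {slope:.4f}, intercept = {intercept:.4f}, r = {r:.5f}")
print(f"unknown at ratio 0.55 -> {back_calc(0.55):.1f} mg/L")
```

In validation terms, the correlation coefficient supports linearity over the range, and back-calculated standards must fall within accepted accuracy limits; concentrations below the 1 mg/L limit of quantification would not be reported.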

Keywords: carbamazepine, carbamazepine-10, 11-epoxide, dried blood spots, LC-MS/MS, therapeutic drug monitoring

Procedia PDF Downloads 387
205 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection

Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay

Abstract:

With the increase of credit card usage, the volume of credit card misuse also has significantly increased, which may cause appreciable financial losses for both credit card holders and financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and identifying illicit transactions as quickly as possible to protect themselves and their customers. Compounding on the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge to develop fraud detection that are accurate and efficient is substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transnational features to make fraud predictions. These models typically rely on some static physical characteristics, such as what the user knows (knowledge-based method), or what he/she has access to (object-based method). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while he/she is interacting with his/her electronic devices. These approaches rely on how people behave (instead of what they do), which cannot be easily forged. 
By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable and scalable models of credit card fraud detection.
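The rarity of fraudulent transactions mentioned in the abstract is the core modelling difficulty. A minimal, purely illustrative sketch (synthetic data and parameters are assumptions, not taken from any surveyed paper) of how a classical classifier can be reweighted to handle this imbalance:

```python
# Hypothetical sketch: class imbalance in fraud detection, handled with a
# class-weighted classical classifier (illustrative synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_legit, n_fraud = 990, 10  # fraud is a rare event
X_legit = rng.normal(0.0, 1.0, size=(n_legit, 4))   # "normal" transaction features
X_fraud = rng.normal(2.0, 1.0, size=(n_fraud, 4))   # shifted distribution for fraud
X = np.vstack([X_legit, X_fraud])
y = np.array([0] * n_legit + [1] * n_fraud)

# class_weight='balanced' upweights the rare fraud class so the model does
# not simply predict "legitimate" for every transaction.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
recall = clf.predict(X_fraud).mean()  # fraction of fraud cases caught
```

Without the reweighting, a model trained on such data can reach high accuracy while missing nearly all fraud, which is why surveys in this area report recall-oriented metrics.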

Keywords: credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey

Procedia PDF Downloads 89
204 Simulation and Thermal Evaluation of Containers Using PCM in Different Weather Conditions of Chile: Energy Savings in Lightweight Constructions

Authors: Paula Marín, Mohammad Saffari, Alvaro de Gracia, Luisa F. Cabeza, Svetlana Ushak

Abstract:

Climate control represents an important issue in the energy consumption of buildings and the associated expenses, both during installation and operation. The climate control of a building depends on several factors, among them location, orientation, architectural elements, and the energy sources used. To study the thermal behaviour of a building set-up, the present study uses the energy simulation program EnergyPlus. In recent years, energy simulation programs have become important tools for evaluating the thermal/energy performance of buildings and facilities. Moreover, finding new forms of passive conditioning in buildings is critical for energy saving. The use of phase change materials (PCMs) for heat storage applications has grown in importance due to their high efficiency. The climatic conditions of northern Chile, with high solar radiation, extreme temperature fluctuations ranging from -10°C to 30°C (city of Calama), and a low number of cloudy days during the year, are appropriate for exploiting solar energy and using passive systems in buildings. In addition, the extensive mining activities in northern Chile encourage the use of large numbers of containers to house workers during shifts. These containers are built with lightweight construction systems and require heating at night and cooling during the day, which increases HVAC electricity consumption. The use of PCM can improve thermal comfort and reduce energy consumption. The objective of this study was to evaluate the thermal and energy performance of containers of 2.5×2.5×2.5 m³, located in four cities of Chile: Antofagasta, Calama, Santiago, and Concepción.
The lightweight envelopes typically used in these building prototypes were evaluated by considering a container without PCM as the reference building and a container with PCM-enhanced envelopes as the test case; both have a door and a window in the same wall, oriented in two directions: north and south. To capture the thermal response of the containers across the seasons, the simulations covered a period of one year. The results show that, for all four cities studied, higher energy savings are obtained when the door and window face north, because of the higher incidence of solar radiation. The HVAC consumption and the energy savings (in %) for the north orientation of door and window are summarised. Simulation results show that in the city of Antofagasta 47% of the heating energy could be saved, while in Calama and Concepción the biggest savings are in cooling, since the PCM reduces almost all of the cooling demand. Currently, based on the simulation results, four containers have been constructed with the same structural characteristics used in the simulations, that is, containers with and without PCM, each with a door and a window in one wall. Two of these containers will be placed in Antofagasta and two in a copper mine near Calama, and all of them will be monitored for a period of one year. The simulation results will be validated against the experimental measurements and reported in the future.
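The heat-buffering effect that drives the reported savings comes from the PCM's latent heat of fusion. A back-of-the-envelope sketch (all material values are assumptions for a generic paraffin-based PCM, not figures from the study) of how much heat a thin PCM layer in one container wall can absorb per melt/freeze cycle:

```python
# Illustrative estimate with assumed values (not from the study): latent
# heat storage of a PCM layer on one wall of a 2.5 x 2.5 x 2.5 m container.
wall_area_m2 = 2.5 * 2.5          # one wall of the container
layer_thickness_m = 0.01          # assumed 1 cm PCM layer
density_kg_m3 = 800.0             # typical paraffin-based PCM (assumed)
latent_heat_j_kg = 180_000.0      # assumed latent heat of fusion, ~180 kJ/kg

pcm_mass_kg = wall_area_m2 * layer_thickness_m * density_kg_m3
stored_kwh = pcm_mass_kg * latent_heat_j_kg / 3.6e6  # J -> kWh

print(f"{pcm_mass_kg:.1f} kg of PCM buffers ~{stored_kwh:.2f} kWh per melt/freeze cycle")
```

Under these assumed values a single wall buffers a few kWh per daily cycle, which is the mechanism by which the PCM shaves the heating and cooling peaks of a lightweight envelope.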

Keywords: energy saving, lightweight construction, PCM, simulation

Procedia PDF Downloads 258
203 How Social Support, Interaction with Clients and Work-Family Conflict Contribute to Mental Well-Being for Employees in the Human Service System

Authors: Uwe C. Fischer

Abstract:

Mental health and well-being of employees working in the human service system are becoming increasingly important, given the rising rate of absenteeism at work. Besides individual capacities, social and community factors appear to be important in the work setting. Starting from a demand-resource framework that includes the classical demand-control aspects, the present study considered social support systems, the specific demands and resources of client work, and work-family conflict. We hypothesize that these factors are meaningfully associated with the mental quality of life of employees working in the social, educational, and health sectors. 1140 employees working in human service organizations (education, youth care, nursing, etc.) were surveyed about strains and resources at work (selected scales from the Salutogenetic Subjective Work Assessment, SALSA, and newly constructed scales for client work), work-family conflict, and mental quality of life from the German Short Form Health Survey. To account for the complex interplay of the variables, we conducted a multiple hierarchical regression analysis. One third of the total variance of mental quality of life can be explained by the variables in the model. When the variables concerning social influences were entered into the hierarchical regression, the influence of the work-related control resource decreased. Excessive workload, work-family conflict, social support by supervisors, co-workers, and persons outside work, as well as the strains and resources associated with client work, had significant regression coefficients. Conclusions: Social support systems are crucial in the social, educational, and health-related service sector with regard to their influence on mental well-being. The work-family conflict in particular highlights the importance of work-life balance. The specific strains and resources of client work, measured with newly constructed scales, also showed a great impact on mental health.
Therefore, occupational health promotion should focus more on the social factors within and outside the workplace.
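The analysis strategy described above, entering predictor blocks step by step and comparing explained variance, can be sketched as follows. The data here are synthetic stand-ins for the study's constructs (workload, control, support), so the R² values are illustrative only:

```python
# Minimal sketch of hierarchical regression: add predictor blocks in steps
# and compare the variance (R^2) explained at each step. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 300
workload = rng.normal(size=n)   # demand block
control = rng.normal(size=n)    # demand-control block
support = rng.normal(size=n)    # social support block
wellbeing = -0.5 * workload + 0.3 * control + 0.4 * support + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_step1 = r_squared([workload, control], wellbeing)           # step 1: demand-control
r2_step2 = r_squared([workload, control, support], wellbeing)  # step 2: + social support
print(f"step 1 R^2 = {r2_step1:.2f}, step 2 R^2 = {r2_step2:.2f}")
```

The increase in R² from step 1 to step 2 is the contribution of the social-support block, which is the kind of increment the study uses to argue for the importance of social factors.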

Keywords: client interaction, human service system, mental health, social support, work-family conflict

Procedia PDF Downloads 416
202 Multi-Agent System Based Solution for Operating Agile and Customizable Micro Manufacturing Systems

Authors: Dylan Santos De Pinho, Arnaud Gay De Combes, Matthieu Steuhlet, Claude Jeannerat, Nabil Ouerhani

Abstract:

The Industry 4.0 initiative has been launched to address major challenges related to ever-smaller batch sizes. The end-user demand for highly customized products requires highly adaptive production systems in order to preserve the efficiency of shop floors. Most classical software solutions that operate the manufacturing processes on a shop floor are based on rigid Manufacturing Execution Systems (MES), which are not capable of adapting the production order on the fly to changing demands and/or conditions. In this paper, we present a highly modular and flexible solution to orchestrate a set of production systems composed of a micro-milling machine tool, a polishing station, a cleaning station, a part-inspection station, and a rough-material store. The stations are installed according to a novel matrix configuration on a 3x3 vertical shelf. The cells of the shelf are connected through horizontal and vertical rails on which a set of shuttles circulates to transport the machined parts from one station to another. Our software solution for orchestrating the tasks of each station is based on a multi-agent system. Each station and each shuttle is operated by an autonomous agent, and all agents communicate with a central agent that holds all the information about the manufacturing order. The core innovation of this paper lies in the path planning of the shuttles, with two major objectives: 1) to reduce the waiting time of the stations and thus the cycle time of the entire part, and 2) to reduce disturbances, such as vibrations generated by the shuttles, which strongly affect the manufacturing process and thus the quality of the final part. Simulation results show that the cycle time of the parts is reduced by up to 50% compared with MES-operated linear production lines, while disturbances are systematically avoided for critical stations such as the milling machine tool.
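The dispatching idea behind the first objective (minimizing station waiting time) can be sketched as a central agent choosing, for each transport request, the shuttle with the earliest possible arrival. This is a simplified illustration with an assumed one-dimensional travel-time model, not the authors' actual path-planning implementation:

```python
# Illustrative sketch (not the authors' implementation): a central agent
# assigns transport tasks to shuttles greedily, picking the shuttle that
# can reach the requesting station soonest, to minimize station wait time.
from dataclasses import dataclass

@dataclass
class Shuttle:
    name: str
    free_at: float   # time the shuttle finishes its current job
    position: int    # cell index on the shelf rails (assumed linearized)

def travel_time(a: int, b: int, per_cell: float = 1.0) -> float:
    """Assumed rail travel model: time proportional to cell distance."""
    return abs(a - b) * per_cell

def dispatch(shuttles, station_cell: int, request_time: float) -> Shuttle:
    """Central agent: choose and commit the shuttle with the earliest arrival."""
    best = min(
        shuttles,
        key=lambda s: max(s.free_at, request_time) + travel_time(s.position, station_cell),
    )
    best.free_at = max(best.free_at, request_time) + travel_time(best.position, station_cell)
    best.position = station_cell
    return best

shuttles = [Shuttle("S1", free_at=0.0, position=0), Shuttle("S2", free_at=2.0, position=4)]
# S1 could start at t=1.0 but needs 5 cells (arrives t=6.0);
# S2 is busy until t=2.0 but needs only 1 cell (arrives t=3.0).
chosen = dispatch(shuttles, station_cell=5, request_time=1.0)
print(chosen.name)
```

A real scheduler would additionally encode the second objective, e.g. by penalizing routes that pass vibration-sensitive cells such as the milling machine tool, but the greedy arrival-time criterion already captures the waiting-time trade-off described above.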

Keywords: multi-agent systems, micro-manufacturing, flexible manufacturing, transfer systems

Procedia PDF Downloads 112