Search results for: modern pattern cutting technique
317 Knowledge Creation Environment in the Iranian Universities: A Case Study
Authors: Mahdi Shaghaghi, Amir Ghaebi, Fariba Ahmadi
Abstract:
Purpose: The main purpose of the present research is to analyze the knowledge creation environment at an Iranian university, Alzahra University, as a typical university in Iran, using a combination of the i-System and Ba models. Such an analysis is necessary for understanding the determinants of knowledge creation at the university. Methodology: To carry out the present research, which is an applied study in terms of purpose, a descriptive survey method was used. A combination of the i-System and Ba models was used to analyze the knowledge creation environment at Alzahra University. The i-System consists of five constructs: intervention (input), intelligence (process), involvement (process), imagination (process), and integration (output). The Ba environment has three pillars, namely the infrastructure, the agent, and the information. The integration of these two models resulted in 11 constructs, as follows: intervention (input); infrastructure-intelligence, agent-intelligence, information-intelligence (process); infrastructure-involvement, agent-involvement, information-involvement (process); infrastructure-imagination, agent-imagination, information-imagination (process); and integration (output). These 11 constructs were incorporated into a 52-statement questionnaire, and the validity and reliability of the questionnaire were examined and confirmed. The statistical population comprised the faculty members of Alzahra University (344 people), from which 181 participants were selected through stratified random sampling. Descriptive statistics, the binomial test, regression analysis, and structural equation modeling (SEM) were used to analyze the data. Findings: The findings indicated that, among the 11 research constructs, the levels of the intervention, information-intelligence, infrastructure-involvement, and agent-imagination constructs were average and not acceptable. The levels of the infrastructure-intelligence and information-imagination constructs ranged from average to low. The levels of the agent-intelligence and information-involvement constructs were completely average. The level of the infrastructure-imagination construct was average to high and thus considered acceptable. The levels of the agent-involvement and integration constructs were above average and in a highly acceptable condition. Furthermore, the regression analysis indicated that only two constructs, viz. information-imagination and agent-involvement, correlate positively and significantly with the integration construct. The structural equation modeling also revealed that the intervention, intelligence, and involvement constructs are related to the integration construct with the complete mediation of imagination. Discussion and conclusion: The present research suggests that knowledge creation at Alzahra University complies relatively well with the combination of the i-System and Ba models.
Contrary to the model, however, the intervention, intelligence, and involvement constructs are not directly related to the integration construct, which seems to have three implications: 1) information sources are not frequently used to assess and identify research biases; 2) problem finding is probably of less concern at the end of studies and at the time of assessment and validation; 3) the involvement of others plays a smaller role in the summarization, assessment, and validation of the research.
Keywords: i-System, Ba model, knowledge creation, knowledge management, knowledge creation environment, Iranian universities
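As an illustration of the fully mediated structural model reported above (intervention, intelligence and involvement related to integration only through imagination), a minimal sketch using the open-source semopy package is given below; the construct-score file and column names are hypothetical placeholders, not the authors' data or code.

```python
import pandas as pd
import semopy

# Path model: imagination fully mediates the effect of intervention,
# intelligence and involvement on integration.
model_desc = """
imagination ~ intervention + intelligence + involvement
integration ~ imagination
"""

# Hypothetical table with one row per respondent and one column per construct score
data = pd.read_csv("construct_scores.csv")

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())           # path coefficients and p-values
print(semopy.calc_stats(model))  # fit indices (chi-square, CFI, RMSEA, ...)
```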
316 Sattriya: Its Transformation as a Principal Medium of Preaching Vaishnava Religion to Performing Art
Authors: Smita Lahkar
Abstract:
Sattriya, the youngest of the eight principal Classical Indian dance traditions, has undergone too many changes and modifications to arrive at its present stage of performing art form extracting itself from age-old religious confinement. Although some of the other traditions have been revived in the recent past, Sattriya has a living tradition since its inception in the 15th century by Srimanta Sankardeva, the great Vaishnavite saint, poet, playwright, lyricist, painter, singer and dancer of Assam, a primary north-eastern state of India. This living dance tradition from the Sattras, the Vaishnavite monasteries, has been practiced for over five hundred years by celibate male monks, as a powerful medium for propagating the Vaishnava religious faith. Sankardeva realised the potential of the vocalised word integrated with the visual image as a powerful medium of expression and communication. So he used this principal medium for propagating his newly found message of devotion among the people of his time. Earlier, Sattriya was performed by male monks alone in monasteries (Sattras) as a part of daily rituals. The females were not even allowed to learn this art form. But, in present time, Sattriya has come out from the Sattras to proscenium stage, performed mostly by female as well as few male dancers also. The technique of performing movements, costumes, ornaments, music and style of performance too have experienced too many changes and modifications. For example, earlier and even today in Sattra, the ‘Pataka’ hand gesture is depicted in conformity with the original context (religious) of creation of the dance form. But, today stage-performers prefer the instructions of the scripture ‘Srihastamuktavali’ and depict the ‘Pataka’ in a sophisticated manner affecting decontextualisation to a certain extent. This adds aesthetic beauty to the dance form as an art distancing it from its context of being a vehicle for propagating Vaishnava religion. The Sattriya dance today stands at the crossroads of past and future, tradition and modernity, devotion and display, spirituality and secularism. The traditional exponents trained under the tutelage of Sattra maestros and imbibing a devotionally inspired rigour of the religion, try to retain the traditional nuances; while the young artists being trained outside the monasteries are more interested in taking up the discipline purely from the perspective of ‘performing arts’ bereft of the philosophy of religion or its sacred associations. Hence, this paper will be an endeavor to establish the hypothesis that the Sattriya, whose origin was for propagating Vaishnava faith, has now entered the world of performing arts with highly aesthetical components. And as a transformed art form, Sattriya may be expected to carve a niche in world dance arena. This will be done with the help of historical evidences, observations from the recorded past and expert rendezvous.Keywords: dance, performing art, religion, Sattriya
315 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification
Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti
Abstract:
Rivers have traditionally been classified, assessed and managed in terms of hydrological, chemical and/or biological criteria. In the past, geomorphological classifications played a secondary role, although proposals like the River Styles Framework, Catchment Baseline Survey or the Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system, and understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve its ecological function (hydrologic, sedimentological and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Besides, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is peculiar to the river studied. The variables used in the classification are specific stream power and mean grain size; a discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness, each site being a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will therefore be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins. In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool underlying the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.), and an alteration of this geometry is indicative of a geomorphological disturbance, whether natural or anthropogenic. Hydrographic mapping is also dynamic, because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually; this procedure allows the geomorphological quality of the rivers to be monitored and any alterations to be detected. The maps are useful to researchers and managers, especially for conservation work and river restoration.
Keywords: fluvial auto-classification concept, mapping, geomorphology, river
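As an illustration of the discriminant-analysis step described above, in which each geomorphological type receives its own discriminant equation based on specific stream power and mean grain size, the following Python sketch may help; the control-point values and type labels are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic control points: [specific stream power (W/m^2), mean grain size (mm)]
X = np.vstack([
    rng.normal([15, 2],    [5, 0.5],  (40, 2)),   # e.g. low-energy, fine-bed type
    rng.normal([80, 30],   [20, 8],   (40, 2)),   # e.g. gravel-bed type
    rng.normal([300, 120], [60, 25],  (40, 2)),   # e.g. high-energy, coarse-bed type
])
y = np.repeat(["type_A", "type_B", "type_C"], 40)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Each class gets its own decision function, analogous to an individual
# discriminant equation per geomorphological type.
print("coefficients per type:\n", lda.coef_)
print("intercepts per type:", lda.intercept_)

# Classify a new site from its stream power and grain size
new_site = np.array([[120.0, 45.0]])
print("predicted type:", lda.predict(new_site)[0])
```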
314 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data
Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour
Abstract:
Background: Accelerometers are widely used to measure physical activity behavior, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. As these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study without relying on parameters derived from external populations offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), where this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach (a hidden semi-Markov model - HSMM) to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided us with time estimates for each child’s sedentary (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children’s physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT). Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active long duration states (LAS, comparable to LPA), and two short-duration high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA and HIS overlapped with 60% of MVPA. We also looked at the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R²HIS =0.5, R²MVPA= 0.28), cognitive ability (R²HIS =0.31, R²MVPA= 0.15), and age (R²HIS =0.15, R²MVPA= 0.09), indicating increased sensitivity to key attributes associated with a child’s mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behavior in diverse populations, compared to the current cut points approach. This, in turn, supports research that is more inclusive across diverse populations.Keywords: physical activity, machine learning, under 5s, disability, accelerometer
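The hmmlearn library provides plain hidden Markov models rather than the hidden semi-Markov model (HSMM) used in the study, so the sketch below is only a simplified stand-in for the idea of letting up to six intensity states emerge from the acceleration signal without external cut points; the signal is synthetic and the state count follows the abstract.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)

# Synthetic 1-D acceleration magnitudes for one child, built as bouts of
# low-, medium- and high-intensity movement (placeholder for real epochs).
bouts = []
for _ in range(60):
    kind = rng.choice(["low", "mid", "high"], p=[0.5, 0.35, 0.15])
    length = rng.integers(20, 120)
    level = {"low": 5.0, "mid": 40.0, "high": 150.0}[kind]
    bouts.append(rng.gamma(2.0, level / 2.0, size=length))
X = np.concatenate(bouts).reshape(-1, 1)

# Unsupervised model with a maximum of six intensity states, as in the study
model = GaussianHMM(n_components=6, covariance_type="diag", n_iter=200, random_state=0)
model.fit(X)
states = model.predict(X)

# Time spent in each emergent state, ordered from least to most intense
order = np.argsort(model.means_.ravel())
for rank, s in enumerate(order):
    print(f"state {rank}: mean accel = {model.means_[s, 0]:7.1f}, epochs = {np.sum(states == s)}")
```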
313 Testing Depression in Awareness Space: A Proposal to Evaluate Whether a Psychotherapeutic Method Based on Spatial Cognition and Imagination Therapy Cures Moderate Depression
Authors: Lucas Derks, Christine Beenhakker, Michiel Brandt, Gert Arts, Ruud van Langeveld
Abstract:
Background: The method Depression in Awareness Space (DAS) is a psychotherapeutic intervention technique based on the principles of spatial cognition and imagination therapy with spatial components. The basic assumptions are: mental space is the primary organizing principle in the mind, and all psychological issues can be treated by first locating and by next relocating the conceptualizations involved. The most clinical experience was gathered over the last 20 years in the area of social issues (with the social panorama model). The latter work led to the conclusion that a mental object (image) gains emotional impact when it is placed more central, closer and higher in the visual field – and vice versa. Changing the locations of mental objects in space thus alters the (socio-) emotional meaning of the relationships. The experience of depression seems always associated with darkness. Psychologists tend to see the link between depression and darkness as a metaphor. However, clinical practice hints to the existence of more literal forms of darkness. Aims: The aim of the method Depression in Awareness Space is to reduce the distress of clients with depression in the clinical counseling practice, as a reliable alternative method of psychological therapy for the treatment of depression. The method Depression in Awareness Space aims at making dark areas smaller, lighter and more transparent in order to identify the problem or the cause of the depression which lies behind the darkness. It was hypothesized that the darkness is a subjective side-effect of the neurological process of repression. After reducing the dark clouds the real problem behind the depression becomes more visible, allowing the client to work on it and in that way reduce their feelings of depression. This makes repression of the issue obsolete. Results: Clients could easily get into their 'sadness' when asked to do so and finding the location of the dark zones proved pretty easy as well. In a recent pilot study with five participants with mild depressive symptoms (measured on two different scales and tested against an untreated control group with similar symptoms), the first results were also very promising. If the mental spatial approach to depression can be proven to be really effective, this would be very good news. The Society of Mental Space Psychology is now looking for sponsoring of an up scaled experiment. Conclusions: For spatial cognition and the research into spatial psychological phenomena, the discovery of dark areas can be a step forward. Beside out of pure scientific interest, it is great to know that this discovery has a clinical implication: when darkness can be connected to depression. Also, darkness seems to be more than metaphorical expression. Progress can be monitored over measurement tools that quantify the level of depressive symptoms and by reviewing the areas of darkness.Keywords: depression, spatial cognition, spatial imagery, social panorama
312 Optical and Structural Characterization of Rare Earth Doped Phosphate Glasses
Authors: Zélia Maria Da Costa Ludwig, Maria José Valenzuela Bell, Geraldo Henriques Da Silva, Thales Alves Faraco, Victor Rocha Da Silva, Daniel Rotmeister Teixeira, Vírgilio De Carvalho Dos Anjos, Valdemir Ludwig
Abstract:
Advances in telecommunications grow with the development of optical amplifiers based on rare earth ions. The focus has been concentrated in silicate glasses although their amplified spontaneous emission is limited to a few tens of nanometers (~ 40nm). Recently, phosphate glasses have received great attention due to their potential application in optical data transmission, detection, sensors and laser detector, waveguide and optical fibers, besides its excellent physical properties such as high thermal expansion coefficients and low melting temperature. Compared with the silica glasses, phosphate glasses provide different optical properties such as, large transmission window of infrared, and good density. Research on the improvement of physical and chemical durability of phosphate glass by addition of heavy metals oxides in P2O5 has been performed. The addition of Na2O further improves the solubility of rare earths, while increasing the Al2O3 links in the P2O5 tetrahedral results in increased durability and aqueous transition temperature and a decrease of the coefficient of thermal expansion. This work describes the structural and spectroscopic characterization of a phosphate glass matrix doped with different Er (Erbium) concentrations. The phosphate glasses containing Er3+ ions have been prepared by melt technique. A study of the optical absorption, luminescence and lifetime was conducted in order to characterize the infrared emission of Er3+ ions at 1540 nm, due to the radiative transition 4I13/2 → 4I15/2. Our results indicate that the present glass is a quite good matrix for Er3+ ions, and the quantum efficiency of the 1540 nm emission was high. A quenching mechanism for the mentioned luminescence was not observed up to 2,0 mol% of Er concentration. The Judd-Ofelt parameters, radiative lifetime and quantum efficiency have been determined in order to evaluate the potential of Er3+ ions in new phosphate glass. The parameters follow the trend as Ω2 > Ω4 > Ω6. It is well known that the parameter Ω2 is an indication of the dominant covalent nature and/or structural changes in the vicinity of the ion (short range effects), while Ω4 and Ω6 intensity parameters are long range parameters that can be related to the bulk properties such as viscosity and rigidity of the glass. From the PL measurements, no red or green upconversion was measured when pumping the samples with laser excitation at 980 nm. As future prospects: Synthesize this glass system with silver in order to determine the influence of silver nanoparticles on the Er3+ ions.Keywords: phosphate glass, erbium, luminescence, glass system
311 Analyzing Concrete Structures by Using Laser Induced Breakdown Spectroscopy
Authors: Nina Sankat, Gerd Wilsch, Cassian Gottlieb, Steven Millar, Tobias Guenther
Abstract:
Laser-Induced Breakdown Spectroscopy (LIBS) is a combination of laser ablation and optical emission spectroscopy, which in principle can simultaneously analyze all elements on the periodic table. Materials can be analyzed in terms of chemical composition in a two-dimensional, time efficient and minor destructive manner. These advantages predestine LIBS as a monitoring technique in the field of civil engineering. The decreasing service life of concrete infrastructures is a continuously growing problematic. A variety of intruding, harmful substances can damage the reinforcement or the concrete itself. To insure a sufficient service life a regular monitoring of the structure is necessary. LIBS offers many applications to accomplish a successful examination of the conditions of concrete structures. A selection of those applications are the 2D-evaluation of chlorine-, sodium- and sulfur-concentration, the identification of carbonation depths and the representation of the heterogeneity of concrete. LIBS obtains this information by using a pulsed laser with a short pulse length (some mJ), which is focused on the surfaces of the analyzed specimen, for this only an optical access is needed. Because of the high power density (some GW/cm²) a minimal amount of material is vaporized and transformed into a plasma. This plasma emits light depending on the chemical composition of the vaporized material. By analyzing the emitted light, information for every measurement point is gained. The chemical composition of the scanned area is visualized in a 2D-map with spatial resolutions up to 0.1 mm x 0.1 mm. Those 2D-maps can be converted into classic depth profiles, as typically seen for the results of chloride concentration provided by chemical analysis like potentiometric titration. However, the 2D-visualization offers many advantages like illustrating chlorine carrying cracks, direct imaging of the carbonation depth and in general allowing the separation of the aggregates from the cement paste. By calibrating the LIBS-System, not only qualitative but quantitative results can be obtained. Those quantitative results can also be based on the cement paste, while excluding the aggregates. An additional advantage of LIBS is its mobility. By using the mobile system, located at BAM, onsite measurements are feasible. The mobile LIBS-system was already used to obtain chloride, sodium and sulfur concentrations onsite of parking decks, bridges and sewage treatment plants even under hard conditions like ongoing construction work or rough weather. All those prospects make LIBS a promising method to secure the integrity of infrastructures in a sustainable manner.Keywords: concrete, damage assessment, harmful substances, LIBS
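Converting a 2D LIBS element map into a classic depth profile, with aggregate pixels masked so the profile reflects the cement paste only, can be sketched as follows; the map values, pixel spacing and paste mask are synthetic placeholders, not measured data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2D chloride map from a LIBS scan: rows = depth steps (0.1 mm each),
# columns = lateral positions (0.1 mm each); values are arbitrary concentration units.
depth_steps, lateral_steps = 300, 200                     # 30 mm x 20 mm scanned area
depth_mm = np.arange(depth_steps) * 0.1
true_profile = 1.2 * np.exp(-depth_mm / 8.0)              # chloride ingress decaying with depth
cl_map = true_profile[:, None] + rng.normal(0, 0.05, (depth_steps, lateral_steps))

# Placeholder mask marking cement-paste pixels (in practice derived from the spectra,
# e.g. by excluding aggregate pixels), so aggregates do not dilute the profile.
paste_mask = rng.random((depth_steps, lateral_steps)) > 0.3

masked = np.where(paste_mask, cl_map, np.nan)
depth_profile = np.nanmean(masked, axis=1)                # classic depth profile from the 2D map

for d, c in zip(depth_mm[::50], depth_profile[::50]):
    print(f"depth {d:4.1f} mm: Cl ~ {c:.2f}")
```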
310 Bioresorbable Medicament-Eluting Grommet Tube for Otitis Media with Effusion
Authors: Chee Wee Gan, Anthony Herr Cheun Ng, Yee Shan Wong, Subbu Venkatraman, Lynne Hsueh Yee Lim
Abstract:
Otitis media with effusion (OME) is the leading cause of hearing loss in children worldwide. Surgery to insert grommet tube into the eardrum is usually indicated for OME unresponsive to antimicrobial therapy. It is the most common surgery for children. However, current commercially available grommet tubes are non-bioresorbable, not drug-treated, with unpredictable duration of retention on the eardrum to ventilate middle ear. Their functionality is impaired when clogged or chronically infected, requiring additional surgery to remove/reinsert grommet tubes. We envisaged that a novel fully bioresorbable grommet tube with sustained antibiotic release technology could address these drawbacks. In this study, drug-loaded bioresorbable poly(L-lactide-co-ε-caprolactone)(PLC) copolymer grommet tubes were fabricated by microinjection moulding technique. In vitro drug release and degradation model of PLC tubes were studied. Antibacterial property was evaluated by incubating PLC tubes with P. aeruginosa broth. Surface morphology was analyzed using scanning electron microscopy. A preliminary animal study was conducted using guinea pigs as an in vivo model to evaluate PLC tubes with and without drug, with commercial Mini Shah grommet tube as comparison. Our in vitro data showed sustained drug release over 3 months. All PLC tubes revealed exponential degradation profiles over time. Modeling predicted loss of tube functionality in water to be approximately 14 weeks and 17 weeks for PLC with and without drug, respectively. Generally, PLC tubes had less bacteria adherence, which were attributed to the much smoother tube surfaces compared to Mini Shah. Antibiotic from PLC tube further made bacteria adherence on surface negligible. They showed neither inflammation nor otorrhea after 18 weeks post-insertion in the eardrums of guinea pigs, but had demonstrated severe degree of bioresorption. Histology confirmed the new PLC tubes were biocompatible. Analyses on the PLC tubes in the eardrums showed bioresorption profiles close to our in vitro degradation models. The bioresorbable antibiotic-loaded grommet tubes showed good predictability in functionality. The smooth surface and sustained release technology reduced the risk of tube infection. Tube functional duration of 18 weeks allowed sufficient ventilation period to treat OME. Our ongoing studies include modifying the surface properties with protein coating, optimizing the drug dosage in the tubes to enhance their performances, evaluating their functional outcome on hearing after full resoption of grommet tube and healing of eardrums, and developing animal model with OME to further validate our in vitro models.Keywords: bioresorbable polymer, drug release, grommet tube, guinea pigs, otitis media with effusion
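The kind of exponential degradation model used above to predict loss of tube functionality could be fitted as sketched below; the mass-loss data and the 50% functional threshold are assumptions for illustration, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical in vitro data: remaining tube mass fraction over time (weeks)
weeks = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16], dtype=float)
mass_fraction = np.array([1.00, 0.93, 0.85, 0.78, 0.70, 0.64, 0.57, 0.52, 0.47])

def exp_decay(t, k):
    """Simple first-order degradation model."""
    return np.exp(-k * t)

(k,), _ = curve_fit(exp_decay, weeks, mass_fraction, p0=[0.05])

# Predict when the tube drops below an assumed functional threshold (50% of initial mass)
threshold = 0.5
t_loss = np.log(1 / threshold) / k
print(f"decay constant k = {k:.3f} per week")
print(f"predicted loss of functionality after ~{t_loss:.1f} weeks")
```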
309 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius, volume, surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies in which light-absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieve microphysical parameters and allows us to categorize aerosols into high and low absorbing aerosols. From mathematical point of view the algorithm is based on the concept of using truncated singular value decomposition as regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called hybrid regularization technique since it is using a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will be amplified most often hugely during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult since appropriate regularization parameters have to be determined. Therefore, in a next stage of our work we decided to use two regularization techniques in parallel for comparison purpose. The second method is an iterative regularization method based on Pade iteration. Here, the number of iteration steps serves as the regularization parameter. We successfully developed a semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as selection criteria for an appropriate regularization method) to investigate the degree of ill-posedness of the problem which we found is a moderate ill-posedness. We computed the optical data from mono-modal logarithmic PSD and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm which does not only cover the size range of particles in the fine-mode fraction of naturally occurring PSD but also covers a part of the coarse-mode fraction of PSD. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error for non- and weak-absorbing particles with real parts 1.5 and 1.6 in all modes the accuracy limit +/- 0.03 is achieved. In sum, 70% of all cases stay below +/-0.03 which is sufficient for climate change studies.Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
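The truncated singular value decomposition (TSVD) regularization named above can be sketched on a synthetic discrete ill-posed problem as follows; the kernel, grid and noise level are placeholders standing in for the lidar forward model, and the truncation index k plays the role of the regularization parameter.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ill-posed problem A x = b: a smooth kernel maps a mono-modal
# "particle size distribution" x_true to noisy "optical data" b_noisy.
n = 60
grid = np.linspace(0.05, 6.0, n)                                  # radius grid (arbitrary units)
A = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / 0.5)         # smooth, ill-conditioned kernel
x_true = np.exp(-((grid - 2.0) ** 2) / 0.3)                       # mono-modal PSD stand-in
b_noisy = (A @ x_true) * (1 + 0.15 * rng.standard_normal(n))      # 15% errors, as in the study

U, s, Vt = np.linalg.svd(A)

def tsvd_solve(k):
    """Keep only the k largest singular values; k acts as the regularization parameter."""
    return Vt[:k].T @ (np.diag(1.0 / s[:k]) @ (U[:, :k].T @ b_noisy))

# Small k over-smooths, large k amplifies noise; k is chosen e.g. by a
# manually controlled sensitivity study, as described for the lidar inversion.
for k in (3, 8, 20, 60):
    err = np.linalg.norm(tsvd_solve(k) - x_true) / np.linalg.norm(x_true)
    print(f"k = {k:2d}: relative reconstruction error = {err:.2f}")
```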
308 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm
Abstract:
Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly-used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally-occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used to exhibit an ordering form, we propose a method based on ordinal decision-trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension
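The exact WIGR formula is not given in the abstract, so the sketch below uses one plausible severity-weighted split criterion in the same spirit: node impurity is the expected ordinal distance between two randomly drawn labels, and the gain is normalised by the split information as in a gain ratio. It is an illustrative stand-in, not the authors' measure.

```python
import numpy as np

def ordinal_impurity(labels):
    """Expected |rank_i - rank_j| between two random samples: larger when the node
    mixes classes that are far apart on the ordinal scale (error severity).
    Assumes labels are numeric ordinal codes (0 < 1 < 2 < ...)."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    ranks = classes.astype(float)
    return float(np.sum(p[:, None] * p[None, :] * np.abs(ranks[:, None] - ranks[None, :])))

def weighted_gain_ratio(parent, children):
    """Severity-weighted, gain-ratio-style score for a candidate split."""
    n = len(parent)
    weights = [len(c) / n for c in children]
    gain = ordinal_impurity(parent) - sum(w * ordinal_impurity(c) for w, c in zip(weights, children))
    split_info = -sum(w * np.log2(w) for w in weights if w > 0)
    return gain / split_info if split_info > 0 else 0.0

# Toy example: exam-performance classes 0 < 1 < 2 (e.g. low/medium/high grade-and-time score)
parent = [0, 0, 1, 1, 2, 2, 2, 2]
split_a = ([0, 0, 1, 1], [2, 2, 2, 2])   # separates distant classes, scores higher
split_b = ([0, 1, 2, 2], [0, 1, 2, 2])   # uninformative split, scores zero
print(weighted_gain_ratio(parent, split_a))
print(weighted_gain_ratio(parent, split_b))
```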
307 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based generalized end-to-end open domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question with the aim to find an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's questions. Analysis of the question, searching the relevant document, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K-documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking process using the MS-Marco dataset trained on 500K queries to extract the most relevant text passage, to shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Question (GNQ) dataset which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to any questions that have time-varying answers. For illustration, if the query is where will be the next Olympics? Gold Answer for the above query as given in the GNQ dataset is “Tokyo”. Since the dataset was collected in the year 2016, and the next Olympics after 2016 were in 2020 that was in Tokyo which is absolutely correct. But if the same question is asked in 2022 then the answer is “Paris, 2024”. Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with top-n predicted answers from a given QA system. To test the proposed approach GNQ dataset has been used and the system achieved an accuracy of 78% for a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be guided towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
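The idea of scoring predictions against the gold answer that is valid at the current timestamp, using the top-n answers, might look like the sketch below; the record structure, validity windows and matching rule are assumptions for illustration, not the authors' implementation.

```python
from datetime import date

# Hypothetical gold record for a time-varying question: each answer is tagged
# with the period during which it is correct.
gold = {
    "question": "where will be the next olympics",
    "answers": [
        {"text": "tokyo", "valid_from": date(2016, 1, 1), "valid_to": date(2021, 8, 8)},
        {"text": "paris", "valid_from": date(2021, 8, 9), "valid_to": date(2024, 8, 11)},
    ],
}

def time_aware_match(predictions, gold, today=None, n=3):
    """Return True if any of the top-n predictions matches the gold answer
    that is valid at the current timestamp."""
    today = today or date.today()
    valid = [a["text"] for a in gold["answers"] if a["valid_from"] <= today <= a["valid_to"]]
    top_n = [p.lower().strip() for p in predictions[:n]]
    return any(any(v in p or p in v for v in valid) for p in top_n)

# Example: a system queried in mid-2022 returning these top-3 answers
preds = ["Paris, 2024", "Los Angeles", "Tokyo, Japan"]
print(time_aware_match(preds, gold, today=date(2022, 6, 1)))   # True: "paris" is valid in 2022
print(time_aware_match(preds, gold, today=date(2025, 1, 1)))   # False: no gold answer covers 2025
```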
306 Enhancing Efficiency of Building through Translucent Concrete
Authors: Humaira Athar, Brajeshwar Singh
Abstract:
Generally, the brightness of the indoor environment of buildings is entirely maintained by the artificial lighting which has consumed a large amount of resources. It is reported that lighting consumes about 19% of the total generated electricity which accounts for about 30-40% of total energy consumption. One possible way is to reduce the lighting energy by exploiting sunlight either through the use of suitable devices or energy efficient materials like translucent concrete. Translucent concrete is one such architectural concrete which allows the passage of natural light as well as artificial light through it. Several attempts have been made on different aspects of translucent concrete such as light guiding materials (glass fibers, plastic fibers, cylinder etc.), concrete mix design and manufacturing methods for use as building elements. Concerns are, however, raised on various related issues such as poor compatibility between the optical fibers and cement paste, unaesthetic appearance due to disturbance occurred in the arrangement of fibers during vibration and high shrinkage in flowable concrete due to its high water/cement ratio. Need is felt to develop translucent concrete to meet the requirement of structural safety as OPC concrete with the maximized saving in energy towards the power of illumination and thermal load in buildings. Translucent concrete was produced using pre-treated plastic optical fibers (POF, 2mm dia.) and high slump white concrete. The concrete mix was proportioned in the ratio of 1:1.9:2.1 with a w/c ratio of 0.40. The POF was varied from 0.8-9 vol.%. The mechanical properties and light transmission of this concrete were determined. Thermal conductivity of samples was measured by a transient plate source technique. Daylight illumination was measured by a lux grid method as per BIS:SP-41. It was found that the compressive strength of translucent concrete increased with decreasing optical fiber content. An increase of ~28% in the compressive strength of concrete was noticed when fiber was pre-treated. FE-SEM images showed little-debonded zone between the fibers and cement paste which was well supported with pull-out bond strength test results (~187% improvement over untreated). The light transmission of concrete was in the range of 3-7% depending on fiber spacing (5-20 mm). The average daylight illuminance (~75 lux) was nearly equivalent to the criteria specified for illumination for circulation (80 lux). The thermal conductivity of translucent concrete was reduced by 28-40% with respect to plain concrete. The thermal load calculated by heat conduction equation was ~16% more than the plain concrete. Based on Design-Builder software, the total annual illumination energy load of a room using one side translucent concrete was 162.36 kW compared with the energy load of 249.75 kW for a room without concrete. The calculated energy saving on an account of the power of illumination was ~25%. A marginal improvement towards thermal comfort was also noticed. It is concluded that the translucent concrete has the advantages of the existing concrete (load bearing) with translucency and insulation characteristics. It saves a significant amount of energy by providing natural daylight instead of artificial power consumption of illumination.Keywords: energy saving, light transmission, microstructure, plastic optical fibers, translucent concrete
305 Cut-Off of CMV Cobas® Taqman® (CAP/CTM Roche®) for Introduction of Ganciclovir Pre-Emptive Therapy in Allogeneic Hematopoietic Stem Cell Transplant Recipients
Authors: B. B. S. Pereira, M. O. Souza, L. P. Zanetti, L. C. S. Oliveira, J. R. P. Moreno, M. P. Souza, V. R. Colturato, C. M. Machado
Abstract:
Background: The introduction of prophylactic or preemptive therapies has effectively decreased the CMV mortality rates after hematopoietic stem cell transplantation (HSCT). CMV antigenemia (pp65) or quantitative PCR are methods currently approved for CMV surveillance in pre-emptive strategies. Commercial assays are preferred as cut-off levels defined by in-house assays may vary among different protocols and in general show low reproducibility. Moreover, comparison of published data among different centers is only possible if international standards of quantification are included in the assays. Recently, the World Health Organization (WHO) established the first international standard for CMV detection. The real time PCR COBAS Ampliprep/ CobasTaqMan (CAP/CTM) (Roche®) was developed using the WHO standard for CMV quantification. However, the cut-off for the introduction of antiviral has not been determined yet. Methods: We conducted a retrospective study to determine: 1) the sensitivity and specificity of the new CMV CAP/CTM test in comparison with pp65 antigenemia to detect episodes of CMV infection/reactivation, and 2) the cut-off of viral load for introduction of ganciclovir (GCV). Pp65 antigenemia was performed and the corresponding plasma samples were stored at -20°C for further CMV detection by CAP/CTM. Comparison of tests was performed by kappa index. The appearance of positive antigenemia was considered the state variable to determine the cut-off of CMV viral load by ROC curve. Statistical analysis was performed using SPSS software version 19 (SPSS, Chicago, IL, USA.). Results: Thirty-eight patients were included and followed from August 2014 through May 2015. The antigenemia test detected 53 episodes of CMV infection in 34 patients (89.5%), while CAP/CTM detected 37 episodes in 33 patients (86.8%). AG and PCR results were compared in 431 samples and Kappa index was 30.9%. The median time for first AG detection was 42 (28-140) days, while CAP/CTM detected at a median of 7 days earlier (34 days, ranging from 7 to 110 days). The optimum cut-off value of CMV DNA was 34.25 IU/mL to detect positive antigenemia with 88.2% of sensibility, 100% of specificity and AUC of 0.91. This cut-off value is below the limit of detection and quantification of the equipment which is 56 IU/mL. According to CMV recurrence definition, 16 episodes of CMV recurrence were detected by antigenemia (47.1%) and 4 (12.1%) by CAP/CTM. The duration of viremia as detected by antigenemia was shorter (60.5% of the episodes lasted ≤ 7 days) in comparison to CAP/CTM (57.9% of the episodes lasting 15 days or more). This data suggests that the use of antigenemia to define the duration of GCV therapy might prompt early interruption of antiviral, which may favor CMV reactivation. The CAP/CTM PCR could possibly provide a safer information concerning the duration of GCV therapy. As prolonged treatment may increase the risk of toxicity, this hypothesis should be confirmed in prospective trials. Conclusions: Even though CAP/CTM by ROCHE showed great qualitative correlation with the antigenemia technique, the fully automated CAP/CTM did not demonstrate increased sensitivity. The cut-off value below the limit of detection and quantification may result in delayed introduction of pre-emptive therapy.Keywords: antigenemia, CMV COBAS/TAQMAN, cytomegalovirus, antiviral cut-off
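Determining an optimum viral-load cut-off with positive antigenemia as the state variable is a standard ROC analysis; a sketch with synthetic paired data follows, using the Youden index to pick the threshold. The distributions and resulting numbers are placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(4)

# Hypothetical paired data: CAP/CTM viral load (IU/mL) and whether the
# corresponding antigenemia was positive.
ag_positive = np.concatenate([np.zeros(60, dtype=int), np.ones(40, dtype=int)])
viral_load = np.concatenate([
    rng.lognormal(mean=2.0, sigma=1.0, size=60),   # antigenemia-negative samples
    rng.lognormal(mean=4.5, sigma=1.0, size=40),   # antigenemia-positive samples
])

fpr, tpr, thresholds = roc_curve(ag_positive, viral_load)
youden = tpr - fpr                                 # sensitivity + specificity - 1
best = np.argmax(youden)

print(f"AUC         : {roc_auc_score(ag_positive, viral_load):.2f}")
print(f"optimum cut : {thresholds[best]:.1f} IU/mL")
print(f"sensitivity : {tpr[best]:.2f}, specificity: {1 - fpr[best]:.2f}")
```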
304 Progress Towards Optimizing and Standardizing Fiducial Placement Geometry in Prostate, Renal, and Pancreatic Cancer
Authors: Shiva Naidoo, Kristena Yossef, Grimm Jimm, Mirza Wasique, Eric Kemmerer, Joshua Obuch, Anand Mahadevan
Abstract:
Background: Fiducial markers effectively enhance tumor target visibility prior to Stereotactic Body Radiation Therapy or Proton therapy. To streamline clinical practice, fiducial placement guidelines from a robotic radiosurgery vendor were examined with the goals of optimizing and standardizing feasible geometries for each treatment indication. Clinical examples of prostate, renal, and pancreatic cases are presented. Methods: Vendor guidelines (Accuray, Sunnyvale, Ca) suggest implantation of 4–6 fiducials at least 20 mm apart, with at least a 15-degree angular difference between fiducials, within 50 mm or less from the target centroid, to ensure that any potential fiducial motion (e.g., from respiration or abdominal/pelvic pressures) will mimic target motion. Also recommended is that all fiducials can be seen in 45-degree oblique views with no overlap to coincide with the robotic radiosurgery imaging planes. For the prostate, a standardized geometry that meets all these objectives is a 2 cm-by-2 cm square in the coronal plane. The transperineal implant of two pairs of preloaded tandem fiducials makes the 2 cm-by-2 cm square geometry clinically feasible. This technique may be applied for renal cancer, except repositioned in a sagittal plane, with the retroperitoneal placement of the fiducials into the tumor. Pancreatic fiducial placement via endoscopic ultrasound (EUS) is technically more challenging, as fiducial placement is operator-dependent, and lesion access may be limited by adjacent vasculature, tumor location, or restricted mobility of the EUS probe in the duodenum. Fluoroscopically assisted fiducial placement during EUS can help ensure fiducial markers are deployed with optimal geometry and visualization. Results: Among the first 22 fiducial cases on a newly installed robotic radiosurgery system, live x-ray images for all nine prostatic cases had excellent fiducial visualization at the treatment console. Renal and pancreatic fiducials were not as clearly visible due to difficult target access and smaller caliber insertion needle/fiducial usage. The geometry of the first prostate case was used to ensure accurate geometric marker placement for the remaining 8 cases. Initially, some of the renal and pancreatic fiducials were closer than the 20 mm recommendation, and interactive feedback with the proceduralists led to subsequent fiducials being too far to the edge of the tumor. Further feedback and discussion of all cases are being used to help guide standardized geometries and achieve ideal fiducial placement. Conclusion: The ideal tradeoffs of fiducial visibility versus the thinnest possible gauge needle to avoid complications needs to be systematically optimized among all patients, particularly in regards to body habitus. Multidisciplinary collaboration among proceduralists and radiation oncologists can lead to improved outcomes.Keywords: fiducial, prostate cancer, renal cancer, pancreatic cancer, radiotherapy
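The quoted geometry rules (fiducials at least 20 mm apart, at least 15 degrees of angular separation, and within 50 mm of the target centroid) can be checked programmatically. The sketch below uses one reading of those rules, with angles measured between fiducial directions as seen from the centroid; it is an illustration, not a clinical tool.

```python
import numpy as np
from itertools import combinations

def check_fiducials(points_mm, target_centroid_mm):
    """Check a fiducial set against the vendor-style geometry rules quoted above.
    Returns pass/fail flags for spacing, angular separation and centroid distance."""
    pts = np.asarray(points_mm, dtype=float)
    c = np.asarray(target_centroid_mm, dtype=float)

    pair_dists = [np.linalg.norm(a - b) for a, b in combinations(pts, 2)]

    vecs = pts - c
    angles = []
    for a, b in combinations(vecs, 2):
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

    centroid_dists = np.linalg.norm(vecs, axis=1)

    return {
        "min_pair_distance_ok": min(pair_dists) >= 20.0,
        "min_angle_ok": min(angles) >= 15.0,
        "centroid_distance_ok": bool(np.all(centroid_dists <= 50.0)),
    }

# Example: the 2 cm x 2 cm square arrangement in the coronal plane described above
square = [(-10, 0, -10), (10, 0, -10), (-10, 0, 10), (10, 0, 10)]   # mm, centroid at origin
print(check_fiducials(square, (0, 0, 0)))
```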
303 Crosslinked Porous 3-Dimensional Cellulose Nanofibers/Gelatin Based Biocomposite Aerogels for Tissue Engineering Application
Authors: Ali Mirtaghavi, Andy Baldwin, Rajendarn Muthuraj, Jack Luo
Abstract:
Recent advances in biomaterials have led to utilizing biopolymers to develop 3D scaffolds in tissue regeneration. One of the major challenges of designing biomaterials for 3D scaffolds is to mimic the building blocks similar to the extracellular matrix (ECM) of the native tissues. Biopolymer based aerogels obtained by freeze-drying have shown to provide structural similarities to the ECM owing to their 3D format and a highly porous structure with interconnected pores, similar to the ECM. Gelatin (GEL) is known to be a promising biomaterial with inherent regenerative characteristics owing to its chemical similarities to the ECM in native tissue, biocompatibility abundance, cost-effectiveness and accessible functional groups, which makes it facile for chemical modifications with other biomaterials to form biocomposites. Despite such advantages, gelatin offers poor mechanical properties, sensitive enzymatic degradation and high viscosity at room temperature which limits its application and encourages its use to develop biocomposites. Hydrophilic biomass-based cellulose nanofibrous (CNF) has been explored to use as suspension for biocomposite aerogels for the development of 3D porous structures with excellent mechanical properties, biocompatibility and slow enzymatic degradation. In this work, CNF biocomposite aerogels with various ratios of CNF:GEL) (90:10, 70:30 and 50:50) were prepared by freeze-drying technique, and their properties were investigated in terms of physicochemical, mechanical and biological characteristics. Epichlorohydrin (EPH) was used to investigate the effect of chemical crosslinking on the molecular interaction of CNF: GEL, and its effects on physicochemical, mechanical and biological properties of the biocomposite aerogels. Ultimately, chemical crosslinking helped to improve the mechanical resilience of the resulting aerogels. Amongst all the CNF-GEL composites, the crosslinked CNF: GEL (70:30) biocomposite was found to be favourable for cell attachment and viability. It possessed highly porous structure (porosity of ~93%) with pore sizes ranging from 16-110 µm, adequate mechanical properties (compression modulus of ~47 kPa) and optimal biocompatibility both in-vitro and in-vivo, as well as controlled enzymatic biodegradation, high water penetration, which could be considered a suitable option for wound healing application. In-vivo experiments showed improvement on inflammation and foreign giant body cell reaction for the crosslinked CNF: GEL (70:30) compared to the other samples. This could be due to the superior interaction of CNF with gelatin through chemical crosslinking, resulting in more optimal in-vivo improvement. In-vitro cell culture investigation on human dermal fibroblasts showed satisfactory 3D cell attachment over time. Overall, it has been observed that the developed CNF: GEL aerogel can be considered as a potential scaffold for soft tissue regeneration application.Keywords: 3D scaffolds, aerogels, Biocomposites , tissue engineering
302 Structure Modification of Leonurine to Improve Its Potency as Aphrodisiac
Authors: Ruslin, R. E. Kartasasmita, M. S. Wibowo, S. Ibrahim
Abstract:
An aphrodisiac is a substance, contained in food or a drug, that can arouse the sexual instinct and increase pleasure; such substances are derived from plants, animals, and minerals. Consuming substances with aphrodisiac activity can improve sexual instinct and duration. Leonurine has aphrodisiac activity; it can be isolated from plants of Leonurus sp., known among the Sundanese people as deundereman. This plant has an empirical reputation as an aphrodisiac, and since the active compounds isolated from it include leonurine, this compound is expected to carry the aphrodisiac activity. Leonurine can be isolated from the plant or synthesized chemically, with syringic acid as the starting material. It is also commercially available, and its derivatives can be synthesized in an effort to increase its activity. This study aims to obtain leonurine derivatives with better aphrodisiac activity than the parent compound by modifying the structure at the guanidino butyl ester group with butylamine and bromoethanol. The ArgusLab program, version 4.0.1, was used to determine the binding energies, hydrogen bonds and amino acids involved in the interaction of the compounds with the PDE5 receptor. The in vivo tests of leonurine and its derivatives as aphrodisiac ingredients, together with testosterone hormone levels, used 27 male Wistar rats and 9 female rats of the same strain, aged about 12 weeks and weighing approximately 200 g each. The test animals were divided into 9 groups according to the type of compound and the dose given. Each treatment group was orally administered 2 ml per day for 5 days. On the sixth day, male rat sexual behaviour was observed and blood was taken from the heart to measure testosterone levels using the ELISA technique. Statistical analysis was performed with ANOVA and the Least Significant Difference (LSD) test using the Statistical Product and Service Solutions (SPSS) program. The aphrodisiac efficacy of leonurine and its derivatives was demonstrated both in silico and in vivo: in silico, the leonurine derivatives showed lower binding energies, and hence better predicted activity, than leonurine itself, and the in vivo tests in Wistar rats confirmed that the derivatives performed better than leonurine, showing that the in silico and in vivo results run in parallel. Modification of the guanidino butyl ester group with butylamine and bromoethanol increased the aphrodisiac activity relative to leonurine, and testosterone levels improved significantly for the derivatives, especially the 1-RD compound at doses of 100 and 150 mg/bb. The results showed that leonurine and its derivatives possess aphrodisiac activity and increase the amount of testosterone in the blood. The test compounds used in this study act as steroid precursors, resulting in increased testosterone.
Keywords: aphrodisiac, erectile dysfunction, leonurine, 1-RD, 2-RD
301 Examinations of Sustainable Protection Possibilities against Granary Weevil (Sitophilus granarius L.) on Stored Products
Authors: F. Pal-Fam, R. Hoffmann, S. Keszthelyi
Abstract:
Granary weevil, Sitophilus granarius (L.) (Col.: Curculionidae) is a typical cosmopolitan pest. It can cause significant damage to stored grains, and can drastically decrease yields. Damaged grain has reduced nutritional and market value, weaker germination, and reduced weight. The commonly used protectants against stored-product pests in Europe are residual insecticides, applied directly to the product. Unfortunately, these pesticides can be toxic to mammals, the residues can accumulate in the treated products, and many pest species could become resistant to the protectants. During recent years, alternative solutions of grain protection have received increased attention. These solutions are considered as the most promising alternatives to residual insecticides. The aims of our comparative study were to obtain information about the efficacies of the 1. diatomaceous earth, 2. sterile insect technology and 3. herbal oils against the S. granarius on grain (foremost maize), and to evaluate the influence of the dose rate on weevil mortality and progeny. The main results of our laboratory experiments are the followings: 1. Diatomaceous earth was especially efficacious against S. granarius, but its insecticidal properties depend on exposure time and applied dose. The efficacy on barley was better than on maize. Mortality value of the highest dose was 85% on the 21st day in the case of barley. It can be ascertained that complete elimination of progeny was evidenced on both gain types. To summarize, a satisfactory efficacy level was obtained only on barley at a rate of 4g/kg. Alteration of efficacy between grain types can be explained with differences in grain surface. 2. The mortality consequences of Roentgen irradiation on the S. granarius was highly influenced by the exposure time, and the dose applied. At doses of 50 and 70Gy, the efficacy accepted in plant protection (mortality: 95%) was recorded only on the 21st day. During the application of 100 and 200Gy doses, high mortality values (83.5% and 97.5%) were observed on the 14th day. Our results confirmed the complete sterilizing effect of the doses of 70Gy and above. The autocide effect of 50 and 70Gy doses were demonstrated when irradiated specimens were mixed into groups of fertile specimens. Consequently, these doses might be successfully applied to put sterile insect technique (SIT) into practice. 3. The results revealed that both studied essential oils (Callendula officinalis, Hippophae rhamnoides) exerted strong toxic effect on S. granarius, but C. officinalis triggered higher mortality. The efficacy (94.62 ± 2.63%) was reached after a 48 hours exposure to H. rhamnoides oil at 2ml/kg while the application of 2ml/kg of C. officinalis oil for 24 hours produced 98.94 ± 1.00% mortality rate. Mortality was 100% at 5 ml/kg of H. rhamnoides after 24 hours duration of its application, while with C. officinalis the same value could be reached after a 12 hour-exposure to the oil. Both essential oils applied were eliminated the progeny.Keywords: Sitophilus granarius, stored product, protection, alternative solutions
300 Enhancing Industrial Wastewater Treatment: Efficacy and Optimization of Ultrasound-Assisted Laccase Immobilized on Magnetic Fe₃O₄ Nanoparticles
Authors: K. Verma, V. S. Moholkar
Abstract:
In developed countries, water pollution caused by industrial discharge has emerged as a significant environmental concern over the past decades. However, despite ongoing efforts, a fully effective and sustainable remediation strategy has yet to be identified. This paper describes how enzymatic and sonochemical treatments have demonstrated great promise in degrading bio-refractory pollutants. In particular, a compelling area of interest lies in the combined technique of sono-enzymatic treatment, which has exhibited a synergistic enhancement effect surpassing that of the individual techniques. This study employed the covalent attachment method to immobilize laccase from Trametes versicolor onto amino-functionalized magnetic Fe₃O₄ nanoparticles. To comprehensively characterize the synthesized free nanoparticles and the laccase-immobilized nanoparticles, various techniques such as X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), vibrating sample magnetometry (VSM), and Brunauer-Emmett-Teller (BET) surface area analysis were employed. The size of the immobilized Fe₃O₄@Laccase was found to be 60 nm, and the maximum loading of laccase was found to be 24 mg/g of nanoparticle. An investigation was conducted to study the effect of various process parameters, such as immobilized Fe₃O₄@Laccase dose, temperature, and pH, on the % chemical oxygen demand (COD) removal as the response. The statistical design pinpointed the optimum conditions (immobilized Fe₃O₄@Laccase dose = 1.46 g/L, pH = 4.5, and temperature = 66 °C), resulting in a remarkable 65.58% COD removal within 60 minutes. An even more significant improvement (90.31% COD removal) was achieved with the ultrasound-assisted enzymatic reaction utilizing a 10% duty cycle. The investigation of various kinetic models for free and immobilized laccase, such as the Haldane, Yano and Koga, and Michaelis-Menten models, showed that ultrasound application impacted the kinetic parameters Vmax and Km. Specifically, Vmax values for free and immobilized laccase were found to be 0.021 mg/L min and 0.045 mg/L min, respectively, while Km values were 147.2 mg/L for free laccase and 136.46 mg/L for immobilized laccase. The lower Km and higher Vmax for immobilized laccase indicate its enhanced affinity towards the substrate, likely due to ultrasound-induced alterations in the enzyme's conformation and increased exposure of active sites, leading to more efficient degradation. Furthermore, toxicity and liquid chromatography-mass spectrometry (LC-MS) analyses revealed that after the treatment process, the wastewater exhibited 70% less toxicity than before treatment, with over 25 compounds degraded by more than 75%. Finally, the prepared immobilized laccase had excellent recyclability, retaining 70% of its activity over 6 consecutive cycles. A straightforward manufacturing strategy and outstanding performance make the recyclable magnetic immobilized laccase (Fe₃O₄@Laccase) a promising option for various environmental applications, particularly in water pollution control and treatment.Keywords: kinetic, laccase enzyme, sonoenzymatic, ultrasound irradiation
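The reported kinetic parameters lend themselves to a small worked example. The sketch below simply evaluates the Michaelis-Menten rate law, v = Vmax·S/(Km + S), with the Vmax and Km values quoted above; the substrate (COD) concentration used is an arbitrary illustrative value, not one taken from the study.

```python
# Minimal sketch: Michaelis-Menten rate v = Vmax * S / (Km + S) evaluated with the
# Vmax and Km values reported in the abstract. The substrate concentration below
# is an assumed illustrative value.

def michaelis_menten(s, vmax, km):
    """Reaction rate (mg/L per min) at substrate concentration s (mg/L)."""
    return vmax * s / (km + s)

params = {
    "free laccase":        {"vmax": 0.021, "km": 147.20},
    "immobilized laccase": {"vmax": 0.045, "km": 136.46},
}

s = 200.0  # assumed substrate (COD) concentration, mg/L
for label, p in params.items():
    v = michaelis_menten(s, p["vmax"], p["km"])
    print(f"{label}: v = {v:.4f} mg/L.min at S = {s} mg/L")
```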
Procedia PDF Downloads 67299 Exploratory Study on Mediating Role of Commitment-to-Change in Relations between Employee Voice, Employee Involvement and Organizational Change Readiness
Authors: Rohini Sharma, Chandan Kumar Sahoo, Rama Krishna Gupta Potnuru
Abstract:
Strong competitive forces and requirements to achieve efficiency are forcing organizations to realize the necessity and inevitability of change, and the trend does not appear to be abating. Researchers have estimated that about two-thirds of change projects fail. Empirical evidence further shows that organizations invest significantly in planned change, but the people side is accounted for in only a token or instrumental way, which has been identified as one of the important reasons why change endeavours fail. However, whatever the reason for change, organizational change readiness must be gauged prior to the institutionalization of organizational change. Hence, in this study the influence of employee voice and employee involvement on organizational change readiness via commitment-to-change is examined, as it is an area yet to be extensively studied. Also, though a recent study has investigated the interrelationship between leadership, organizational change readiness and commitment to change, our study further examined these constructs in relation to employee voice and employee involvement, which play a consequential role in organizational change readiness. Further, an integrated conceptual model weaving together varied concepts relating to organizational readiness, with a focus on commitment-to-change as mediator, was found to be an area requiring more theorizing and empirical validation, and this study, rooted in an Indian public sector organization, is a step in this direction. Data for the study were collected through a survey among employees of Rourkela Steel Plant (RSP), a unit of Steel Authority of India Limited (SAIL) and the first integrated steel plant in the public sector in India, for which a stratified random sampling method was adopted. The schedule was distributed to around 700 employees, out of which 516 complete responses were obtained. Pre-validated scales were used for the study. All the variables in the study were measured on a five-point Likert scale ranging from “strongly disagree (1)” to “strongly agree (5)”. Structural equation modeling (SEM) using AMOS 22 was used to examine the hypothesized model, as it offers a simultaneous test of an entire system of variables in a model. The results show that the relationships between employee voice and commitment-to-change, between employee involvement and commitment-to-change, and between commitment-to-change and organizational change readiness were significant. To test the mediation hypotheses, Baron and Kenny’s technique was used. Examination of the direct and mediated effects confirmed that commitment-to-change partially mediated the relation between employee involvement and organizational change readiness, whereas it did not mediate the relation between employee voice and organizational change readiness. The empirical exploration therefore establishes that it is important to harness employees’ valuable suggestions regarding change for building organizational change readiness. Regarding employee involvement, it was found that sharing information and involving people in decision-making leads to the creation of a participative climate, which educes employee commitment during change, and commitment-to-change, in turn, fosters organizational change readiness.Keywords: commitment-to-change, change management, employee voice, employee involvement, organizational change readiness
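Baron and Kenny's causal-steps procedure mentioned above can be sketched as three regressions. The minimal Python example below uses synthetic data and hypothetical variable names (the study itself used survey data and AMOS 22), with employee involvement as X, commitment-to-change as the mediator M, and organizational change readiness as Y.

```python
# Illustrative sketch of Baron and Kenny's causal-steps test for mediation, as
# referenced in the abstract. Data, effect sizes, and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 516  # sample size reported in the abstract
involvement = rng.normal(size=n)
commitment = 0.5 * involvement + rng.normal(scale=0.8, size=n)           # mediator M
readiness = 0.4 * commitment + 0.2 * involvement + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"involvement": involvement,
                   "commitment": commitment,
                   "readiness": readiness})

def ols(y, X):
    """Fit an OLS regression of df[y] on the columns in X (with intercept)."""
    return sm.OLS(df[y], sm.add_constant(df[X])).fit()

step1 = ols("readiness", ["involvement"])                 # X -> Y (total effect c)
step2 = ols("commitment", ["involvement"])                # X -> M (a path)
step3 = ols("readiness", ["involvement", "commitment"])   # X + M -> Y (b and c')

print("total effect c  :", round(step1.params["involvement"], 3))
print("a path (X->M)   :", round(step2.params["involvement"], 3))
print("b path (M->Y)   :", round(step3.params["commitment"], 3))
print("direct effect c':", round(step3.params["involvement"], 3))
# Partial mediation is suggested when c' is smaller than c but still significant.
```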
Procedia PDF Downloads 327298 Statistical Optimization of Adsorption of a Harmful Dye from Aqueous Solution
Abstract:
Textile industries cater to varied customer preferences and contribute substantially to the economy. However, these textile industries also produce a considerable amount of effluents. Prominent among these are the azo dyes which impart considerable color and toxicity even at low concentrations. Azo dyes are also used as coloring agents in food and pharmaceutical industry. Despite their applications, azo dyes are also notorious pollutants and carcinogens. Popular techniques like photo-degradation, biodegradation and the use of oxidizing agents are not applicable for all kinds of dyes, as most of them are stable to these techniques. Chemical coagulation produces a large amount of toxic sludge which is undesirable and is also ineffective towards a number of dyes. Most of the azo dyes are stable to UV-visible light irradiation and may even resist aerobic degradation. Adsorption has been the most preferred technique owing to its less cost, high capacity and process efficiency and the possibility of regenerating and recycling the adsorbent. Adsorption is also most preferred because it may produce high quality of the treated effluent and it is able to remove different kinds of dyes. However, the adsorption process is influenced by many variables whose inter-dependence makes it difficult to identify optimum conditions. The variables include stirring speed, temperature, initial concentration and adsorbent dosage. Further, the internal diffusional resistance inside the adsorbent particle leads to slow uptake of the solute within the adsorbent. Hence, it is necessary to identify optimum conditions that lead to high capacity and uptake rate of these pollutants. In this work, commercially available activated carbon was chosen as the adsorbent owing to its high surface area. A typical azo dye found in textile effluent waters, viz. the monoazo Acid Orange 10 dye (CAS: 1936-15-8) has been chosen as the representative pollutant. Adsorption studies were mainly focused at obtaining equilibrium and kinetic data for the batch adsorption process at different process conditions. Studies were conducted at different stirring speed, temperature, adsorbent dosage and initial dye concentration settings. The Full Factorial Design was the chosen statistical design framework for carrying out the experiments and identifying the important factors and their interactions. The optimum conditions identified from the experimental model were validated with actual experiments at the recommended settings. The equilibrium and kinetic data obtained were fitted to different models and the model parameters were estimated. This gives more details about the nature of adsorption taking place. Critical data required to design batch adsorption systems for removal of Acid Orange 10 dye and identification of factors that critically influence the separation efficiency are the key outcomes from this research.Keywords: acid orange 10, activated carbon, optimum adsorption conditions, statistical design
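As a rough illustration of the Full Factorial Design framework mentioned above, the sketch below enumerates a two-level full factorial over the four process variables named in the abstract; the factor levels are assumptions for demonstration, not the settings used in the study.

```python
# Minimal sketch of a two-level full factorial design over the four factors named in
# the abstract. Factor levels below are illustrative assumptions only.
from itertools import product

factors = {
    "stirring_speed_rpm": [200, 400],
    "temperature_C":      [30, 45],
    "adsorbent_dose_g_L": [0.5, 1.0],
    "initial_dye_mg_L":   [50, 100],
}

# Each run is one combination of levels; 2^4 = 16 runs in total.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} experimental runs (2^4 full factorial)")
for i, run in enumerate(runs, start=1):
    print(i, run)
```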
Procedia PDF Downloads 169297 Insights into Particle Dispersion, Agglomeration and Deposition in Turbulent Channel Flow
Authors: Mohammad Afkhami, Ali Hassanpour, Michael Fairweather
Abstract:
The work described in this paper was undertaken to gain insight into fundamental aspects of turbulent gas-particle flows with relevance to processes employed in a wide range of applications, such as oil and gas flow assurance in pipes, powder dispersion from dry powder inhalers, and particle resuspension in nuclear waste ponds, to name but a few. In particular, the influence of particle interaction and fluid phase behavior in turbulent flow on particle dispersion in a horizontal channel is investigated. The mathematical modeling technique used is based on the large eddy simulation (LES) methodology embodied in the commercial CFD code FLUENT, with flow solutions provided by this approach coupled to a second commercial code, EDEM, based on the discrete element method (DEM) which is used for the prediction of particle motion and interaction. The results generated by LES for the fluid phase have been validated against direct numerical simulations (DNS) for three different channel flows with shear Reynolds numbers, Reτ = 150, 300 and 590. Overall, the LES shows good agreement, with mean velocities and normal and shear stresses matching those of the DNS in both magnitude and position. The research work has focused on the prediction of those conditions favoring particle aggregation and deposition within turbulent flows. Simulations have been carried out to investigate the effects of particle size, density and concentration on particle agglomeration. Furthermore, particles with different surface properties have been simulated in three channel flows with different levels of flow turbulence, achieved by increasing the Reynolds number of the flow. The simulations mimic the conditions of two-phase, fluid-solid flows frequently encountered in domestic, commercial and industrial applications, for example, air conditioning and refrigeration units, heat exchangers, oil and gas suction and pressure lines. The particle size, density, surface energy and volume fractions selected are 45.6, 102 and 150 µm, 250, 1000 and 2159 kg m-3, 50, 500, and 5000 mJ m-2 and 7.84 × 10-6, 2.8 × 10-5, and 1 × 10-4, respectively; such particle properties are associated with particles found in soil, as well as metals and oxides prevalent in turbulent bounded fluid-solid flows due to erosion and corrosion of inner pipe walls. It has been found that the turbulence structure of the flow dominates the motion of the particles, creating particle-particle interactions, with most of these interactions taking place at locations close to the channel walls and in regions of high turbulence where their agglomeration is aided both by the high levels of turbulence and the high concentration of particles. A positive relationship between particle surface energy, concentration, size and density, and agglomeration was observed. Moreover, the results derived for the three Reynolds numbers considered show that the rate of agglomeration is strongly influenced for high surface energy particles by, and increases with, the intensity of the flow turbulence. In contrast, for lower surface energy particles, the rate of agglomeration diminishes with an increase in flow turbulence intensity.Keywords: agglomeration, channel flow, DEM, LES, turbulence
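One hedged way to gauge how strongly the listed particles should respond to the carrier-phase turbulence is through the Stokes relaxation time, τp = ρp dp²/(18 μ); the short sketch below evaluates it for the quoted sizes and densities, assuming air as the carrier fluid (a property not specified in the abstract).

```python
# Rough sketch: Stokes relaxation time tau_p = rho_p * d_p^2 / (18 * mu) for the
# particle sizes and densities listed in the abstract. The fluid viscosity is an
# assumption (air at ambient conditions); no flow time scale from the study is used.
MU_AIR = 1.8e-5  # assumed dynamic viscosity of air, Pa.s

def relaxation_time(d_p, rho_p, mu=MU_AIR):
    """Stokes relaxation time (s) for diameter d_p (m) and density rho_p (kg/m^3)."""
    return rho_p * d_p**2 / (18.0 * mu)

diameters_um = [45.6, 102.0, 150.0]
densities = [250.0, 1000.0, 2159.0]

for d_um in diameters_um:
    for rho in densities:
        tau_p = relaxation_time(d_um * 1e-6, rho)
        print(f"d = {d_um:6.1f} um, rho = {rho:6.0f} kg/m^3 -> tau_p = {tau_p:.2e} s")
```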
Procedia PDF Downloads 317296 Hydroxyapatite Nanorods as Novel Fillers for Improving the Properties of PBSu
Authors: M. Nerantzaki, I. Koliakou, D. Bikiaris
Abstract:
This study evaluates the hypothesis that the incorporation of fibrous hydroxyapatite nanoparticles (nHA) with high crystallinity and high aspect ratio, synthesized by hydrothermal method, into Poly(butylene succinate) (PBSu), improves the bioactivity of the aliphatic polyester and affects new bone growth inhibiting resorption and enhancing bone formation. Hydroxyapatite nanorods were synthesized using a simple hydrothermal procedure. First, the HPO42- -containing solution was added drop-wise into the Ca2+-containing solution, while the molar ratio of Ca/P was adjusted at 1.67. The HA precursor was then treated hydrothermally at 200°C for 72 h. The resulting powder was characterized using XRD, FT-IR, TEM, and EDXA. Afterwards, PBSu nanocomposites containing 2.5wt% (nHA) were prepared by in situ polymerization technique for the first time and were examined as potential scaffolds for bone engineering applications. For comparison purposes composites containing either 2.5wt% micro-Bioglass (mBG) or 2.5wt% mBG-nHA were prepared and studied, too. The composite scaffolds were characterized using SEM, FTIR, and XRD. Mechanical testing (Instron 3344) and Contact Angle measurements were also carried out. Enzymatic degradation was studied in an aqueous solution containing a mixture of R. Oryzae and P. Cepacia lipases at 37°C and pH=7.2. In vitro biomineralization test was performed by immersing all samples in simulated body fluid (SBF) for 21 days. Biocompatibility was assessed using rat Adipose Stem Cells (rASCs), genetically modified by nucleofection with DNA encoding SB100x transposase and pT2-Venus-neo transposon expression plasmids in order to attain fluorescence images. Cell proliferation and viability of cells on the scaffolds were evaluated using fluoresce microscopy and MTT (3-(4,5-dimethylthiazol-2-yl)-2,5 diphenyltetrazolium bromide) assay. Finally, osteogenic differentiation was assessed by staining rASCs with alizarine red using cetylpyridinium chloride (CPC) method. TEM image of the fibrous HAp nanoparticles, synthesized in the present study clearly showed the fibrous morphology of the synthesized powder. The addition of nHA decreased significantly the contact angle of the samples, indicating that the materials become more hydrophilic and hence they absorb more water and subsequently degrade more rapidly. In vitro biomineralization test confirmed that all samples were bioactive as mineral deposits were detected by X-ray diffractometry after incubation in SBF. Metabolic activity of rASCs on all PBSu composites was high and increased from day 1 of culture to day 14. On day 28 metabolic activity of rASCs cultured on samples enriched with bioceramics was significantly decreased due to possible differentiation of rASCs to osteoblasts. Staining rASCs with alizarin red after 28 days in culture confirmed our initial hypothesis as the presence of calcium was detected, suggesting osteogenic differentiation of rACS on PBSu/nHAp/mBG 2.5% and PBSu/mBG 2.5% composite scaffolds.Keywords: biomaterials, hydroxyapatite nanorods, poly(butylene succinate), scaffolds
Procedia PDF Downloads 308295 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide
Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu
Abstract:
This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin-film transistors (NWs JL-TFT) with nickel silicide contact. For the nickel silicide film, a two-step annealing process was designed to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The NWs JL-TFT with nickel silicide contact exhibits good electrical properties, including high driving current (>10⁷ Å), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In addition, this work also compares the electrical characteristics of the NWs JL-TFT with nickel silicide and non-silicide contacts. Nickel silicide techniques are widely used for high-performance devices as devices scale down, owing to the source/drain sheet resistance issue. Therefore, the self-aligned silicide (salicide) technique is presented to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, no bridging failure, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to the special doping profile, the JL-TFT has some advantages, such as a lower thermal budget, which allows easier integration with high-k/metal-gate stacks than conventional MOSFETs (metal-oxide-semiconductor field-effect transistors), a longer effective channel length than conventional MOSFETs, and avoidance of complicated source/drain engineering. To solve the turn-off problem of the JL-TFT, an ultra-thin body (UTB) structure is needed to fully deplete the channel region in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. Therefore, this work demonstrates ultra-thin poly-Si nanowire junctionless thin-film transistors with nickel silicide contact. This work investigates the low-temperature formation of a nickel silicide layer by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. Notably, two-step annealing is used to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The first annealing step promoted Ni diffusion through a thin interfacial amorphous layer, after which the unreacted metal was lifted off. The second annealing step lowered the sheet resistance and firmly merged the phase. The ultra-thin poly-Si NWs JL-TFT with nickel silicide contact is demonstrated, revealing high driving current (>10⁷ Å), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In the silicide film analysis, the second annealing step was applied to lower the sheet resistance and firmly merge the silicide phase. In short, the NWs JL-TFT with nickel silicide contact has exhibited competitive short-channel behavior and improved drive current.Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide
Procedia PDF Downloads 237294 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data-set and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, and adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (Catboost, LightGBM, Sklearn, etc) as well as common Neural Network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
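The workflow described above can be illustrated, in very reduced form, by the following Python sketch: indicator-style features built from recent price data, crude outlier filtering against the training feature space, and a regression fit to a short-horizon return target. This is a generic illustration with synthetic data and assumed names, not the FreqAI API itself.

```python
# Generic illustration (not the FreqAI API) of the workflow the abstract describes:
# build indicator-style features, drop outliers relative to the training feature
# space, and fit a regressor to predict short-term returns. Data is synthetic;
# column names and the prediction horizon are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
close = pd.Series(100 + np.cumsum(rng.normal(scale=0.5, size=2000)), name="close")

# Feature engineering: simple rolling indicators standing in for TA-lib / pandas-ta output
features = pd.DataFrame({
    "ret_1":  close.pct_change(),
    "ma_20":  close.rolling(20).mean() / close - 1.0,
    "vol_20": close.pct_change().rolling(20).std(),
})
target = close.pct_change(12).shift(-12)  # assumed 12-candle-ahead return

data = pd.concat([features, target.rename("target")], axis=1).dropna()

# Crude outlier handling: keep points within 3 standard deviations of the feature mean
z = (data[features.columns] - data[features.columns].mean()) / data[features.columns].std()
data = data[(z.abs() < 3).all(axis=1)]

split = int(len(data) * 0.8)
train, test = data.iloc[:split], data.iloc[split:]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(train[features.columns], train["target"])
print("out-of-sample R^2:", model.score(test[features.columns], test["target"]))
```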
Procedia PDF Downloads 89293 Assessment of the Living Conditions of Female Inmates in Correctional Service Centres in South West Nigeria
Authors: Ayoola Adekunle Dada, Tolulope Omolola Fateropa
Abstract:
There is no gainsaying the fact that the Nigerian correctional services lack rehabilitation and reformation. Owing to this, many inmates, including the female inmates, become more emotionally bruised and hardened instead of coming out of prison reformed. Although female inmates constitute only a small percentage worldwide, the challenges resulting from women falling under the provision of the penal system have prompted official and humanitarian bodies to consider female inmates as vulnerable persons who need particular social work measures that meet their specific needs. Female inmates' condition may become worse in prison due to the absence of standard living conditions. A survey of 100 female inmates will be used to assess their living conditions within the contexts in which they occur. Employing field methods from Medical Sociology and Law, the study seeks to make use of the collaboration of both disciplines for a comprehensive understanding of the scenario. Its specific objectives are: (1) to examine access to and use of health facilities among the female inmates; (2) to examine the effect of officers'/warders' attitudes towards female inmates; (3) to investigate the perception of the female inmates towards the housing facilities in the centres; and (4) to investigate the feeding habits of the female inmates. Due to the exploratory nature of the study, the researchers will make use of mixed methods: qualitative methods such as interviews will be undertaken to complement the quantitative survey research. By adopting the above-explained inter-method triangulation, the study will not only ensure that the advantages of both methods are exploited but will also fulfil the basic purposes of research. The sampling for this study will be purposive. The study aims at sampling two correctional centres (Ado Ekiti and Akure) in order to generate representative data for the female inmates in South West Nigeria. In all, the total number of respondents will be 100. A cross-section of female inmates will be selected as respondents using a multi-stage sampling technique, and 100 questionnaires will be administered. Semi-structured (in-depth) interviews will be conducted among workers in the two selected correctional centres to gain further insight into the living conditions of female inmates, which the survey may not readily elicit. These participants will be selected purposively with respect to their status in the organisation. Ethical issues in research on human subjects will be given due consideration. Such issues rest on principles of beneficence, non-maleficence, autonomy/justice and confidentiality. Qualitative data will be analyzed using manual content analysis, and both descriptive and inferential statistics will be used for analytical purposes. Frequencies, simple percentages, pie charts, bar charts, curves and cross-tabulations will form part of the descriptive analysis.Keywords: assessment, health facilities, inmates, perception, living conditions
Procedia PDF Downloads 96292 Health and Performance Fitness Assessment of Adolescents in Middle Income Schools in Lagos State
Authors: Onabajo Paul
Abstract:
The testing and assessment of physical fitness of school-aged adolescents in Nigeria has been going on for several decades. Originally, these tests strictly focused on identifying health and physical fitness status and comparing the results of adolescents with others. There is a considerable interest in health and performance fitness of adolescents in which results attained are compared with criteria representing positive health rather than simply on score comparisons with others. Despite the fact that physical education program is being studied in secondary schools and physical activities are encouraged, it is observed that regular assessment of students’ fitness level and health status seems to be scarce or not being done in these schools. The purpose of the study was to assess the heath and performance fitness of adolescents in middle-income schools in Lagos State. A total number of 150 students were selected using the simple random sampling technique. Participants were measured on hand grip strength, sit-up, pacer 20 meter shuttle run, standing long jump, weight and height. The data collected were analyzed with descriptive statistics of means, standard deviations, and range and compared with fitness norms. It was concluded that majority 111(74.0%) of the adolescents achieved the healthy fitness zone, 33(22.0%) were very lean, and 6(4.0%) needed improvement according to the normative standard of Body Mass Index test. For muscular strength, majority 78(52.0%) were weak, 66(44.0%) were normal, and 6(4.0%) were strong according to the normative standard of hand-grip strength test. For aerobic capacity fitness, majority 93(62.0%) needed improvement and were at health risk, 36(24.0%) achieved healthy fitness zone, and 21(14.0%) needed improvement according to the normative standard of PACER test. Majority 48(32.0%) of the participants had good hip flexibility, 38(25.3%) had fair status, 27(18.0%) needed improvement, 24(16.0%) had very good hip flexibility status, and 13(8.7%) of the participants had excellent status. Majority 61(40.7%) had average muscular endurance status, 30(20.0%) had poor status, 29(18.3%) had good status, 28(18.7%) had fair muscular endurance status, and 2(1.3%) of the participants had excellent status according to the normative standard of sit-up test. Majority 52(34.7%) had low jump ability fitness, 47(31.3%) had marginal fitness, 31(20.7%) had good fitness, and 20(13.3%) had high performance fitness according to the normative standard of standing long jump test. Based on the findings, it was concluded that majority of the adolescents had better Body Mass Index status, and performed well in both hip flexibility and muscular endurance tests. Whereas majority of the adolescents performed poorly in aerobic capacity test, muscular strength and jump ability test. It was recommended that to enhance wellness, adolescents should be involved in physical activities and recreation lasting 30 minutes three times a week. Schools should engage in fitness program for students on regular basis at both senior and junior classes so as to develop good cardio-respiratory, muscular fitness and improve overall health of the students.Keywords: adolescents, health-related fitness, performance-related fitness, physical fitness
Procedia PDF Downloads 353291 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays
Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner
Abstract:
Early immobility affects the motor, cognitive, and social development. Current pediatric rehabilitation lacks the technology that will provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots in early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early intervention paradigm where non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment is developed where both the child and the robot interact and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate, and ability-matched for the child to perform, 2) the automatic functions that perceive the child’s actions through novel activity recognition algorithms, and decide appropriate actions for the robot, and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real-time. The environment was tested by bringing a two-year old boy with Down syndrome for eight sessions. The child presented delays throughout his motor development with the current being on the acquisition of walking. During the sessions, the child performed physical activities that required complex motor actions (e.g. climbing an inclined platform and/or staircase). During these activities, a (wheeled or humanoid) robot was either performing the action or was at its end point 'signaling' for interaction. From these sessions, information was gathered to develop algorithms to automate the perception of activities which the robot bases its actions on. A Markov Decision Process (MDP) is used to model the intentions of the child. A 'smoothing' technique is used to help identify the model’s parameters which are a critical step when dealing with small data sets such in this paradigm. The child engaged in all activities and socially interacted with the robot across sessions. With time, the child’s mobility was increased, and the frequency and duration of complex and independent motor actions were also increased (e.g. taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction. Smoothing facilitates learning MDP parameters from small data sets. This paradigm is feasible and provides an insight on how social interaction may elicit mobility actions suggesting a new early intervention paradigm for very young children with motor disabilities. Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation
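The 'smoothing' step described above, which helps estimate MDP parameters from small data sets, can be sketched with additive (Laplace) smoothing of transition counts; the states, actions, and observations in the Python example below are hypothetical stand-ins for the session data.

```python
# Minimal sketch of the smoothing idea described in the abstract: estimate MDP
# transition probabilities from a small number of observed (state, action, next_state)
# triples using additive (Laplace) smoothing so unseen transitions keep nonzero mass.
# The states, actions, and observations below are hypothetical.
from collections import Counter

states = ["idle", "approach_robot", "climb_platform"]
actions = ["robot_waits", "robot_moves", "robot_signals"]

observations = [  # (state, action, next_state) from hypothetical session logs
    ("idle", "robot_signals", "approach_robot"),
    ("approach_robot", "robot_moves", "climb_platform"),
    ("idle", "robot_waits", "idle"),
    ("approach_robot", "robot_signals", "climb_platform"),
]

counts = Counter(observations)

def transition_prob(s, a, s_next, alpha=1.0):
    """P(s_next | s, a) with additive smoothing parameter alpha."""
    total = sum(counts[(s, a, s2)] for s2 in states)
    return (counts[(s, a, s_next)] + alpha) / (total + alpha * len(states))

for s_next in states:
    p = transition_prob("idle", "robot_signals", s_next)
    print(f"P({s_next} | idle, robot_signals) = {p:.2f}")
```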
Procedia PDF Downloads 292290 Subcutan Isosulfan Blue Administration May Interfere with Pulse Oximetry
Authors: Esra Yuksel, Dilek Duman, Levent Yeniay, Sezgin Ulukaya
Abstract:
Sentinel lymph node biopsy (SLNB) is a minimal invasive technique with lower morbidity in axillary staging of breast cancer. Isosulfan blue stain is frequently used in SLNB and regarded as safe. The present case report aimed to report severe decrement in SpO2 following isosulfan blue administration, as well as skin and urine signs and inconsistency with clinical picture in a 67-year-old ,77 kg, ASA II female case that underwent SLNB under general anesthesia. Ten minutes after subcutaneous administration of 10 ml 1% isosulfan blue by the surgeons into the patient, who were hemodynamically stable, SpO2 first reduced to 87% from 99%, and then to 75% in minutes despite 100% oxygen support. Meanwhile, blood pressure and EtCO2 monitoring was unremarkable. After specifying that anesthesia device worked normally, airway pressure did not increase and the endotracheal tube has been placed accurately, the blood sample was taken from the patient for arterial gas analysis. A severe increase was thought in MetHb concentration since SpO2 persisted to be 75% although the concentration of inspired oxygen was 100%, and solution of 2500 mg ascorbic acid in 500 ml 5% Dextrose was given to the patient via intravenous route until the results of arterial blood gas were obtained. However, arterial blood gas results were as follows: pH: 7.54, PaCO2: 23.3 mmHg, PaO2: 281 mmHg, SaO2: %99, and MetHb: %2.7. Biochemical analysis revealed a blood MetHb concentration of 2%.However, since arterial blood gas parameters were good, hemodynamics of the patient was stable and methemoglobin concentration was not so high, the patient was extubated after surgery when she was relaxed, cooperated and had adequate respiration. Despite the absence of respiratory or neurological distress, SpO2 value was increased only up to 85% within 2 hours with 5 L/min oxygen support via face mask in the surgery room as the patient was extubated. At that time, the skin of particularly the upper part of her body has turned into blue, more remarkable on the face. The color of plasma of the blood taken from the patient for biochemical analysis was blue. The color of urine coming throughout the urinary catheter placed in intensive care unit was also blue. Twelve hours after 5 L/min. oxygen inhalation via a mask, the SpO2 reached to 90%. During monitoring in intensive care unit on the postoperative 1st day, facial color and urine color of the patient was still blue, SpO2 was 92%, and arterial blood gas levels were as follows: pH: 7.44, PaO2: 76.1 mmHg, PaCO2: 38.2 mmHg, SaO2: 99%, and MetHb 1%. During monitoring in clinic on the postoperative 2nd day, SpO2 was 95% without oxygen support and her facial and urine color turned into normal. The patient was discharged on the 3rd day without any problem.In conclusion, SLNB is a less invasive alternative to axillary dissection. However, false pulse oximeter reading due to pigment interference is a rare complication of this procedure. Arterial blood gas analysis should be used to confirm any fall in SpO2 reading during monitoring.Keywords: isosulfan blue, pulse oximetry, SLNB, methemoglobinemia
Procedia PDF Downloads 315289 Decision-Making, Expectations and Life Project in Dependent Adults Due to Disability
Authors: Julia Córdoba
Abstract:
People are not completely autonomous, as we live in society; therefore, people could be defined as relationally dependent. The lack, decrease or loss of physical, psychological and/or social independence due to a disability situation is known as dependence. This is related to the need for help from another person in order to carry out activities of daily living. This population group lives with major social limitations that significantly reduce their participation and autonomy. They experience high levels of stigma and invisibility in private environments (family and close networks), as well as in the public sphere (environment, community). The importance of this study lies in the fact that the lack of support and adjustments leads to what authors call the circle of exclusion. This circle describes how not accessing services, due to the difficulties caused by the disability situation, has impacts at the biological, social and psychological levels. This situation produces higher levels of exclusion and vulnerability. This study will focus on the process of autonomy and dependence of adults with disability from the model of disability proposed by the International Classification of Functioning, Disability and Health (ICF). The objectives are: i) to describe the relationship between autonomy and dependence based on socio-health variables, and ii) to determine the relationship between the situation of autonomy and dependence and the expectations and interests of the participants. We propose a study that will use a survey technique through a previously validated virtual questionnaire. The data obtained will be analyzed using quantitative and qualitative methods to detail the profiles obtained. No less than 200 questionnaires will be administered to people between 18 and 64 years of age who self-identify as having some degree of dependency due to disability. For the analysis of the results, the two main variables of autonomy and dependence will be considered. Socio-demographic variables such as age, gender identity, area of residence and family composition will be used. In relation to the biological dimension of the situation, the diagnosis, if any, and the type of disability will be asked about. For the description of these profiles of autonomy and dependence, the following variables will be used: self-perception, decision-making, interests, expectations and life project, care of their health condition, support and social network, and labor and educational inclusion. The relationship between the target population and the variables collected provides several guidelines that could form the basis for the analysis of other research of interest in terms of self-perception, autonomy and dependence. The areas and situations where people state that they have greater possibilities to decide and have a say will be identified. The study will identify social (networks and support, educational background), demographic (age, gender identity and residence) and health-related variables (diagnosis and type of disability, quality of care) that may have a greater relationship with situations of dependency or autonomy. It will also be examined whether the level of autonomy and/or dependence has an impact on the type of expectations and interests of the people surveyed.Keywords: life project, disability, inclusion, autonomy
Procedia PDF Downloads 67288 Precise Determination of the Residual Stress Gradient in Composite Laminates Using a Configurable Numerical-Experimental Coupling Based on the Incremental Hole Drilling Method
Authors: A. S. Ibrahim Mamane, S. Giljean, M.-J. Pac, G. L’Hostis
Abstract:
Fiber-reinforced composite laminates are particularly subject to residual stresses due to their heterogeneity and the complex chemical, mechanical and thermal mechanisms that occur during their processing. Residual stresses are now well known to cause damage accumulation, shape instability, and behavior disturbance in composite parts. Many works exist in the literature on techniques for minimizing residual stresses, mainly in thermosetting and thermoplastic composites. To study in depth the influence of processing mechanisms on the formation of residual stresses and to minimize them by establishing a reliable correlation, it is essential to be able to measure very precisely the profile of residual stresses in the composite. Residual stresses are important data to consider when sizing composite parts and predicting their behavior. The incremental hole drilling method is very effective in measuring the gradient of residual stresses in composite laminates. This method is semi-destructive and consists of incrementally drilling a hole through the thickness of the material and measuring the relaxation strains around the hole for each increment using three strain gauges. These strains are then converted into residual stresses using a matrix of coefficients. These coefficients, called calibration coefficients, depend on the diameter of the hole and the dimensions of the gauges used. The reliability of the incremental hole drilling method depends on the accuracy with which the calibration coefficients are determined. These coefficients are calculated using a finite element model. The samples’ features and the experimental conditions must be considered in the simulation. Any mismatch can lead to inadequate calibration coefficients, thus introducing errors in the residual stresses. Several calibration coefficient correction methods exist for isotropic materials, but there is a lack of information on this subject concerning composite laminates. In this work, a Python program was developed to automatically generate the adequate finite element model. This model allowed us to perform a parametric study to assess the influence of experimental errors on the calibration coefficients. The results highlighted the sensitivity of the calibration coefficients to the considered errors and gave an order of magnitude of the precision required of the experimental device to obtain reliable measurements. On the basis of these results, improvements to the experimental device were proposed. Furthermore, a numerical method was proposed to correct the calibration coefficients for different types of materials, including thick composite parts for which the analytical approach is too complex. This method consists of taking the experimental errors into account in the simulation. Accurate measurement of the experimental errors (such as eccentricity of the hole, angular deviation of the gauges from their theoretical position, or errors in increment depth) is therefore necessary. The aim is to determine more precisely the residual stresses and to expand the validity domain of the incremental hole drilling technique.Keywords: fiber reinforced composites, finite element simulation, incremental hole drilling method, numerical correction of the calibration coefficients, residual stresses
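The strain-to-stress conversion at the heart of the incremental hole drilling method can be sketched as a linear solve against the calibration coefficient matrix; in the Python example below the coefficient values, strain readings, and units are placeholders rather than outputs of a finite element calibration.

```python
# Schematic sketch of the incremental hole-drilling data reduction: relaxation strains
# measured at each drilling increment are converted to residual stresses through a
# matrix of calibration coefficients, here via a least-squares solve. All numbers are
# placeholders, not coefficients from an actual finite element calibration.
import numpy as np

# Calibration matrix C (strain per MPa), lower-triangular because the strain measured
# after increment i depends on the stresses acting in increments 1..i.
C = np.array([
    [-2.1e-7,  0.0,      0.0    ],
    [-1.4e-7, -1.9e-7,   0.0    ],
    [-0.9e-7, -1.2e-7,  -1.7e-7 ],
])

measured_strain = np.array([-12e-6, -19e-6, -23e-6])  # relaxation strain per increment

# Solve C @ sigma = strain for the residual stress in each increment (MPa here)
sigma, *_ = np.linalg.lstsq(C, measured_strain, rcond=None)
print("estimated residual stress per increment [MPa]:", np.round(sigma, 1))
```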
Procedia PDF Downloads 132