Bioresorbable Medicament-Eluting Grommet Tube for Otitis Media with Effusion
Authors: Chee Wee Gan, Anthony Herr Cheun Ng, Yee Shan Wong, Subbu Venkatraman, Lynne Hsueh Yee Lim
Abstract:
Otitis media with effusion (OME) is the leading cause of hearing loss in children worldwide. Surgery to insert a grommet tube into the eardrum is usually indicated for OME unresponsive to antimicrobial therapy, and it is the most common surgery performed on children. However, current commercially available grommet tubes are non-bioresorbable and not drug-treated, and their duration of retention on the eardrum, where they ventilate the middle ear, is unpredictable. Their functionality is impaired when they become clogged or chronically infected, requiring additional surgery to remove or reinsert the tubes. We envisaged that a novel, fully bioresorbable grommet tube with sustained antibiotic-release technology could address these drawbacks. In this study, drug-loaded bioresorbable poly(L-lactide-co-ε-caprolactone) (PLC) copolymer grommet tubes were fabricated by a microinjection moulding technique. In vitro drug release and a degradation model of the PLC tubes were studied. Antibacterial properties were evaluated by incubating PLC tubes in P. aeruginosa broth. Surface morphology was analyzed using scanning electron microscopy. A preliminary animal study was conducted using guinea pigs as an in vivo model to evaluate PLC tubes with and without drug, with the commercial Mini Shah grommet tube as comparison. Our in vitro data showed sustained drug release over 3 months. All PLC tubes revealed exponential degradation profiles over time. Modeling predicted loss of tube functionality in water at approximately 14 weeks and 17 weeks for PLC tubes with and without drug, respectively. Generally, PLC tubes showed less bacterial adherence, attributed to their much smoother surfaces compared to the Mini Shah. The antibiotic in the PLC tube further reduced bacterial adherence on the surface to negligible levels. The tubes showed neither inflammation nor otorrhea 18 weeks post-insertion in the eardrums of guinea pigs, but demonstrated a severe degree of bioresorption. Histology confirmed the new PLC tubes were biocompatible.
Analyses of the PLC tubes in the eardrums showed bioresorption profiles close to our in vitro degradation models. The bioresorbable antibiotic-loaded grommet tubes showed good predictability in functionality. The smooth surface and sustained-release technology reduced the risk of tube infection. A tube functional duration of 18 weeks allowed a sufficient ventilation period to treat OME. Our ongoing studies include modifying the surface properties with a protein coating, optimizing the drug dosage in the tubes to enhance their performance, evaluating the functional outcome on hearing after full resorption of the grommet tube and healing of the eardrum, and developing an animal model with OME to further validate our in vitro models.
Keywords: bioresorbable polymer, drug release, grommet tube, guinea pigs, otitis media with effusion
Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration.
Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6 in all modes, the accuracy limit of ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
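As a rough, self-contained illustration of the truncated-SVD idea described in this abstract (not the EARLINET algorithm itself: the lidar kernel and the triple of regularization parameters are not reproduced here, and the forward model below is a hypothetical smoothing kernel), a minimal Python sketch:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Solve the ill-posed linear system A x = b by truncated SVD:
    keep only the k largest singular values, so the tiny singular
    values that hugely amplify measurement noise are discarded.
    The truncation level k acts as the regularization parameter."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ b))

# Severely ill-conditioned forward model (hypothetical Gaussian
# smoothing kernel): even tiny data noise ruins a naive solve.
n = 20
x_grid = np.linspace(0.0, 1.0, n)
A = np.exp(-((x_grid[:, None] - x_grid[None, :]) ** 2) / 0.05)
x_true = np.exp(-((x_grid - 0.5) ** 2) / 0.02)   # mono-modal "PSD"
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)   # "measured" optical data

x_naive = np.linalg.solve(A, b)   # noise is hugely amplified
x_tsvd = tsvd_solve(A, b, k=8)    # truncation suppresses the blow-up
```

Choosing k (or, in the hybrid scheme, a triple of such parameters) is exactly the parameter-selection difficulty the abstract points out.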
Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm
Abstract:
Providing students with learning disabilities (LD) with extra time to grant them equal access to an exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordering, we propose a method based on ordinal decision trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data.
The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree-based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension
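The abstract does not state the WIGR formula, so the following Python sketch is only a hypothetical stand-in: it shows one way an information-gain-ratio split score can be made ordinal-aware, by penalizing child partitions that still mix distant ordinal classes (a proxy for error severity). The function names and the penalty term are illustrative assumptions, not the authors' measure.

```python
import numpy as np
from collections import Counter
from itertools import combinations

def entropy(labels):
    """Shannon entropy of a label multiset, in bits."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def ordinal_spread(labels):
    """Mean absolute rank distance over all label pairs: large when a
    partition still mixes distant ordinal classes (severe errors)."""
    pairs = list(combinations(labels, 2))
    if not pairs:
        return 0.0
    return float(np.mean([abs(a - b) for a, b in pairs]))

def weighted_gain_ratio(y, splits):
    """Gain-ratio-style split score with an ordinal severity penalty
    on each child partition (illustrative stand-in for WIGR). The
    gain term can go negative for badly mixed splits."""
    n = len(y)
    cond = sum(len(s) / n * entropy(s) * (1.0 + ordinal_spread(s))
               for s in splits)
    gain = entropy(y) - cond
    split_info = -sum((len(s) / n) * np.log2(len(s) / n) for s in splits)
    return gain / split_info if split_info > 0 else 0.0

# Exam outcomes binned into ordinal classes 0 < 1 < 2.
y = [0, 0, 1, 1, 2, 2]
clean = [[0, 0, 1], [1, 2, 2]]   # children respect the ordering
mixed = [[0, 2, 1], [0, 1, 2]]   # children mix distant classes
```

Under this scoring, the ordinal-coherent split scores higher than the mixed one, which is the qualitative behavior the paper attributes to WIGR.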
Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process, using a model trained on 500K queries from the MS MARCO dataset, to extract the most relevant text passage and thereby shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. In evaluating such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query 'where will be the next Olympics?' The gold answer for this query as given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Games held in Tokyo, this is correct. But if the same question is asked in 2022, then the answer is "Paris, 2024".
Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
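A minimal sketch of the time-aware matching idea using the Olympics example from this abstract; the gold table, its validity dates, and the matching rule are illustrative assumptions, not the paper's actual metric or a GNQ annotation:

```python
from datetime import date

# Hypothetical time-stamped gold answers: (valid_from, valid_to, answer).
GOLD = {
    "where will be the next olympics?": [
        (date(2016, 8, 22), date(2021, 8, 8), "tokyo"),
        (date(2021, 8, 9), date(2024, 8, 11), "paris"),
    ],
}

def time_aware_match(question, top_n_answers, asked_on):
    """Count a prediction as correct if any of the top-n answers
    matches the gold answer valid at the query timestamp, instead of
    the (possibly stale) answer frozen into the dataset."""
    for start, end, gold in GOLD.get(question.lower(), []):
        if start <= asked_on <= end:
            return any(gold in answer.lower() for answer in top_n_answers)
    return False
```

With this rule, "Paris, 2024" is judged correct for a 2022 query even though the dataset's frozen gold answer says "Tokyo".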
Enhancing Efficiency of Building through Translucent Concrete
Authors: Humaira Athar, Brajeshwar Singh
Abstract:
Generally, the brightness of the indoor environment of buildings is maintained entirely by artificial lighting, which consumes a large amount of resources. It is reported that lighting accounts for about 19% of total generated electricity and about 30-40% of total energy consumption. One possible way to reduce lighting energy is to exploit sunlight, either through suitable devices or through energy-efficient materials like translucent concrete. Translucent concrete is an architectural concrete which allows the passage of natural as well as artificial light through it. Several attempts have been made on different aspects of translucent concrete, such as light-guiding materials (glass fibers, plastic fibers, cylinders, etc.), concrete mix design, and manufacturing methods for building elements. Concerns have, however, been raised on various related issues, such as poor compatibility between the optical fibers and cement paste, unaesthetic appearance due to disturbance of the fiber arrangement during vibration, and high shrinkage in flowable concrete due to its high water/cement ratio. There is a need to develop translucent concrete that meets the structural-safety requirements of OPC concrete while maximizing energy savings on illumination power and thermal load in buildings. Translucent concrete was produced using pre-treated plastic optical fibers (POF, 2 mm dia.) and high-slump white concrete. The concrete mix was proportioned in the ratio of 1:1.9:2.1 with a w/c ratio of 0.40. The POF content was varied from 0.8-9 vol.%. The mechanical properties and light transmission of this concrete were determined. Thermal conductivity of samples was measured by a transient plane source technique. Daylight illumination was measured by a lux-grid method as per BIS:SP-41. It was found that the compressive strength of translucent concrete increased with decreasing optical fiber content.
An increase of ~28% in the compressive strength of concrete was noticed when the fiber was pre-treated. FE-SEM images showed little debonding between the fibers and cement paste, which was well supported by the pull-out bond strength test results (~187% improvement over untreated fibers). The light transmission of the concrete was in the range of 3-7%, depending on fiber spacing (5-20 mm). The average daylight illuminance (~75 lux) was nearly equivalent to the criterion specified for circulation lighting (80 lux). The thermal conductivity of translucent concrete was reduced by 28-40% with respect to plain concrete. The thermal load calculated by the heat conduction equation was ~16% more than that of plain concrete. Based on DesignBuilder software, the total annual illumination energy load of a room using translucent concrete on one side was 162.36 kWh, compared with 249.75 kWh for a room without it. The calculated energy saving on account of illumination power was ~25%. A marginal improvement in thermal comfort was also noticed. It is concluded that translucent concrete has the advantages of existing (load-bearing) concrete together with translucency and insulation characteristics. It saves a significant amount of energy by providing natural daylight in place of artificial illumination.
Keywords: energy saving, light transmission, microstructure, plastic optical fibers, translucent concrete
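The thermal-load figure above comes from a heat-conduction calculation. As a hedged illustration of the conduction component only (wall area, thickness, temperature difference, and conductivities below are assumed round numbers, not the study's measurements; the study's higher total load presumably includes gains not modeled here), one-dimensional steady-state Fourier conduction gives:

```python
def conduction_heat_flow_w(k_w_per_mk, area_m2, delta_t_k, thickness_m):
    """One-dimensional steady-state Fourier conduction,
    Q = k * A * dT / d, in watts."""
    return k_w_per_mk * area_m2 * delta_t_k / thickness_m

# Illustrative values: a 10 m^2 wall, 0.15 m thick, 10 K
# indoor-outdoor difference; translucent concrete modeled with a
# conductivity ~28% below plain concrete, per the reported reduction.
q_plain = conduction_heat_flow_w(1.4, 10.0, 10.0, 0.15)
q_translucent = conduction_heat_flow_w(1.0, 10.0, 10.0, 0.15)
```

As expected, the lower-conductivity panel conducts less heat for the same geometry and temperature difference.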
Cut-Off of CMV Cobas® Taqman® (CAP/CTM Roche®) for Introduction of Ganciclovir Pre-Emptive Therapy in Allogeneic Hematopoietic Stem Cell Transplant Recipients
Authors: B. B. S. Pereira, M. O. Souza, L. P. Zanetti, L. C. S. Oliveira, J. R. P. Moreno, M. P. Souza, V. R. Colturato, C. M. Machado
Abstract:
Background: The introduction of prophylactic or preemptive therapies has effectively decreased CMV mortality rates after hematopoietic stem cell transplantation (HSCT). CMV antigenemia (pp65) and quantitative PCR are the methods currently approved for CMV surveillance in pre-emptive strategies. Commercial assays are preferred, as cut-off levels defined by in-house assays may vary among different protocols and in general show low reproducibility. Moreover, comparison of published data among different centers is only possible if international standards of quantification are included in the assays. Recently, the World Health Organization (WHO) established the first international standard for CMV detection. The real-time PCR COBAS AmpliPrep/COBAS TaqMan (CAP/CTM, Roche®) assay was developed using the WHO standard for CMV quantification. However, the cut-off for the introduction of antiviral therapy has not yet been determined. Methods: We conducted a retrospective study to determine: 1) the sensitivity and specificity of the new CMV CAP/CTM test in comparison with pp65 antigenemia to detect episodes of CMV infection/reactivation, and 2) the cut-off viral load for introduction of ganciclovir (GCV). pp65 antigenemia was performed, and the corresponding plasma samples were stored at -20°C for later CMV detection by CAP/CTM. The tests were compared by kappa index. The appearance of positive antigenemia was considered the state variable to determine the cut-off CMV viral load by ROC curve. Statistical analysis was performed using SPSS software version 19 (SPSS, Chicago, IL, USA). Results: Thirty-eight patients were included and followed from August 2014 through May 2015. The antigenemia test detected 53 episodes of CMV infection in 34 patients (89.5%), while CAP/CTM detected 37 episodes in 33 patients (86.8%). AG and PCR results were compared in 431 samples, and the kappa index was 30.9%.
The median time to first AG detection was 42 (28-140) days, while CAP/CTM detected infection a median of 7 days earlier (34 days, range 7 to 110 days). The optimum cut-off value of CMV DNA to detect positive antigenemia was 34.25 IU/mL, with a sensitivity of 88.2%, a specificity of 100%, and an AUC of 0.91. This cut-off value is below the limit of detection and quantification of the equipment, which is 56 IU/mL. According to the CMV recurrence definition, 16 episodes of CMV recurrence were detected by antigenemia (47.1%) and 4 (12.1%) by CAP/CTM. The duration of viremia as detected by antigenemia was shorter (60.5% of the episodes lasted ≤ 7 days) in comparison to CAP/CTM (57.9% of the episodes lasted 15 days or more). These data suggest that using antigenemia to define the duration of GCV therapy might prompt early interruption of the antiviral, which may favor CMV reactivation. The CAP/CTM PCR could possibly provide safer information concerning the duration of GCV therapy. As prolonged treatment may increase the risk of toxicity, this hypothesis should be confirmed in prospective trials. Conclusions: Even though the Roche CAP/CTM showed good qualitative correlation with the antigenemia technique, the fully automated CAP/CTM did not demonstrate increased sensitivity. A cut-off value below the limit of detection and quantification may result in delayed introduction of pre-emptive therapy.
Keywords: antigenemia, CMV COBAS/TAQMAN, cytomegalovirus, antiviral cut-off
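The ROC analysis described above was done in SPSS. A minimal Python sketch of the same kind of cut-off search is shown below, using Youden's J (sensitivity + specificity - 1), a common ROC criterion; the abstract does not name the exact criterion used, and the viral-load data here are illustrative, not the study's measurements:

```python
import numpy as np

def youden_cutoff(viral_loads, antigenemia_positive):
    """Scan every candidate viral-load threshold and return the one
    maximizing Youden's J, i.e. the ROC-optimal trigger for starting
    pre-emptive therapy, together with the J value achieved."""
    loads = np.asarray(viral_loads, dtype=float)
    pos = np.asarray(antigenemia_positive, dtype=bool)
    best_j, best_cut = -1.0, None
    for cut in np.unique(loads):
        pred = loads >= cut                       # "treat" decision
        sens = (pred & pos).sum() / pos.sum()
        spec = (~pred & ~pos).sum() / (~pos).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Illustrative paired data (viral load in IU/mL, antigenemia status).
loads = [5, 10, 20, 30, 40, 60, 80, 120]
ag_pos = [0, 0, 0, 0, 1, 1, 1, 1]
cut, j = youden_cutoff(loads, ag_pos)
```

On this toy data the two groups separate perfectly, so the search returns the lowest load among the antigenemia-positive samples with J = 1; real data, as in the study, yields sensitivity/specificity below 100%.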
Progress Towards Optimizing and Standardizing Fiducial Placement Geometry in Prostate, Renal, and Pancreatic Cancer
Authors: Shiva Naidoo, Kristena Yossef, Grimm Jimm, Mirza Wasique, Eric Kemmerer, Joshua Obuch, Anand Mahadevan
Abstract:
Background: Fiducial markers effectively enhance tumor target visibility prior to Stereotactic Body Radiation Therapy or proton therapy. To streamline clinical practice, fiducial placement guidelines from a robotic radiosurgery vendor were examined with the goals of optimizing and standardizing feasible geometries for each treatment indication. Clinical examples of prostate, renal, and pancreatic cases are presented. Methods: Vendor guidelines (Accuray, Sunnyvale, CA) suggest implantation of 4-6 fiducials at least 20 mm apart, with at least a 15-degree angular difference between fiducials, within 50 mm or less of the target centroid, to ensure that any potential fiducial motion (e.g., from respiration or abdominal/pelvic pressures) will mimic target motion. It is also recommended that all fiducials be visible in 45-degree oblique views with no overlap, to coincide with the robotic radiosurgery imaging planes. For the prostate, a standardized geometry that meets all these objectives is a 2 cm-by-2 cm square in the coronal plane. Transperineal implantation of two pairs of preloaded tandem fiducials makes the 2 cm-by-2 cm square geometry clinically feasible. This technique may be applied to renal cancer, with the square repositioned in a sagittal plane and the fiducials placed retroperitoneally into the tumor. Pancreatic fiducial placement via endoscopic ultrasound (EUS) is technically more challenging, as fiducial placement is operator-dependent and lesion access may be limited by adjacent vasculature, tumor location, or restricted mobility of the EUS probe in the duodenum. Fluoroscopically assisted fiducial placement during EUS can help ensure fiducial markers are deployed with optimal geometry and visualization. Results: Among the first 22 fiducial cases on a newly installed robotic radiosurgery system, live x-ray images for all nine prostate cases showed excellent fiducial visualization at the treatment console.
Renal and pancreatic fiducials were not as clearly visible, owing to difficult target access and the smaller-caliber insertion needles and fiducials used. The geometry of the first prostate case was used to ensure accurate geometric marker placement in the remaining 8 cases. Initially, some of the renal and pancreatic fiducials were placed closer together than the recommended 20 mm, and interactive feedback with the proceduralists led to subsequent fiducials being placed too far toward the edge of the tumor. Further feedback and discussion of all cases are being used to help guide standardized geometries and achieve ideal fiducial placement. Conclusion: The ideal tradeoff between fiducial visibility and the thinnest possible needle gauge to avoid complications needs to be systematically optimized across all patients, particularly with regard to body habitus. Multidisciplinary collaboration among proceduralists and radiation oncologists can lead to improved outcomes.
Keywords: fiducial, prostate cancer, renal cancer, pancreatic cancer, radiotherapy
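The vendor constraints quoted in the Methods (separation, angular difference, distance to centroid) can be expressed as a simple geometric check. The Python sketch below is an illustration of that check, not vendor software, and omits the 45-degree oblique-view overlap criterion:

```python
import numpy as np
from itertools import combinations

def check_fiducial_geometry(points_mm, centroid_mm,
                            min_sep=20.0, min_angle_deg=15.0, max_dist=50.0):
    """Check a candidate fiducial set (x, y, z in mm) against the
    quoted guidelines: every marker within 50 mm of the target
    centroid, pairwise separation >= 20 mm, and >= 15 degrees
    angular difference as seen from the centroid."""
    pts = np.asarray(points_mm, dtype=float)
    c = np.asarray(centroid_mm, dtype=float)
    if any(np.linalg.norm(p - c) > max_dist for p in pts):
        return False
    for p, q in combinations(pts, 2):
        if np.linalg.norm(p - q) < min_sep:
            return False
        u, v = p - c, q - c
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        if np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) < min_angle_deg:
            return False
    return True

# The standardized 2 cm-by-2 cm coronal square, centred on the target:
# adjacent markers are exactly 20 mm apart and subtend >= 90 degrees.
square = [(-10, -10, 0), (-10, 10, 0), (10, -10, 0), (10, 10, 0)]
```

The square passes all three checks, which matches the abstract's claim that it satisfies the vendor objectives; a tighter cluster fails on the 20 mm separation rule.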
Crosslinked Porous 3-Dimensional Cellulose Nanofibers/Gelatin Based Biocomposite Aerogels for Tissue Engineering Application
Authors: Ali Mirtaghavi, Andy Baldwin, Rajendarn Muthuraj, Jack Luo
Abstract:
Recent advances in biomaterials have led to the use of biopolymers to develop 3D scaffolds for tissue regeneration. One of the major challenges in designing biomaterials for 3D scaffolds is to mimic building blocks similar to the extracellular matrix (ECM) of native tissues. Biopolymer-based aerogels obtained by freeze-drying have been shown to provide structural similarities to the ECM owing to their 3D format and highly porous structure with interconnected pores. Gelatin (GEL) is known to be a promising biomaterial with inherent regenerative characteristics owing to its chemical similarity to the ECM of native tissue, biocompatibility, abundance, cost-effectiveness, and accessible functional groups, which make it amenable to chemical modification with other biomaterials to form biocomposites. Despite these advantages, gelatin has poor mechanical properties, is sensitive to enzymatic degradation, and has high viscosity at room temperature, which limits its application on its own and encourages its use in biocomposites. Hydrophilic biomass-based cellulose nanofibers (CNF) have been explored as a suspension for biocomposite aerogels, for the development of 3D porous structures with excellent mechanical properties, biocompatibility, and slow enzymatic degradation. In this work, CNF biocomposite aerogels with various CNF:GEL ratios (90:10, 70:30 and 50:50) were prepared by a freeze-drying technique, and their physicochemical, mechanical, and biological characteristics were investigated. Epichlorohydrin (EPH) was used to investigate the effect of chemical crosslinking on the molecular interaction of CNF and GEL and its effects on the physicochemical, mechanical, and biological properties of the biocomposite aerogels. Ultimately, chemical crosslinking helped improve the mechanical resilience of the resulting aerogels.
Among all the CNF-GEL composites, the crosslinked CNF:GEL (70:30) biocomposite was found to be the most favourable for cell attachment and viability. It possessed a highly porous structure (porosity of ~93%) with pore sizes ranging from 16-110 µm, adequate mechanical properties (compression modulus of ~47 kPa), optimal biocompatibility both in vitro and in vivo, controlled enzymatic biodegradation, and high water penetration, making it a suitable candidate for wound-healing applications. In vivo experiments showed reduced inflammation and foreign-body giant cell reaction for the crosslinked CNF:GEL (70:30) compared to the other samples. This could be due to the superior interaction of CNF with gelatin through chemical crosslinking. In vitro cell culture of human dermal fibroblasts showed satisfactory 3D cell attachment over time. Overall, the developed CNF:GEL aerogel can be considered a potential scaffold for soft tissue regeneration applications.
Keywords: 3D scaffolds, aerogels, biocomposites, tissue engineering
Structure Modification of Leonurine to Improve Its Potency as Aphrodisiac
Authors: Ruslin, R. E. Kartasasmita, M. S. Wibowo, S. Ibrahim
Abstract:
An aphrodisiac is a substance, contained in food or drugs, that arouses sexual instinct and increases pleasure; such substances are derived from plants, animals, and minerals, and consuming them can improve sexual instinct and duration. Leonurine has aphrodisiac activity; the compound can be isolated from plants of Leonurus sp., known to the Sundanese people as deundereman. This plant has empirical aphrodisiac activity, and isolation of its active compounds shows that it contains leonurine, so this compound is expected to account for the activity. Leonurine can be isolated from plants or synthesized chemically from syringic acid. Leonurine is also available commercially, and derivatives of this compound can be synthesized in an effort to increase its activity. This study aims to obtain leonurine derivatives with better aphrodisiac activity than the parent compound, by modifying the butyl guanidino ester group of leonurine with butylamine and bromoethanol. ArgusLab version 4.0.1 was used to determine the binding energy, hydrogen bonds, and amino acids involved in the interaction of the compounds with the PDE5 receptor. For the in vivo test of leonurine and its derivatives as aphrodisiacs and of their effect on testosterone levels, 27 male Wistar rats and 9 females of the same strain were used, aged about 12 weeks and weighing approximately 200 g each. The test animals were divided into 9 groups according to the compound and dose given. Each treatment group was orally administered 2 ml per day for 5 days. On the sixth day, male rat sexual behavior was observed, and blood was taken from the heart to measure testosterone levels using an ELISA technique.
Statistical analysis was performed using ANOVA with the Least Significant Difference (LSD) test in the Statistical Product and Service Solutions (SPSS) program. The aphrodisiac efficacy of leonurine and its derivatives was demonstrated both in silico and in vivo. In the in silico testing, the leonurine derivatives had smaller binding energies than leonurine, indicating better activity than the parent compound. In vivo testing in Wistar rats likewise showed better activity for the derivatives, so the in silico studies parallel the in vivo tests. Modification of the butyl guanidino ester group with butylamine and bromoethanol increased aphrodisiac activity compared to leonurine, and testosterone levels for the derivatives rose significantly, especially for compound 1-RD at doses of 100 and 150 mg/kg body weight. The results showed that leonurine and its derivatives have aphrodisiac activity and increase blood testosterone levels. The test compounds acted as steroid precursors, resulting in increased testosterone.
Keywords: aphrodisiac, erectile dysfunction, leonurine, 1-RD, 2-RD
Examinations of Sustainable Protection Possibilities against Granary Weevil (Sitophilus granarius L.) on Stored Products
Authors: F. Pal-Fam, R. Hoffmann, S. Keszthelyi
Abstract:
Granary weevil, Sitophilus granarius (L.) (Col.: Curculionidae), is a typical cosmopolitan pest. It can cause significant damage to stored grain and can drastically decrease yields. Damaged grain has reduced nutritional and market value, weaker germination, and reduced weight. The commonly used protectants against stored-product pests in Europe are residual insecticides applied directly to the product. Unfortunately, these pesticides can be toxic to mammals, their residues can accumulate in the treated products, and many pest species can become resistant to them. During recent years, alternative methods of grain protection have received increased attention and are considered the most promising alternatives to residual insecticides. The aims of our comparative study were to obtain information about the efficacy of 1. diatomaceous earth, 2. sterile insect technology, and 3. herbal oils against S. granarius on grain (foremost maize), and to evaluate the influence of dose rate on weevil mortality and progeny. The main results of our laboratory experiments are as follows: 1. Diatomaceous earth was especially efficacious against S. granarius, but its insecticidal properties depend on exposure time and applied dose. Its efficacy on barley was better than on maize. The mortality value at the highest dose was 85% on the 21st day in the case of barley. Complete elimination of progeny was evidenced on both grain types. To summarize, a satisfactory efficacy level was obtained only on barley, at a rate of 4 g/kg. The variation in efficacy between grain types can be explained by differences in grain surface. 2. The mortality effect of Roentgen irradiation on S. granarius was highly influenced by the exposure time and the dose applied. At doses of 50 and 70 Gy, the efficacy accepted in plant protection (mortality: 95%) was recorded only on the 21st day.
During the application of 100 and 200 Gy doses, high mortality values (83.5% and 97.5%) were observed on the 14th day. Our results confirmed the complete sterilizing effect of doses of 70 Gy and above. The autocide effect of the 50 and 70 Gy doses was demonstrated when irradiated specimens were mixed into groups of fertile specimens. Consequently, these doses might be successfully applied to put the sterile insect technique (SIT) into practice. 3. The results revealed that both studied essential oils (Calendula officinalis, Hippophae rhamnoides) exerted a strong toxic effect on S. granarius, but C. officinalis triggered higher mortality. An efficacy of 94.62 ± 2.63% was reached after a 48-hour exposure to H. rhamnoides oil at 2 ml/kg, while the application of 2 ml/kg of C. officinalis oil for 24 hours produced a 98.94 ± 1.00% mortality rate. Mortality was 100% at 5 ml/kg of H. rhamnoides after 24 hours of application, while with C. officinalis the same value was reached after a 12-hour exposure to the oil. Both essential oils applied eliminated the progeny.
Keywords: Sitophilus granarius, stored product, protection, alternative solutions
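Mortality percentages in stored-product bioassays such as these are typically reported after correcting for natural deaths in untreated controls; a minimal sketch of Abbott's correction, the standard adjustment, is shown below (the control mortality of 5% is an illustrative assumption, not a value from the study):

```python
def abbott_corrected_mortality(treated_pct, control_pct):
    """Abbott's formula: treatment mortality adjusted for natural (control) mortality."""
    return 100.0 * (treated_pct - control_pct) / (100.0 - control_pct)

# e.g. 85% observed mortality with an assumed 5% control mortality
corrected = abbott_corrected_mortality(85.0, 5.0)
print(round(corrected, 2))  # 84.21
```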
Procedia PDF Downloads 170
328 Enhancing Industrial Wastewater Treatment: Efficacy and Optimization of Ultrasound-Assisted Laccase Immobilized on Magnetic Fe₃O₄ Nanoparticles
Authors: K. Verma, V. S. Moholkar
Abstract:
In developed countries, water pollution caused by industrial discharge has emerged as a significant environmental concern over the past decades. However, despite ongoing efforts, a fully effective and sustainable remediation strategy has yet to be identified. This paper describes how enzymatic and sonochemical treatments have demonstrated great promise in degrading bio-refractory pollutants. In particular, a compelling area of interest lies in the combined technique of sono-enzymatic treatment, which has exhibited a synergistic enhancement effect surpassing that of the individual techniques. This study employed the covalent attachment method to immobilize laccase from Trametes versicolor onto amino-functionalized magnetic Fe₃O₄ nanoparticles. To comprehensively characterize the synthesized free nanoparticles and the laccase-immobilized nanoparticles, various techniques such as X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), vibrating sample magnetometry (VSM), and Brunauer-Emmett-Teller (BET) surface area analysis were employed. The size of the immobilized Fe₃O₄@Laccase was found to be 60 nm, and the maximum loading of laccase was found to be 24 mg/g of nanoparticle. An investigation was conducted to study the effect of various process parameters, such as immobilized Fe₃O₄@Laccase dose, temperature, and pH, on % chemical oxygen demand (COD) removal as a response. The statistical design pinpointed the optimum conditions (immobilized Fe₃O₄@Laccase dose = 1.46 g/L, pH = 4.5, and temperature = 66 °C), resulting in a remarkable 65.58% COD removal within 60 minutes. An even more significant improvement (90.31% COD removal) was achieved with an ultrasound-assisted enzymatic reaction utilizing a 10% duty cycle. 
The investigation of various kinetic models for free and immobilized laccase, such as the Haldane, Yano and Koga, and Michaelis-Menten models, showed that ultrasound application impacted the kinetic parameters Vmax and Km. Specifically, Vmax values for free and immobilized laccase were found to be 0.021 mg/L min and 0.045 mg/L min, respectively, while Km values were 147.2 mg/L for free laccase and 136.46 mg/L for immobilized laccase. The lower Km and higher Vmax for immobilized laccase indicate its enhanced affinity towards the substrate, likely due to ultrasound-induced alterations in the enzyme's conformation and increased exposure of active sites, leading to more efficient degradation. Furthermore, toxicity and liquid chromatography-mass spectrometry (LC-MS) analyses revealed that after the treatment process, the wastewater exhibited 70% less toxicity than before treatment, with over 25 compounds degraded by more than 75%. Finally, the prepared immobilized laccase had excellent recyclability, retaining 70% activity over 6 consecutive cycles. A straightforward manufacturing strategy and outstanding performance make the recyclable magnetic immobilized laccase (Fe₃O₄@Laccase) an up-and-coming option for various environmental applications, particularly in water pollution control and treatment.
Keywords: kinetic, laccase enzyme, sonoenzymatic, ultrasound irradiation
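The reported kinetic parameters can be plugged into the Michaelis-Menten rate law to compare free and immobilized laccase at a given substrate concentration; a short sketch using the Vmax and Km values quoted above (the substrate concentration of 100 mg/L is an arbitrary illustration):

```python
def michaelis_menten_rate(vmax, km, substrate):
    """Reaction rate v = Vmax * [S] / (Km + [S])."""
    return vmax * substrate / (km + substrate)

s = 100.0  # substrate concentration in mg/L (illustrative)
v_free = michaelis_menten_rate(0.021, 147.20, s)  # free laccase
v_imm  = michaelis_menten_rate(0.045, 136.46, s)  # immobilized laccase
print(v_imm > v_free)  # True: immobilized laccase degrades faster at this [S]
```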
327 Exploratory Study on Mediating Role of Commitment-to-Change in Relations between Employee Voice, Employee Involvement and Organizational Change Readiness
Authors: Rohini Sharma, Chandan Kumar Sahoo, Rama Krishna Gupta Potnuru
Abstract:
Strong competitive forces and requirements to achieve efficiency are forcing organizations to realize the necessity and inevitability of change. What's more, the trend does not appear to be abating. Researchers have estimated that about two-thirds of change projects fail. Empirical evidence further shows that organizations invest significantly in planned change but account for the people side only in a token or instrumental way, which is identified as one of the important reasons why change endeavours fail. Whatever the reason for change, however, organizational change readiness must be gauged prior to the institutionalization of organizational change. Hence, in this study the influence of employee voice and employee involvement on organizational change readiness via commitment-to-change is examined, as it is an area yet to be extensively studied. Also, though a recent study has investigated the interrelationship between leadership, organizational change readiness and commitment-to-change, our study further examined these constructs in relation to employee voice and employee involvement, which play a consequential role in organizational change readiness. Further, an integrated conceptual model weaving varied concepts relating to organizational readiness, with focus on commitment-to-change as mediator, was found to be an area requiring more theorizing and empirical validation, and this study, rooted in an Indian public sector organization, is a step in this direction. Data for the study were collected through a survey among employees of Rourkela Steel Plant (RSP), a unit of Steel Authority of India Limited (SAIL) and the first integrated steel plant in the public sector in India, for which a stratified random sampling method was adopted. The schedule was distributed to around 700 employees, out of which 516 complete responses were obtained. Pre-validated scales were used for the study. 
All the variables in the study were measured on a five-point Likert scale ranging from “strongly disagree (1)” to “strongly agree (5)”. Structural equation modeling (SEM) using AMOS 22 was used to examine the hypothesized model, which offers a simultaneous test of an entire system of variables in a model. The study results show that the inter-relationships between employee voice and commitment-to-change, between employee involvement and commitment-to-change, and between commitment-to-change and organizational change readiness were significant. To test the mediation hypotheses, Baron and Kenny's technique was used. Examination of the direct and mediated effects confirmed that commitment-to-change partially mediated the relation between employee involvement and organizational change readiness, whereas it did not mediate the relation between employee voice and organizational change readiness. The empirical exploration therefore establishes that it is important to harness employees' valuable suggestions regarding change for building organizational change readiness. Regarding employee involvement, it was found that sharing information and involving people in decision-making lead to the creation of a participative climate, which educes employee commitment during change, and commitment-to-change further fosters organizational change readiness.
Keywords: commitment-to-change, change management, employee voice, employee involvement, organizational change readiness
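Baron and Kenny's technique tests mediation through a series of regressions: the total effect of X on Y, the effect of X on the mediator M, and the effect of X on Y controlling for M; partial mediation is indicated when the direct effect shrinks but remains nonzero. A minimal sketch on synthetic data (the data-generating coefficients are invented for illustration, not estimates from the study):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)                                 # e.g. employee involvement
m = 0.6 * x + rng.normal(scale=0.5, size=n)            # commitment-to-change (mediator)
y = 0.4 * m + 0.3 * x + rng.normal(scale=0.5, size=n)  # organizational change readiness

def ols_slopes(X, y):
    """Least-squares slopes, with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), np.atleast_2d(X.T).T])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

c = ols_slopes(x, y)[0]                              # step 1: total effect X -> Y
a = ols_slopes(x, m)[0]                              # step 2: X -> M
b, c_prime = ols_slopes(np.column_stack([m, x]), y)  # step 3: M and X jointly -> Y

# partial mediation: direct effect smaller than total effect, both nonzero
print(abs(c_prime) < abs(c))  # True for this synthetic data
```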
326 Statistical Optimization of Adsorption of a Harmful Dye from Aqueous Solution
Abstract:
Textile industries cater to varied customer preferences and contribute substantially to the economy. However, these industries also produce a considerable amount of effluents. Prominent among these are the azo dyes, which impart considerable color and toxicity even at low concentrations. Azo dyes are also used as coloring agents in the food and pharmaceutical industries. Despite their applications, azo dyes are also notorious pollutants and carcinogens. Popular techniques like photo-degradation, biodegradation and the use of oxidizing agents are not applicable for all kinds of dyes, as most of them are stable to these techniques. Chemical coagulation produces a large amount of toxic sludge, which is undesirable, and is also ineffective towards a number of dyes. Most azo dyes are stable to UV-visible light irradiation and may even resist aerobic degradation. Adsorption has been the most preferred technique owing to its low cost, high capacity and process efficiency, and the possibility of regenerating and recycling the adsorbent. Adsorption is also preferred because it may produce high-quality treated effluent and is able to remove different kinds of dyes. However, the adsorption process is influenced by many variables whose inter-dependence makes it difficult to identify optimum conditions. The variables include stirring speed, temperature, initial concentration and adsorbent dosage. Further, the internal diffusional resistance inside the adsorbent particle leads to slow uptake of the solute within the adsorbent. Hence, it is necessary to identify optimum conditions that lead to high capacity and uptake rate of these pollutants. In this work, commercially available activated carbon was chosen as the adsorbent owing to its high surface area. A typical azo dye found in textile effluent waters, the monoazo Acid Orange 10 dye (CAS: 1936-15-8), was chosen as the representative pollutant. 
Adsorption studies were mainly focused on obtaining equilibrium and kinetic data for the batch adsorption process at different process conditions. Studies were conducted at different stirring speed, temperature, adsorbent dosage and initial dye concentration settings. The full factorial design was the chosen statistical framework for carrying out the experiments and identifying the important factors and their interactions. The optimum conditions identified from the experimental model were validated with actual experiments at the recommended settings. The equilibrium and kinetic data obtained were fitted to different models and the model parameters were estimated, giving more insight into the nature of the adsorption taking place. Critical data required to design batch adsorption systems for removal of Acid Orange 10 dye and identification of factors that critically influence the separation efficiency are the key outcomes from this research.
Keywords: acid orange 10, activated carbon, optimum adsorption conditions, statistical design
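Equilibrium data of this kind are commonly fitted to isotherm models such as the Langmuir equation qe = qmax·KL·Ce / (1 + KL·Ce); a minimal fitting sketch with synthetic data (all numbers below are invented for illustration and do not reproduce the study's fitted parameters):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# synthetic equilibrium data: Ce in mg/L, qe in mg/g (illustrative)
ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
qe = langmuir(ce, 120.0, 0.05) + np.array([0.5, -0.8, 1.0, -0.5, 0.3, -0.2])

popt, _ = curve_fit(langmuir, ce, qe, p0=[100.0, 0.01])
qmax_fit, kl_fit = popt  # recovered monolayer capacity and affinity constant
```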
325 Insights into Particle Dispersion, Agglomeration and Deposition in Turbulent Channel Flow
Authors: Mohammad Afkhami, Ali Hassanpour, Michael Fairweather
Abstract:
The work described in this paper was undertaken to gain insight into fundamental aspects of turbulent gas-particle flows with relevance to processes employed in a wide range of applications, such as oil and gas flow assurance in pipes, powder dispersion from dry powder inhalers, and particle resuspension in nuclear waste ponds, to name but a few. In particular, the influence of particle interaction and fluid phase behavior in turbulent flow on particle dispersion in a horizontal channel is investigated. The mathematical modeling technique used is based on the large eddy simulation (LES) methodology embodied in the commercial CFD code FLUENT, with flow solutions provided by this approach coupled to a second commercial code, EDEM, based on the discrete element method (DEM) which is used for the prediction of particle motion and interaction. The results generated by LES for the fluid phase have been validated against direct numerical simulations (DNS) for three different channel flows with shear Reynolds numbers, Reτ = 150, 300 and 590. Overall, the LES shows good agreement, with mean velocities and normal and shear stresses matching those of the DNS in both magnitude and position. The research work has focused on the prediction of those conditions favoring particle aggregation and deposition within turbulent flows. Simulations have been carried out to investigate the effects of particle size, density and concentration on particle agglomeration. Furthermore, particles with different surface properties have been simulated in three channel flows with different levels of flow turbulence, achieved by increasing the Reynolds number of the flow. The simulations mimic the conditions of two-phase, fluid-solid flows frequently encountered in domestic, commercial and industrial applications, for example, air conditioning and refrigeration units, heat exchangers, oil and gas suction and pressure lines. 
The particle sizes, densities, surface energies and volume fractions selected are 45.6, 102 and 150 µm; 250, 1000 and 2159 kg m⁻³; 50, 500 and 5000 mJ m⁻²; and 7.84 × 10⁻⁶, 2.8 × 10⁻⁵ and 1 × 10⁻⁴, respectively; such particle properties are associated with particles found in soil, as well as metals and oxides prevalent in turbulent bounded fluid-solid flows due to erosion and corrosion of inner pipe walls. It has been found that the turbulence structure of the flow dominates the motion of the particles, creating particle-particle interactions, with most of these interactions taking place at locations close to the channel walls and in regions of high turbulence, where agglomeration is aided both by the high levels of turbulence and by the high concentration of particles. A positive relationship between particle surface energy, concentration, size and density, and agglomeration was observed. Moreover, the results derived for the three Reynolds numbers considered show that the rate of agglomeration of high surface energy particles is strongly influenced by, and increases with, the intensity of the flow turbulence. In contrast, for lower surface energy particles, the rate of agglomeration diminishes with an increase in flow turbulence intensity.
Keywords: agglomeration, channel flow, DEM, LES, turbulence
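Whether such particles follow or cross the turbulent structures is commonly characterized by the particle relaxation time τp = ρp·d²/(18 μ) and the corresponding Stokes number; a quick sketch for one of the particle classes above (the air properties and friction velocity are assumed values for illustration, not quantities from the study):

```python
rho_p = 1000.0  # particle density, kg/m^3 (one of the studied classes)
d     = 102e-6  # particle diameter, m (one of the studied classes)
mu    = 1.8e-5  # dynamic viscosity of air, Pa s (assumed)
nu    = 1.5e-5  # kinematic viscosity of air, m^2/s (assumed)
u_tau = 0.3     # friction velocity, m/s (assumed, sets the wall time scale)

tau_p = rho_p * d**2 / (18.0 * mu)  # Stokes relaxation time of the particle
st_plus = tau_p * u_tau**2 / nu     # Stokes number in wall (viscous) units
```

A Stokes number well above unity indicates particles that decouple from the smallest turbulent eddies, which favors the inter-particle collisions described above.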
324 Hydroxyapatite Nanorods as Novel Fillers for Improving the Properties of PBSu
Authors: M. Nerantzaki, I. Koliakou, D. Bikiaris
Abstract:
This study evaluates the hypothesis that the incorporation of fibrous hydroxyapatite nanoparticles (nHA) with high crystallinity and high aspect ratio, synthesized by a hydrothermal method, into poly(butylene succinate) (PBSu) improves the bioactivity of the aliphatic polyester and affects new bone growth, inhibiting resorption and enhancing bone formation. Hydroxyapatite nanorods were synthesized using a simple hydrothermal procedure. First, the HPO₄²⁻-containing solution was added drop-wise into the Ca²⁺-containing solution, while the molar ratio of Ca/P was adjusted to 1.67. The HA precursor was then treated hydrothermally at 200 °C for 72 h. The resulting powder was characterized using XRD, FT-IR, TEM, and EDXA. Afterwards, PBSu nanocomposites containing 2.5 wt% nHA were prepared by an in situ polymerization technique for the first time and were examined as potential scaffolds for bone engineering applications. For comparison purposes, composites containing either 2.5 wt% micro-Bioglass (mBG) or 2.5 wt% mBG-nHA were prepared and studied, too. The composite scaffolds were characterized using SEM, FTIR, and XRD. Mechanical testing (Instron 3344) and contact angle measurements were also carried out. Enzymatic degradation was studied in an aqueous solution containing a mixture of R. oryzae and P. cepacia lipases at 37 °C and pH = 7.2. An in vitro biomineralization test was performed by immersing all samples in simulated body fluid (SBF) for 21 days. Biocompatibility was assessed using rat adipose stem cells (rASCs), genetically modified by nucleofection with DNA encoding SB100x transposase and pT2-Venus-neo transposon expression plasmids in order to attain fluorescence images. Cell proliferation and viability of cells on the scaffolds were evaluated using fluorescence microscopy and the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay. 
Finally, osteogenic differentiation was assessed by staining rASCs with alizarin red using the cetylpyridinium chloride (CPC) method. The TEM image of the fibrous HAp nanoparticles synthesized in the present study clearly showed the fibrous morphology of the synthesized powder. The addition of nHA decreased the contact angle of the samples significantly, indicating that the materials become more hydrophilic and hence absorb more water and subsequently degrade more rapidly. The in vitro biomineralization test confirmed that all samples were bioactive, as mineral deposits were detected by X-ray diffractometry after incubation in SBF. Metabolic activity of rASCs on all PBSu composites was high and increased from day 1 of culture to day 14. On day 28, the metabolic activity of rASCs cultured on samples enriched with bioceramics was significantly decreased due to possible differentiation of rASCs to osteoblasts. Staining rASCs with alizarin red after 28 days in culture confirmed our initial hypothesis, as the presence of calcium was detected, suggesting osteogenic differentiation of rASCs on the PBSu/nHAp/mBG 2.5% and PBSu/mBG 2.5% composite scaffolds.
Keywords: biomaterials, hydroxyapatite nanorods, poly(butylene succinate), scaffolds
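For the precursor solutions, fixing the Ca/P molar ratio at the stoichiometric hydroxyapatite value of 1.67 determines how much phosphate solution to add for a given amount of calcium; a trivial sketch (the 0.1 mol of Ca²⁺ is an arbitrary example quantity, not the study's batch size):

```python
CA_P_RATIO = 1.67  # stoichiometric Ca/P of hydroxyapatite, Ca10(PO4)6(OH)2

ca_moles = 0.1                     # moles of Ca2+ in solution (illustrative)
p_moles = ca_moles / CA_P_RATIO    # moles of HPO4^2- needed for Ca/P = 1.67
print(round(p_moles, 4))  # 0.0599
```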
323 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide
Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu
Abstract:
This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin film transistors (NWs JL-TFT) with nickel silicide contacts. For the nickel silicide film, this work uses two-step annealing to form an ultra-thin, uniform and low sheet resistance (Rs) Ni silicide film. The NWs JL-TFT with nickel silicide contacts exhibits good electrical properties, including a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In addition, this work also compares the electrical characteristics of NWs JL-TFT with nickel silicide and non-silicide contacts. Nickel silicide techniques are widely used for high-performance devices as devices scale down, because of the source/drain sheet resistance issue; the self-aligned silicide (salicide) technique is therefore employed to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, no bridging failure, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to this special doping profile, JL-TFT has advantages such as a lower thermal budget, which makes it easier to integrate with high-k/metal-gate than conventional MOSFETs (metal oxide semiconductor field-effect transistors), a longer effective channel length than conventional MOSFETs, and avoidance of complicated source/drain engineering. To solve the turn-off problem of the JL-TFT, an ultra-thin body (UTB) structure is needed to reach a fully depleted channel region in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. Therefore, this work demonstrates ultra-thin poly-Si nanowire junctionless thin film transistors with nickel silicide contacts. 
This work investigates the low-temperature formation of a nickel silicide layer by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. Notably, two-step annealing is used to form an ultra-thin, uniform and low sheet resistance (Rs) Ni silicide film. The first step promoted Ni diffusion through a thin interfacial amorphous layer, after which the unreacted metal was lifted off. The second annealing step lowered the sheet resistance and firmly merged the phase. The ultra-thin poly-Si nanowire junctionless thin film transistor (NWs JL-TFT) with nickel silicide contacts is demonstrated, revealing a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec., and low parasitic resistance. In the silicide film analysis, the second annealing step was applied to obtain a lower sheet resistance and a firmly merged silicide phase. In short, the NWs JL-TFT with nickel silicide contacts exhibits competitive short-channel behavior and improved drive current.
Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide
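The sheet resistance targeted by the two-step anneal is set by the resistivity and thickness of the final silicide film, Rs = ρ/t; a back-of-the-envelope sketch (the NiSi resistivity and film thickness below are typical literature-range assumptions, not measured values from this work):

```python
rho_nisi = 14e-8  # NiSi resistivity, ~14 uOhm*cm expressed in Ohm*m (assumed)
t_film   = 30e-9  # silicide film thickness in m (assumed, roughly 2x the 15 nm Ni)

rs = rho_nisi / t_film  # sheet resistance in Ohm per square
print(round(rs, 2))  # 4.67
```

The ratio shows why a thin but fully phase-merged film matters: any unreacted or high-resistivity phase left by an incomplete anneal raises the effective ρ and hence Rs.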
322 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta). 
The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
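The outlier handling described above (defining the parameter space of the training window and discarding prediction points that fall outside it) can be sketched with a simple per-feature z-score rule; this standalone sketch is illustrative only and is not FreqAI's API, which offers its own outlier-detection methods:

```python
import numpy as np

def filter_outliers(train_features, candidate_points, z_max=3.0):
    """Keep only candidate points lying within z_max standard deviations
    of the training window's mean in every feature dimension."""
    mu = train_features.mean(axis=0)
    sigma = train_features.std(axis=0) + 1e-12  # avoid division by zero
    z = np.abs((candidate_points - mu) / sigma)
    keep = (z <= z_max).all(axis=1)
    return candidate_points[keep], keep

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 4))            # recent training window
points = np.vstack([rng.normal(size=(5, 4)),  # in-distribution candidates
                    np.full((1, 4), 10.0)])   # an obvious outlier
inliers, mask = filter_outliers(train, points)
print(mask[-1])  # False: the outlier is dropped before prediction
```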
321 Training for Safe Tree Felling in the Forest with Symmetrical Collaborative Virtual Reality
Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti
Abstract:
The chainsaw remains one of the most common pieces of equipment used in forestry today for pruning, felling, and processing trees. However, chainsaw use is dangerous and accounts for one of the highest accident rates in both professional and non-professional work. Felling is proportionally the most dangerous phase, in both severity and frequency, because of the risk of being hit by the tree the operator wants to cut down. To avoid this, a correct sequence of chainsaw cuts must be taught for the different conditions of the tree. Virtual reality (VR) makes it possible to simulate chainsaw use virtually, without danger of injury. The limitations of existing applications are as follows. First, existing platforms are not symmetrically collaborative: the trainee is alone in virtual reality, while the trainer can only watch the virtual environment on a laptop or PC, which results in an inefficient teacher-learner relationship. Second, most applications only involve a virtual chainsaw, so the trainee cannot feel the real weight and inertia of an actual chainsaw. Finally, existing applications simulate only a few cases of tree felling. The objectives of this research were to implement and test a symmetrical collaborative training application based on VR and mixed reality (MR), with an overlap between real and virtual chainsaws in MR. The research and training platform was developed for the Meta Quest 2 head-mounted display. The application is based on the Unity 3D engine and the Presence Platform Interaction SDK (PPI-SDK) developed by Meta. PPI-SDK avoids the use of controllers and enables hand tracking and MR. With the combination of these two technologies, it was possible to overlay a virtual chainsaw on a real chainsaw in MR and synchronize their movements in VR. 
This ensures that the user feels the weight of the actual chainsaw, tightens the muscles, and performs the appropriate movements during the test, allowing the user to learn the correct body posture. The chainsaw works only if the right sequence of cuts is made to fell the tree. Contact detection is done by Unity's physics system, which allows the interaction of objects that simulate real-world behavior. Each cut of the chainsaw is defined by a so-called collider, and the felling of the tree can only occur if the colliders are activated in the right order, simulating a safe felling technique. In this way, the user can learn how to use the chainsaw safely. The system is also multiplayer, so the student and the instructor can experience VR together in a symmetrical and collaborative way. The platform simulates the following tree-felling situations with safe techniques: cutting a tree tilted forward, cutting a medium-sized tree tilted backward, cutting a large tree tilted backward, sectioning the trunk on the ground, and cutting branches. The application is being evaluated on a sample of university students through a dedicated questionnaire. The results are expected to assess both the increase in learning compared to a theoretical lecture and the immersion and telepresence of the platform.
Keywords: chainsaw, collaborative symmetric virtual reality, mixed reality, operator training
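The collider-ordering logic (the tree falls only when the cuts are registered in the safe order) can be sketched as a small state machine; the cut names and three-cut sequence below are illustrative stand-ins for the platform's actual collider setup:

```python
# hypothetical safe sequence for a forward-leaning tree
SAFE_SEQUENCE = ["notch_top_cut", "notch_bottom_cut", "felling_back_cut"]

class FellingSequence:
    """Tracks collider activations; the tree falls only on the full safe order."""
    def __init__(self, sequence):
        self.sequence = sequence
        self.progress = 0

    def register_cut(self, cut_name):
        if cut_name == self.sequence[self.progress]:
            self.progress += 1
        else:
            self.progress = 0  # wrong order: reset, the tree stays standing
        return self.progress == len(self.sequence)  # True -> tree falls

trainer = FellingSequence(SAFE_SEQUENCE)
print(trainer.register_cut("felling_back_cut"))  # False: unsafe first cut
print(trainer.register_cut("notch_top_cut"))     # False: sequence restarted
print(trainer.register_cut("notch_bottom_cut"))  # False
print(trainer.register_cut("felling_back_cut"))  # True: safe sequence completed
```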
320 Assessment of the Living Conditions of Female Inmates in Correctional Service Centres in South West Nigeria
Authors: Ayoola Adekunle Dada, Tolulope Omolola Fateropa
Abstract:
There is no gainsaying the fact that the Nigerian correctional services lack rehabilitation and reformation. Owing to this, many inmates, including females, become more emotionally bruised and hardened instead of coming out of prison reformed. Although female inmates constitute only a small percentage of inmates worldwide, the challenges resulting from women falling under the provision of the penal system have prompted official and humanitarian bodies to consider female inmates as vulnerable persons who need particular social work measures that meet their specific needs. Female inmates' condition may become worse in prison due to the absence of standard living conditions. A survey of 100 female inmates will be used to assess the living conditions of female inmates within the contexts in which they occur. Employing field methods from medical sociology and law, the study seeks to make use of the collaboration of both disciplines for a comprehensive understanding of the scenario. Its specific objectives encompass: (1) to examine access to and use of health facilities among the female inmates; (2) to examine the effect of officers'/warders' attitudes towards female inmates; (3) to investigate the perception of the female inmates towards the housing facilities in the centre; and (4) to investigate the feeding habits of the female inmates. Due to the exploratory nature of the study, the researchers will make use of a mixed-method approach: qualitative methods such as interviews will be undertaken to complement survey research (quantitative). By adopting the above-explained inter-method triangulation, the study will not only ensure that the advantages of both methods are exploited but will also fulfil the basic purposes of research. The sampling for this study will be purposive. The study aims at sampling two correctional centres (Ado Ekiti and Akure) in order to generate representative data for female inmates in South West Nigeria. 
In all, the total number of respondents will be 100. A cross-section of female inmates will be selected as respondents using a multi-stage sampling technique, and 100 questionnaires will be administered. Semi-structured (in-depth) interviews will be conducted among workers in the two selected correctional centres, respectively, to gain further insight into the living conditions of female inmates, which the survey may not readily elicit. These participants will be selected purposively with respect to their status in the organisation. Ethical issues in research on human subjects will be given due consideration. Such issues rest on the principles of beneficence, non-maleficence, autonomy/justice and confidentiality. In the final analysis, qualitative data will be analyzed using manual content analysis. Both descriptive and inferential statistics will be used for analytical purposes. Frequency, simple percentage, pie charts, bar charts, curves and cross-tabulations will form part of the descriptive analysis.
Keywords: assessment, health facilities, inmates, perception, living conditions
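With two centres and a fixed total of 100 respondents, proportional allocation across strata is one simple way to set per-centre quotas in a multi-stage design; a sketch (the stratum sizes below are hypothetical, since actual inmate populations per centre are not given):

```python
def proportional_allocation(strata_sizes, total_sample):
    """Allocate a fixed total sample across strata in proportion to stratum size."""
    population = sum(strata_sizes.values())
    return {name: round(total_sample * size / population)
            for name, size in strata_sizes.items()}

# hypothetical female-inmate populations per correctional centre
quotas = proportional_allocation({"Ado Ekiti": 120, "Akure": 80}, total_sample=100)
print(quotas)  # {'Ado Ekiti': 60, 'Akure': 40}
```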
319 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements
Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez-Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga
Abstract:
Logging-While-Drilling (LWD) is a technique to record down-hole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geo-steering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers. Thus, it is common practice to approximate the Earth's subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by the industry due to their high performance. However, they have major limitations, namely: (i) the analytical solution of the aforementioned system of ODEs exists only for piecewise constant resistivity distributions; for arbitrary resistivity distributions, the solution of the system of ODEs is unknown to date. (ii) In geo-steering, we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and the bed boundary positions) using a gradient-based inversion method, and thus we need to compute the corresponding derivatives. However, the analytical derivatives for cross-bedded formations and the analytical derivatives with respect to the bed boundary positions have not been published, to the best of our knowledge. The main contribution of this work is to overcome these limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method. 
The main idea is to divide our computations into two parts: (a) offline computations, which are independent of the tool positions, are precomputed only once, and are reused for all logging positions; and (b) online computations, which depend on the logging position. With this method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at negligible additional cost by using an adjoint-state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method through a wide range of numerical experiments, comparing the numerical solutions to semi-analytic ones where the latter are available.
Keywords: logging-while-drilling, resistivity measurements, multi-scale finite elements, Hankel transform
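The adjoint-state derivative computation described above can be illustrated on a toy problem. This is a minimal sketch only: the tiny symmetric tridiagonal system standing in for one discretized Hankel-mode 1D model, and every name and value below, are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def assemble(m):
    """A(m) u = b: small symmetric system whose entries depend on the
    per-layer parameters m (a stand-in for layer resistivities)."""
    A = np.diag(2.0 * m)
    A -= 0.5 * np.diag(m[:-1], 1) + 0.5 * np.diag(m[:-1], -1)
    return A

def d_assemble(i, n):
    """Exact derivative dA/dm_i of the assembly above."""
    dA = np.zeros((n, n))
    dA[i, i] = 2.0
    if i < n - 1:
        dA[i, i + 1] = dA[i + 1, i] = -0.5
    return dA

def misfit_and_gradient(m, b, d):
    A = assemble(m)
    u = np.linalg.solve(A, b)          # forward ("online") solve
    r = u - d                          # residual against data d
    J = 0.5 * r @ r
    lam = np.linalg.solve(A.T, r)      # a single extra adjoint solve
    # dJ/dm_i = -lambda^T (dA/dm_i) u : one cheap product per variable
    g = np.array([-(lam @ (d_assemble(i, len(m)) @ u))
                  for i in range(len(m))])
    return J, g

m = np.array([1.0, 2.0, 3.0])
b = np.ones(3)
d = np.zeros(3)
J, g = misfit_and_gradient(m, b, d)
```

The key point, matching the claim in the abstract, is that all derivatives come from one extra solve with the transposed system, independent of the number of inversion variables.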
Procedia PDF Downloads 387
318 Unlocking New Room of Production in Brown Field; Integration of Geological Data Conditioned 3D Reservoir Modelling of Lower Senonian Matulla Formation, Ras Budran Field, East Central Gulf of Suez, Egypt
Authors: Nader Mohamed
Abstract:
The Late Cretaceous deposits are well developed throughout Egypt. This is due to a transgression phase associated with the subsidence caused by the neo-Tethyan rift event that took place across the northern margin of Africa, resulting in a period of dominantly marine deposits in the Gulf of Suez. The Late Cretaceous Nezzazat Group represents the Cenomanian, Turonian and clastic sediments of the Lower Senonian. The Nezzazat Group has been divided into four formations, namely, from base to top, the Raha Formation, the Abu Qada Formation, the Wata Formation and the Matulla Formation. The Cenomanian Raha and the Lower Senonian Matulla formations are the most important clastic sequence in the Nezzazat Group because they provide the highest net reservoir thickness and the highest net/gross ratio. This study focuses on the Matulla Formation in the eastern part of the Gulf of Suez. The three stratigraphic surface sections (Wadi Sudr, Wadi Matulla and Gabal Nezzazat), which represent the exposed Coniacian-Santonian sediments in Sinai, are used for correlating the Matulla sediments of the Ras Budran field. Cutting descriptions, petrographic examination, log behavior and biostratigraphy, together with the outcrops, are used to identify the reservoir characteristics, lithology and facies environments, and to subdivide the Matulla Formation into three units. The lower unit is believed to be the main reservoir, as it consists mainly of sands with shale and sandy carbonates, while the other units are mainly carbonate with some streaks of shale and sand. Reservoir modeling is an effective technique that assists in reservoir management, as decisions concern the development and depletion of hydrocarbon reserves, so it was essential to model the Matulla reservoir as accurately as possible in order to better evaluate and calculate the reserves and to determine the most effective way of recovering as much of the petroleum as economically as possible.
All available data on the Matulla Formation are used to build the reservoir structure model and the lithofacies, porosity, permeability and water saturation models, which are the main parameters that describe the reservoir and provide information for effectively evaluating the need to develop its oil potential. This study has shown the effectiveness of: 1) the integration of geological data to evaluate and subdivide the Matulla Formation into three units; 2) lithology and facies environment interpretation, which helped in defining the nature of deposition of the Matulla Formation; 3) 3D reservoir modeling technology as a tool for adequately understanding the spatial distribution of properties and, in addition, for evaluating the unlocked new reservoir areas of the Matulla Formation, which have to be drilled to investigate and exploit the un-drained oil; 4) adding a new room of production and additional reserves to the Ras Budran field.
Keywords: geology, oil and gas, geoscience, sequence stratigraphy
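Where the abstract mentions calculating reserves from the property models, the standard volumetric formula can be sketched. All input values below are invented for illustration and are not from the Ras Budran study.

```python
def stoiip_stb(grv_m3, ntg, porosity, sw, bo):
    """Stock-tank oil initially in place, in stock-tank barrels:
    STOIIP = GRV * N/G * phi * (1 - Sw) / Bo, with 1 m3 = 6.2898 bbl."""
    return grv_m3 * ntg * porosity * (1.0 - sw) / bo * 6.2898

# Hypothetical inputs: 50e6 m3 gross rock volume, 60% net-to-gross,
# 18% porosity, 35% water saturation, Bo = 1.2 rb/stb.
oil_in_place = stoiip_stb(50e6, 0.60, 0.18, 0.35, 1.2)
print(round(oil_in_place / 1e6, 1), "MMstb")
```

The structure, net-to-gross, porosity and water saturation models described in the abstract each supply one factor of this product, which is why their spatial distributions drive the reserves estimate.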
Procedia PDF Downloads 106
317 Health and Performance Fitness Assessment of Adolescents in Middle Income Schools in Lagos State
Authors: Onabajo Paul
Abstract:
The testing and assessment of physical fitness of school-aged adolescents in Nigeria has been going on for several decades. Originally, these tests focused strictly on identifying health and physical fitness status and comparing the results of adolescents with others. There is considerable interest in the health and performance fitness of adolescents in which attained results are compared with criteria representing positive health rather than simply with the scores of others. Although the physical education program is studied in secondary schools and physical activities are encouraged, regular assessment of students' fitness levels and health status appears to be scarce or not done in these schools. The purpose of the study was to assess the health and performance fitness of adolescents in middle-income schools in Lagos State. A total of 150 students were selected using the simple random sampling technique. Participants were measured on hand grip strength, sit-ups, the PACER 20-meter shuttle run, standing long jump, weight and height. The data collected were analyzed with descriptive statistics of means, standard deviations and ranges, and compared with fitness norms. According to the normative standard of the Body Mass Index test, the majority, 111 (74.0%), of the adolescents achieved the healthy fitness zone, 33 (22.0%) were very lean, and 6 (4.0%) needed improvement. For muscular strength, the majority, 78 (52.0%), were weak, 66 (44.0%) were normal, and 6 (4.0%) were strong according to the normative standard of the hand-grip strength test. For aerobic capacity fitness, the majority, 93 (62.0%), were in the needs-improvement health-risk zone, 36 (24.0%) achieved the healthy fitness zone, and 21 (14.0%) needed improvement according to the normative standard of the PACER test.
The largest group, 48 (32.0%), of the participants had good hip flexibility, 38 (25.3%) had fair status, 27 (18.0%) needed improvement, 24 (16.0%) had very good hip flexibility status, and 13 (8.7%) had excellent status. According to the normative standard of the sit-up test, the largest group, 61 (40.7%), had average muscular endurance status, 30 (20.0%) had poor status, 29 (19.3%) had good status, 28 (18.7%) had fair status, and 2 (1.3%) had excellent status. For jump ability, 52 (34.7%) had low fitness, 47 (31.3%) had marginal fitness, 31 (20.7%) had good fitness, and 20 (13.3%) had high performance fitness according to the normative standard of the standing long jump test. Based on the findings, it was concluded that the majority of the adolescents had good Body Mass Index status and performed well in both the hip flexibility and muscular endurance tests, whereas the majority performed poorly in the aerobic capacity, muscular strength and jump ability tests. It was recommended that, to enhance wellness, adolescents should be involved in physical activities and recreation lasting 30 minutes three times a week. Schools should run fitness programs for students on a regular basis in both senior and junior classes so as to develop good cardio-respiratory and muscular fitness and improve the overall health of the students.
Keywords: adolescents, health-related fitness, performance-related fitness, physical fitness
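The BMI component of the assessment can be sketched as follows. The cut-off values below are generic illustrative thresholds, not the age- and sex-specific normative standards the study actually applied.

```python
def bmi(weight_kg, height_m):
    # Body Mass Index: weight (kg) divided by height (m) squared
    return weight_kg / height_m ** 2

def bmi_zone(value, lean_below=18.5, needs_improvement_at=25.0):
    # Hypothetical three-way split mirroring the study's categories.
    if value < lean_below:
        return "very lean"
    if value < needs_improvement_at:
        return "healthy fitness zone"
    return "needs improvement"

sample = [(52.0, 1.60), (45.0, 1.72), (80.0, 1.65)]  # (kg, m) pairs
zones = [bmi_zone(bmi(w, h)) for w, h in sample]
print(zones)
```

In the study itself, each participant's BMI would be compared against an age- and sex-matched norm table rather than fixed adult thresholds.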
Procedia PDF Downloads 354
316 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays
Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner
Abstract:
Early immobility affects motor, cognitive and social development. Current pediatric rehabilitation lacks the technology to provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots to early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early intervention paradigm in which non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment was developed in which both the child and the robot interact with and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate and ability-matched for the child to perform; 2) automatic functions that perceive the child's actions through novel activity recognition algorithms and decide appropriate actions for the robot; and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real time. The environment was tested by bringing in a two-year-old boy with Down syndrome for eight sessions. The child presented delays throughout his motor development, the current one being in the acquisition of walking. During the sessions, the child performed physical activities that required complex motor actions (e.g. climbing an inclined platform and/or a staircase). During these activities, a (wheeled or humanoid) robot was either performing the action or was at its end point 'signaling' for interaction. From these sessions, information was gathered to develop algorithms to automate the perception of the activities on which the robot bases its actions. A Markov Decision Process (MDP) is used to model the intentions of the child. A 'smoothing' technique is used to help identify the model's parameters, a critical step when dealing with small data sets such as in this paradigm.
The child engaged in all activities and socially interacted with the robot across sessions. With time, the child's mobility increased, and the frequency and duration of complex and independent motor actions also increased (e.g. taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction: smoothing facilitates learning MDP parameters from small data sets. The paradigm is feasible and provides insight into how social interaction may elicit mobility actions, suggesting a new early intervention paradigm for very young children with motor disabilities. Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.
Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation
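The smoothing step for learning MDP parameters from small data sets can be illustrated with additive (Laplace) smoothing of transition counts. The states, action name, and the choice of additive smoothing are assumptions for illustration; the abstract does not specify the exact technique used.

```python
from collections import Counter

STATES = ["resting", "crawling", "climbing"]

def smoothed_transitions(events, action, alpha=1.0):
    """Estimate P(s' | s, action) from (state, action, next_state)
    events with add-alpha smoothing, so unseen transitions keep a
    small nonzero probability instead of collapsing to zero."""
    table = {}
    for s in STATES:
        counts = Counter(s2 for (s1, a, s2) in events
                         if s1 == s and a == action)
        total = sum(counts.values()) + alpha * len(STATES)
        table[s] = {s2: (counts[s2] + alpha) / total for s2 in STATES}
    return table

# A handful of hypothetical observed events, as in a small-data session.
events = [
    ("resting", "robot_signals", "crawling"),
    ("resting", "robot_signals", "crawling"),
    ("crawling", "robot_signals", "climbing"),
]
P = smoothed_transitions(events, "robot_signals")
print(P["resting"]["crawling"])
```

With only three observed events, the unsmoothed estimate would assign probability zero to most transitions; smoothing keeps the MDP well defined, which is the role the abstract attributes to it.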
Procedia PDF Downloads 294
315 Subcutaneous Isosulfan Blue Administration May Interfere with Pulse Oximetry
Authors: Esra Yuksel, Dilek Duman, Levent Yeniay, Sezgin Ulukaya
Abstract:
Sentinel lymph node biopsy (SLNB) is a minimally invasive technique with lower morbidity for axillary staging of breast cancer. Isosulfan blue stain is frequently used in SLNB and regarded as safe. The present case report aimed to describe a severe decrement in SpO2 following isosulfan blue administration, as well as skin and urine signs inconsistent with the clinical picture, in a 67-year-old, 77 kg, ASA II female patient who underwent SLNB under general anesthesia. Ten minutes after subcutaneous administration by the surgeons of 10 ml of 1% isosulfan blue, in a patient who was hemodynamically stable, SpO2 first fell from 99% to 87%, and then to 75% within minutes despite 100% oxygen support. Meanwhile, blood pressure and EtCO2 monitoring were unremarkable. After verifying that the anesthesia device worked normally, that airway pressure had not increased and that the endotracheal tube was placed accurately, a blood sample was taken from the patient for arterial gas analysis. A severe increase in MetHb concentration was suspected, since SpO2 persisted at 75% although the concentration of inspired oxygen was 100%, and a solution of 2500 mg ascorbic acid in 500 ml of 5% dextrose was given intravenously until the results of the arterial blood gas analysis were obtained. However, the arterial blood gas results were as follows: pH: 7.54, PaCO2: 23.3 mmHg, PaO2: 281 mmHg, SaO2: 99%, and MetHb: 2.7%. Biochemical analysis revealed a blood MetHb concentration of 2%. However, since the arterial blood gas parameters were good, the patient's hemodynamics were stable and the methemoglobin concentration was not so high, the patient was extubated after surgery once she was relaxed, cooperative and had adequate respiration. Despite the absence of respiratory or neurological distress, the SpO2 value increased only up to 85% within 2 hours with 5 L/min oxygen support via face mask in the operating room after the patient was extubated.
At that time, the skin of particularly the upper part of her body had turned blue, most remarkably on the face. The plasma of the blood taken from the patient for biochemical analysis was blue. The urine coming through the urinary catheter placed in the intensive care unit was also blue. Twelve hours after 5 L/min oxygen inhalation via a mask, the SpO2 reached 90%. During monitoring in the intensive care unit on postoperative day 1, the patient's facial and urine color were still blue, SpO2 was 92%, and arterial blood gas levels were as follows: pH: 7.44, PaO2: 76.1 mmHg, PaCO2: 38.2 mmHg, SaO2: 99%, and MetHb: 1%. During monitoring in the clinic on postoperative day 2, SpO2 was 95% without oxygen support, and her facial and urine color returned to normal. The patient was discharged on day 3 without any problem. In conclusion, SLNB is a less invasive alternative to axillary dissection. However, a false pulse oximeter reading due to pigment interference is a rare complication of this procedure. Arterial blood gas analysis should be used to confirm any fall in the SpO2 reading during monitoring.
Keywords: isosulfan blue, pulse oximetry, SLNB, methemoglobinemia
Procedia PDF Downloads 315
314 Decision-Making, Expectations and Life Project in Dependent Adults Due to Disability
Authors: Julia Córdoba
Abstract:
People are not completely autonomous, as we live in society; therefore, people could be defined as relationally dependent. The lack, decrease or loss of physical, psychological and/or social interdependence due to a disability situation is known as dependence. This is related to the need for help from another person in order to carry out activities of daily living. This population group lives with major social limitations that significantly reduce their participation and autonomy, and experiences high levels of stigma and invisibility both in private environments (family and close networks) and in the public sphere (environment, community). The importance of this study lies in the fact that the lack of support and adjustments leads to what authors call the circle of exclusion. This circle describes how not accessing services, due to the difficulties caused by the disability situation, has biological, social and psychological impacts, producing higher levels of exclusion and vulnerability. This study focuses on the process of autonomy and dependence of adults with disability, based on the model of disability proposed by the International Classification of Functioning, Disability and Health (ICF). The objectives are: i) to describe the relationship between autonomy and dependence on the basis of socio-health variables, and ii) to determine the relationship between the situation of autonomy and dependence and the expectations and interests of the participants. We propose a study using a survey technique through a previously validated virtual questionnaire. The data obtained will be analyzed using quantitative and qualitative methods to detail the profiles obtained. At least 200 questionnaires will be administered to people between 18 and 64 years of age who self-identify as having some degree of dependency due to disability. For the analysis of the results, the two main variables of autonomy and dependence will be considered.
Socio-demographic variables such as age, gender identity, area of residence and family composition will be used. In relation to the biological dimension of the situation, the diagnosis, if any, and the type of disability will be asked about. For the description of the profiles of autonomy and dependence, the following variables will be used: self-perception, decision-making, interests, expectations and life project, care of one's health condition, support and social network, and labor and educational inclusion. The relationship between the target population and the variables collected provides several guidelines that could form the basis for other research of interest in terms of self-perception, autonomy and dependence. The analysis will identify the areas and situations in which people state that they have greater possibilities to decide and have a say. It will identify the social (networks and support, educational background), demographic (age, gender identity and residence) and health-related variables (diagnosis and type of disability, quality of care) that may be most strongly related to situations of dependency or autonomy. It will also be studied whether the level of autonomy and/or dependence has an impact on the type of expectations and interests of the people surveyed.
Keywords: life project, disability, inclusion, autonomy
Procedia PDF Downloads 68
313 Precise Determination of the Residual Stress Gradient in Composite Laminates Using a Configurable Numerical-Experimental Coupling Based on the Incremental Hole Drilling Method
Authors: A. S. Ibrahim Mamane, S. Giljean, M.-J. Pac, G. L’Hostis
Abstract:
Fiber reinforced composite laminates are particularly subject to residual stresses due to their heterogeneity and the complex chemical, mechanical and thermal mechanisms that occur during their processing. Residual stresses are now well known to cause damage accumulation, shape instability and behavior disturbance in composite parts. Many works exist in the literature on techniques for minimizing residual stresses, mainly in thermosetting and thermoplastic composites. To study in depth the influence of processing mechanisms on the formation of residual stresses, and to minimize them by establishing a reliable correlation, it is essential to be able to measure the profile of residual stresses in the composite very precisely. Residual stresses are important data to consider when sizing composite parts and predicting their behavior. The incremental hole drilling method is very effective for measuring the gradient of residual stresses in composite laminates. This method is semi-destructive and consists of drilling a hole incrementally through the thickness of the material and measuring the relaxation strains around the hole for each increment using three strain gauges. These strains are then converted into residual stresses using a matrix of coefficients. These coefficients, called calibration coefficients, depend on the diameter of the hole and the dimensions of the gauges used. The reliability of incremental hole drilling depends on the accuracy with which the calibration coefficients are determined. These coefficients are calculated using a finite element model. The samples' features and the experimental conditions must be considered in the simulation; any mismatch can lead to inadequate calibration coefficients, thus introducing errors in the residual stresses. Several calibration coefficient correction methods exist for isotropic materials, but there is a lack of information on this subject concerning composite laminates.
In this work, a Python program was developed to automatically generate the adequate finite element model. This model allowed us to perform a parametric study to assess the influence of experimental errors on the calibration coefficients. The results highlighted the sensitivity of the calibration coefficients to the considered errors and gave an order of magnitude of the precision required of the experimental device to obtain reliable measurements. On the basis of these results, improvements to the experimental device were proposed. Furthermore, a numerical method was proposed to correct the calibration coefficients for different types of materials, including thick composite parts for which the analytical approach is too complex. This method consists of taking the experimental errors into account in the simulation. Accurate measurement of the experimental errors (such as eccentricity of the hole, angular deviation of the gauges from their theoretical position, or errors in increment depth) is therefore necessary. The aim is to determine the residual stresses more precisely and to expand the validity domain of the incremental hole drilling technique.
Keywords: fiber reinforced composites, finite element simulation, incremental hole drilling method, numerical correction of the calibration coefficients, residual stresses
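The strain-to-stress conversion at the core of the incremental hole drilling method can be sketched as a small linear solve: the three gauge readings at one increment are mapped to the in-plane stress components through the calibration matrix. The coefficient and strain values below are invented for illustration; in practice the coefficients come from the finite element model discussed above.

```python
import numpy as np

# Hypothetical calibration matrix (3 gauges x 3 stress terms) relating
# (sigma_x, sigma_y, tau_xy) to the measured relaxation strains.
C = np.array([
    [-2.1e-7, -0.4e-7, -0.9e-7],
    [-1.2e-7, -1.2e-7,  0.3e-7],
    [-0.4e-7, -2.1e-7,  0.9e-7],
])

def stresses_from_strains(eps):
    """Solve C @ sigma = eps for the stresses at one increment."""
    return np.linalg.solve(C, eps)

measured = np.array([-9.3e-6, -4.8e-6, -1.5e-6])  # relaxation strains
sigma = stresses_from_strains(measured)
# Consistency check: mapping the stresses back reproduces the strains.
print(np.allclose(C @ sigma, measured))
```

Because the stresses are obtained by inverting C, any error in the calibration coefficients propagates directly into the computed residual stresses, which is why the abstract's sensitivity study matters.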
Procedia PDF Downloads 132
312 Diversity of Rhopalocera in Different Vegetation Types of PC Hills, Philippines
Authors: Sean E. Gregory P. Igano, Ranz Brendan D. Gabor, Baron Arthur M. Cabalona, Numeriano Amer E. Gutierrez
Abstract:
Distribution patterns and abundance of butterflies respond in the long term to variations in habitat quality, so studying butterfly populations provides evidence of how vegetation types influence their diversity. In this research, the Rhopalocera diversity of PC Hills was assessed to provide information on diversity trends in varying vegetation types. PC Hills, located in Palo, Leyte, Philippines, is a relatively undisturbed area with forests and rivers. Despite being situated near inhabited villages, the area is observed to have a potentially rich butterfly population. To assess Rhopalocera species richness and diversity, the transect sampling technique was applied to monitor and document butterflies. Transects were placed in locations that can be mapped, described and relocated easily. Three transects measuring three hundred meters each, with a 5-meter diameter, were established based on the different vegetation types present. The three main vegetation types identified were the agroecosystem (transect 1), dipterocarp forest (transect 2) and riparian (transect 3). Sample collections were done only from 9:00 A.M. to 3:00 P.M. under warm and bright weather, with no more than moderate winds and no rain; when weather conditions did not permit collection, it was moved to another day. A GPS receiver was used to record the locations of the selected sample sites and the coordinates at which each sample was collected. Morphological analysis was done in the first phase of the study to identify the voucher specimens to the lowest taxonomic level possible, using butterfly identification guides and species lists as references. In the second phase, DNA barcoding will be used to further identify the voucher specimens to the species level. After eight (8) sampling sessions, seven hundred forty-two (742) individuals were seen, and twenty-two (22) Rhopalocera genera were identified through morphological identification.
The Nymphalidae genus Ypthima and the Pieridae genera Eurema and Leptosia were the most dominant taxa observed. Twenty (20) of the thirty-one (31) voucher specimens have already been identified to the species level using DNA barcoding. The Shannon-Wiener index showed that the highest diversity was observed in the third transect (H' = 2.947), followed by the second transect (H' = 2.6317), with the lowest in the first transect (H' = 1.767). This indicates that butterflies are more likely to inhabit dipterocarp and riparian vegetation types than the agroecosystem, which influences their species composition and diversity. Moreover, the presence of a river in the riparian vegetation supported its diversity value, since butterflies tend to fly in areas near rivers. Species identification of the remaining voucher specimens will be done in order to compute the overall species richness of PC Hills. Further butterfly sampling sessions in PC Hills are recommended for a more reliable diversity trend and to discover more butterfly species. Expanding the research by assessing Rhopalocera diversity in other locations should be considered, along with studying factors other than vegetation type that affect butterfly species composition.
Keywords: distribution patterns, DNA barcoding, morphological analysis, Rhopalocera
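The Shannon-Wiener index used to compare the transects is H' = -Σ p_i ln p_i, where p_i is the proportion of individuals belonging to taxon i. A minimal sketch with invented counts (not the PC Hills field data):

```python
import math
from collections import Counter

def shannon_wiener(counts):
    """H' = -sum(p_i * ln(p_i)) over taxa with nonzero counts."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

# Hypothetical individuals per genus observed along one transect.
transect = Counter({"Ypthima": 40, "Eurema": 25, "Leptosia": 20, "Others": 15})
H = shannon_wiener(transect.values())
print(round(H, 3))
```

H' increases with both the number of taxa and the evenness of their abundances, which is why the more even dipterocarp and riparian transects score higher than the Ypthima-dominated agroecosystem.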
Procedia PDF Downloads 155
311 Disability in the Course of a Chronic Disease: The Example of People Living with Multiple Sclerosis in Poland
Authors: Milena Trojanowska
Abstract:
Disability is a phenomenon whose meanings and definitions have evolved over the decades. This became the trigger to start a project to answer the question of what disability constitutes in the course of an incurable chronic disease. The chosen research group are people living with multiple sclerosis. The contextual phase of the research was participant observation at the Polish Multiple Sclerosis Society, the largest NGO in Poland supporting people living with MS and their relatives. The research techniques used in the project are (in order of implementation): group interviews with people living with MS and their relatives, narrative interviews, an asynchronous technique, and participant observation during events organised for people living with MS and their relatives. The researcher is currently conducting follow-up interviews, as inaccuracies in the respondents' narratives were identified during data analysis. Interviews and supplementary research techniques were used over the four years of the research, and the researcher also benefited from experience gained over 12 years of working with NGOs (diaries, notes). The research was carried out in Poland with the participation of people living in this country only. The research has been based on grounded theory methodology in the constructivist perspective developed by Kathy Charmaz. The goal was to follow the idea that research must be reliable, original, and useful. The aim was to construct an interpretive theory that assumes the temporality and processuality of social life. The Atlas.ti software was used to collect and analyse the research material.
It is a program from the CAQDAS (Computer-Assisted Qualitative Data Analysis Software) group. Several key factors influencing the construction of a disability identity by people living with multiple sclerosis were identified:
- the course of interaction with significant relatives,
- the expectation of identification with disability (expressed by close relatives),
- economic profitability (pension, allowances),
- institutional advantages (e.g. parking card),
- independence and autonomy (not equated with physical condition, but with access to adapted infrastructure and resources to support daily functioning),
- the way a person with MS construes the meaning of disability,
- physical and mental state,
- medical diagnosis of illness.
In addition, it has been shown that assuming the experience of disability in the course of MS is a form of cognitive reductionism leading to further phenomena, such as the expectation that the person with MS construct a social identity as a person with a disability (e.g. giving up work) and the occurrence of institutional inequalities. It can also be a determinant of the choice of a life strategy that limits social and individual functioning, even if this necessity is not influenced by the person's physical or psychological condition. The results of the research are important for the development of knowledge about the phenomenon of disability. They indicate the contextuality and complexity of the disability phenomenon, which in the light of the research is a set of different phenomena of heterogeneous nature and multifaceted causality. This knowledge can also be useful for institutions and organisations in the non-governmental sector supporting people with disabilities and people living with multiple sclerosis.
Keywords: disability, multiple sclerosis, grounded theory, Poland
Procedia PDF Downloads 108
310 Preparation of Activated Carbon From Waste Feedstock: Activation Variables Optimization and Influence
Authors: Oluwagbemi Victor Aladeokin
Abstract:
In the last decade, global peanut cultivation has seen increased demand, attributed to peanuts' health benefits, rising to ~41.4 MMT in 2019/2020. Peanut and other nutshells are considered waste in various parts of the world and are usually used only for their fuel value. However, this agricultural by-product can be converted into a higher-value product such as activated carbon. For many years, owing to its highly porous structure, activated carbon has been widely and effectively used as an adsorbent in the purification and separation of gases and liquids. Activated carbons used for commercial purposes are primarily made from a range of precursors such as wood, coconut shell, coal, bones, etc. However, due to difficulty in regeneration and high cost, various agricultural residues such as rice husk, corn stalks, apricot stones, almond shells, coffee beans, etc., have been explored to produce activated carbons. In the present study, the potential of peanut shells as precursors in the production of activated carbon, and their adsorption capacity, is investigated. Usually, precursors used to produce activated carbon have a carbon content above 45%; a typical raw peanut shell has 42 wt.% carbon content. To increase the yield, this study employed the chemical activation method using zinc chloride, which is well known for its effectiveness in increasing the porosity of porous carbonaceous materials. In chemical activation, activation temperature and impregnation ratio are the parameters commonly reported to be the most significant; however, this study also examined the influence of activation time on the development of activated carbon from peanut shells. Activated carbons are applied for different purposes; however, as their applications become more specific, understanding the influence of the activation variables, so as to better control the quality of the final product, becomes paramount.
A traditional approach to experimentally investigating the influence of the activation parameters involves varying one parameter at a time. However, a more efficient way to reduce the number of experimental runs is to apply design of experiments. One of the objectives of this study is to optimize the activation variables. Thus, this work employed the response surface methodology of design of experiments to study the interactions between the activation parameters (temperature, impregnation ratio and activation time) and consequently optimize them. The optimum activation conditions found were 485 °C (temperature), 15 min (activation time) and 1.7 (impregnation ratio). These conditions resulted in an activated carbon with a relatively high surface area of ca. 1700 m2/g, a 47% yield, relatively high density, low ash content and high fixed carbon content. Impregnation ratio and temperature were found to influence the final characteristics of the activated carbon produced from peanut shells the most. The results of this study, using the response surface methodology technique, have revealed the potential of, and the most significant parameters that influence, the chemical activation of peanut shells to produce activated carbon, which can find use in both liquid- and gas-phase adsorption applications.
Keywords: chemical activation, fixed carbon, impregnation ratio, optimum, surface area
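The response-surface step can be sketched as fitting a second-order polynomial to experimental runs and locating its optimum. The data below are synthetic, generated from a known quadratic whose maximum is placed, for illustration, at the reported optimum (485 °C, ratio 1.7); they are not the study's measurements, and only two of the three variables are shown.

```python
import numpy as np

def design_matrix(x1, x2):
    # Full second-order model: 1, x1, x2, x1^2, x2^2, x1*x2
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 ** 2, x2 ** 2, x1 * x2])

rng = np.random.default_rng(0)
temp = rng.uniform(400, 600, 30)     # activation temperature, deg C
ratio = rng.uniform(0.5, 3.0, 30)    # impregnation ratio
# Synthetic yield surface with its maximum at (485, 1.7) by construction.
y = 47 - 0.001 * (temp - 485) ** 2 - 5.0 * (ratio - 1.7) ** 2

# Least-squares fit of the quadratic response surface.
beta, *_ = np.linalg.lstsq(design_matrix(temp, ratio), y, rcond=None)

# Evaluate the fitted surface on a grid and take its maximizer.
T, R = np.meshgrid(np.linspace(400, 600, 201), np.linspace(0.5, 3.0, 251))
pred = design_matrix(T.ravel(), R.ravel()) @ beta
best = np.argmax(pred)
print(T.ravel()[best], R.ravel()[best])
```

Because the synthetic data are exactly quadratic, the fit recovers the surface and the grid search lands on the planted optimum; with real, noisy runs the fitted stationary point is an estimate with confidence bounds.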
Procedia PDF Downloads 146
309 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant portion of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs of unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements often fail to consider the full array of feasible system or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e. sensors, CPUs, modular / auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work focused on aerospace systems and was conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g. hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
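The core mechanics of a non-deterministic tradespace exploration can be sketched as follows. Every component name, performance figure, and cost below is invented for illustration, and the Monte Carlo utility model is a stand-in for the statistical performance features the abstract describes, not the authors' actual model:

```python
import itertools
import random

# Hedged sketch: enumerate a small sensor-system tradespace, estimate each
# design's utility by Monte Carlo sampling (the "non-deterministic" part),
# and keep the Pareto-efficient set in (utility up, cost down).
random.seed(42)

# Hypothetical component catalogs: name -> (nominal performance, cost)
sensors = {"lidar": (0.9, 5.0), "radar": (0.7, 2.0), "camera": (0.6, 1.0)}
cpus    = {"fast": (0.9, 3.0), "slow": (0.5, 1.0)}
fusions = {"kalman": (0.8, 0.5), "voting": (0.6, 0.2)}

def expected_utility(parts, n_samples=500):
    """Monte Carlo estimate: each component's performance jitters +/-10%."""
    total = 0.0
    for _ in range(n_samples):
        u = 1.0
        for perf, _cost in parts:
            u *= perf * random.uniform(0.9, 1.1)
        total += u
    return total / n_samples

designs = []
for combo in itertools.product(sensors.items(), cpus.items(), fusions.items()):
    names = tuple(name for name, _ in combo)
    parts = [spec for _, spec in combo]
    cost = sum(c for _, c in parts)
    designs.append((names, expected_utility(parts), cost))

# Pareto filter: discard any design dominated by another (>= utility at
# < cost, or > utility at <= cost)
pareto = [d for d in designs
          if not any((o[1] >= d[1] and o[2] < d[2]) or
                     (o[1] > d[1] and o[2] <= d[2]) for o in designs)]
for names, util, cost in sorted(pareto, key=lambda d: d[2]):
    print(names, round(util, 3), cost)
```

The surviving designs trace the efficient frontier that a MATE study would present to stakeholders, with the Monte Carlo layer letting attribute uncertainty, rather than a single point estimate, drive the ranking.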
Procedia PDF Downloads 189