Search results for: MODELICA simulation language
370 The Use of Social Stories and Digital Technology as Interventions for Autistic Children; A State-Of-The-Art Review and Qualitative Data Analysis
Authors: S. Hussain, C. Grieco, M. Brosnan
Abstract:
Background and Aims: Autism is a complex neurobehavioural disorder, characterised by impairments in the development of language and communication skills. The study involved a state-of-the-art systematic review, in addition to qualitative data analysis, to establish the evidence for social stories as an intervention strategy for autistic children. An up-to-date review of the use of digital technologies in the delivery of interventions to autistic children was also carried out, to assess whether digital technologies combined with social stories could improve intervention outcomes for autistic children. Methods: Two student researchers reviewed a range of randomised controlled trials and observational studies. The aim of the review was to establish whether there was adequate evidence to justify recommending social stories to autistic patients. The students devised their own search strategies to be used across a range of databases, including Ovid-Medline, Google Scholar and PubMed, and then critically appraised the generated literature. Additionally, qualitative data obtained from a comprehensive online questionnaire on social stories was thematically analysed. The thematic analysis was carried out independently by each researcher using a ‘bottom-up’ approach: contributors read and analysed responses to questions and devised semantic themes from the responses to a given question. The researchers then placed each response into a semantic theme or sub-theme, and met to discuss the merging of their theme headings. Inter-rater reliability (IRR) was calculated before and after theme headings were merged, giving IRR values for pre- and post-discussion. Lastly, the thematic analysis was assessed by a third researcher, a professor of psychology and director of the ‘Centre for Applied Autism Research’ at the University of Bath.
Results: A review of the literature, as well as thematic analysis of the qualitative data, found supporting evidence for social story use. The thematic analysis uncovered several themes in the questionnaire responses relating to the reasons why social stories were used and the factors influencing their effectiveness in each case. Overall, however, the evidence for digital technology interventions was limited, and the literature could not establish a causal link between better intervention outcomes for autistic children and the use of technologies, although it did offer plausible theories for the suitability of digital technologies for autistic children. Conclusions: Overall, the review concluded that there was adequate evidence to justify advising the use of social stories with autistic children. The role of digital technologies is clearly a fast-emerging field and appears to be a promising method of intervention for autistic children; however, it should not yet be considered an evidence-based approach. The students used this research to develop ideas on social story interventions that aim to help autistic children.
Keywords: autistic children, digital technologies, intervention, social stories
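The abstract does not name the IRR statistic used. Purely as an illustration, a common choice for two raters assigning responses to categorical themes is Cohen's kappa; the sketch below computes it on invented theme assignments (the theme names and data are hypothetical, not taken from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning items to categorical themes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same theme at random.
    expected = sum(counts_a[t] * counts_b.get(t, 0) for t in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical theme assignments for ten questionnaire responses.
rater1 = ["social", "school", "social", "routine", "school",
          "social", "routine", "social", "school", "routine"]
rater2 = ["social", "school", "social", "routine", "routine",
          "social", "routine", "social", "school", "routine"]
print(round(cohens_kappa(rater1, rater2), 3))  # → 0.848
```

A kappa near 1 indicates strong agreement beyond chance; computing it before and after the merge discussion, as the students did, quantifies how much the discussion improved consistency.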
Procedia PDF Downloads 121
369 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading
Authors: Jerome Joshi
Abstract:
The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. Stochastic Pi Calculus is mainly used for biological applications; however, its features also recommend it for financial markets, most prominently high frequency trading. A trading system can be broadly divided into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), is difficult to model. It requires participants to exploit complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, a trader must be at the top of the order book quite frequently, executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is not best for the trader either, since it was the cause of the ‘Hot-Potato Effect’, which in turn demands a better and more efficient model. The chosen model should be flexible and have diverse applications, so a model that has proved itself in a similarly demanding field is a natural candidate. It should also be flexible in its simulation so that it can be extended and adapted for future research, and equipped with tools suited to the field of finance. In this light, the Stochastic Pi Calculus model is an ideal fit for financial applications, owing to its established record in biology.
It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithm, provided its application is suitably extended. The model focuses on solving the problem that led to the ‘Flash Crash’, namely the ‘Hot-Potato Effect’. It consists of small sub-systems that can be integrated to form a large system, and it is designed so that the behaviour of ‘noise traders’ is treated as a random process, or noise, in the system. While modelling, to get a better understanding of the problem, a broader picture is considered, covering the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the ‘Flash Crash’, the ‘Hot-Potato Effect’, evaluation of orders, and time delay in further detail. Future work should focus on the calibration of the modules so that they interact correctly with one another. This model, with its application extended, would provide a basis for further research in the fields of finance and computing.
Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus
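The abstract does not give the model's equations, but the operational semantics of stochastic pi calculus maps interacting processes onto a continuous-time Markov chain, which can be simulated with Gillespie's direct method. A minimal, hypothetical sketch of trader/exchange channels with invented rates (the channel names and rate values are illustrative only, not the paper's model):

```python
import random

# Toy continuous-time model in the spirit of stochastic pi-calculus semantics:
# each channel carries an exponential rate, and the next interaction is drawn
# by Gillespie's direct method.
RATES = {
    ("fundamental", "exchange"): 1.0,   # fundamental trader submits an order
    ("intermediary", "exchange"): 8.0,  # HFT intermediary submits/cancels fast
    ("exchange", "book"): 5.0,          # exchange matches against the book
}

def simulate(t_end, seed=1):
    random.seed(seed)
    t, events = 0.0, []
    while True:
        total = sum(RATES.values())
        t += random.expovariate(total)       # time to next interaction
        if t >= t_end:
            return events
        r = random.uniform(0, total)         # pick a channel by its rate share
        for channel, rate in RATES.items():
            r -= rate
            if r <= 0:
                events.append((round(t, 3), channel))
                break

trace = simulate(t_end=2.0)
hft = sum(1 for _, ch in trace if ch == ("intermediary", "exchange"))
print(f"{len(trace)} events, {hft} from the HFT intermediary")
```

Each channel's rate controls how often that interaction fires; giving the intermediary channel a much higher rate than the fundamental one reproduces, in caricature, the order-flow dominance behind the ‘Hot-Potato Effect’.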
Procedia PDF Downloads 79
368 Challenge in Teaching Physics during the Pandemic: Another Way of Teaching and Learning
Authors: Edson Pierre, Gustavo de Jesus Lopez Nunez
Abstract:
The objective of this work is to analyse how physics can be taught remotely through the use of platforms and software to attract the attention of 2nd-year high school students at Colégio Cívico Militar Professor Carmelita Souza Dias, and to point out how remote teaching can be a teaching-learning strategy during a period of social distancing. Teaching physics has long been a challenge for teachers and students, with common sense holding that the subject is very difficult to teach and learn. The challenge increased in 2020 and 2021 with the impact of the new coronavirus (SARS-CoV-2) pandemic and its variants, which affected the entire world. With these changes, a new teaching modality emerged: remote teaching. It brought new challenges, one of which was promoting practical experiences at a distance, especially in physics teaching, since learning difficulties abound and it is often impossible for students to relate the theory observed in class to the reality that surrounds them. Physics teaching in schools faces difficulties that make it increasingly less attractive for young people to choose this profession. Yet the study of physics is very important, as it puts students in front of concrete, real situations to which physical principles can respond, helping them understand nature and nurturing a taste for science.
New platforms and software can help. PhET Interactive Simulations, from the University of Colorado Boulder, is a virtual laboratory offering numerous simulations of scientific experiments that improve understanding of the content taught in a practical way, facilitating student learning and absorption of content. It is a simple, practical and free simulation tool that attracts students' attention and helps them acquire greater knowledge of the subject studied; a quiz can likewise bring a certain healthy competitiveness to students, generating knowledge and interest in the topics covered. The present study takes the Theory of Social Representations as its theoretical reference, examining the content and process of constructing the representations of teachers, the subjects of our investigation, of the evaluation of teaching and learning processes, through a qualitative methodology. The results of this work show that remote teaching was indeed a very important strategy for the process of teaching and learning physics in the 2nd year of high school, providing greater interaction between teacher and student. The teacher thus continues to play a fundamental role: with technology increasingly present in the educational environment, the teacher remains the main protagonist of this process.
Keywords: physics teaching, technologies, remote learning, pandemic
Procedia PDF Downloads 67
367 Design of Photonic Crystal with Defect Layer to Eliminate Interface Corrugations for Obtaining Unidirectional and Bidirectional Beam Splitting under Normal Incidence
Authors: Evrim Colak, Andriy E. Serebryannikov, Pavel V. Usik, Ekmel Ozbay
Abstract:
Working with a dielectric photonic crystal (PC) structure that has no surface corrugations, unidirectional transmission and dual-beam splitting are observed under normal incidence as a result of the strong diffractions caused by an embedded defect layer. The defect layer has twice the period of the regular PC segments that sandwich it. Although the PC has an even number of rows, the structural symmetry is broken by the asymmetric placement of the defect layer with respect to the symmetry axis of the regular PC. The simulations verify that efficient splitting and the occurrence of strong diffractions are related to the dispersion properties of the Floquet-Bloch modes of the photonic crystal. Unidirectional and bidirectional splitting, which are associated with asymmetric transmission, arise due to the dominant contributions of the first positive and first negative diffraction orders. The effect of the depth of the defect layer is examined by placing a single defect layer in varying rows while preserving the asymmetry of the PC. Even for a deeply buried defect layer, asymmetric transmission remains valid even if the zeroth order is not coupled; this transmission is due to evanescent waves that reach the deeply embedded defect layer and couple to higher order modes. In an additional selected configuration, whichever surface is illuminated, the incident beam is split into two beams of equal intensity at the output surface, with the out-going beams having equal intensity for both upper and lower surface illumination. That is, although the structure is asymmetric, symmetric bidirectional transmission with equal transmission values is demonstrated, and the structure mimics the behaviour of symmetric structures.
Finally, simulation studies including the examination of a coupled-cavity defect for two different permittivity values (close to those of GaAs or Si, and of alumina) reveal unidirectional splitting over a wider band of operation than that obtained with a single embedded defect layer. Since the dielectric materials utilized are low-loss and weakly dispersive over a wide frequency range including microwave and optical frequencies, the studied structures should be scalable to those ranges.
Keywords: asymmetric transmission, beam deflection, blazing, bi-directional splitting, defect layer, dual beam splitting, Floquet-Bloch modes, isofrequency contours, line defect, oblique incidence, photonic crystal, unidirectionality
Procedia PDF Downloads 185
366 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings
Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey
Abstract:
Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in a safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, make pharmacovigilance monitoring tedious and time-consuming. Objective: To develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI in the sponsor’s safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs based on Hy’s law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of the criteria were verified by comparing a manual review of all monthly cases with the system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes, as opposed to the multiple hours consumed by a cognitively laborious manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved extremely useful in detecting potential DILI cases while heightening the vigilance of the drug safety department.
Additionally, the algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). It also carries the potential for universal application, due to its product-agnostic data and keyword mining features. Plans for the tool include developing it into a fully automated application, thereby completely eliminating the manual screening process.
Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing
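The screening logic rests on two standard constructs, the R-value and Hy's law. Restated in Python for illustration (the production tool used SQL and Spotfire; the thresholds follow common regulatory guidance, and the laboratory values below are invented):

```python
def r_value(alt, alt_uln, alp, alp_uln):
    """Pattern-of-injury R-value: (ALT/ULN) divided by (ALP/ULN)."""
    return (alt / alt_uln) / (alp / alp_uln)

def classify_injury(r):
    """Conventional cut-offs: R >= 5 hepatocellular, R <= 2 cholestatic."""
    if r >= 5:
        return "hepatocellular"
    if r <= 2:
        return "cholestatic"
    return "mixed"

def meets_hys_law(alt, alt_uln, tbili, tbili_uln, alp, alp_uln):
    """Hy's law screen: ALT >= 3x ULN with total bilirubin >= 2x ULN and
    without a predominantly cholestatic picture (ALP < 2x ULN)."""
    return alt >= 3 * alt_uln and tbili >= 2 * tbili_uln and alp < 2 * alp_uln

# Hypothetical case: ALT 280 U/L (ULN 40), ALP 110 U/L (ULN 120),
# total bilirubin 3.1 mg/dL (ULN 1.2).
r = r_value(280, 40, 110, 120)
print(round(r, 2), classify_injury(r), meets_hys_law(280, 40, 3.1, 1.2, 110, 120))
# → 7.64 hepatocellular True
```

In the SQL implementation these comparisons become WHERE-clause predicates over the lab-result line listings, with the enzyme elevations measured against each case's baseline values.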
Procedia PDF Downloads 153
365 A Hydrometallurgical Route for the Recovery of Molybdenum from Spent Mo-Co Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal that finds applications in petroleum refining, thermocouples, X-ray tubes and steel alloys, owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases during desulphurisation as they become contaminated with toxic material; they are then dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3.0 mol/L HCl, and the leach liquor, containing Mo 870 ppm, Co 341 ppm, Al 508 ppm and Fe 42 ppm, was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Stripping studies revealed that 2.0 mol/L HNO₃ can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by countercurrent simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1.
Around 95.4% extraction of molybdenum was achieved in two countercurrent stages at A/O = 1:1, with negligible extraction of Co and Al. Iron, however, was co-extracted; it was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO₃ in two stages at O/A = 1:1. Overall, ~95.0% molybdenum of 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO₃ was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO₃ correspond to the molybdite Syn-MoO₃ structure, FE-SEM depicts the rod-like morphology of the synthesized MoO₃, and EDX analysis shows a 1:3 atomic percentage of molybdenum and oxygen. The synthesised MoO₃ can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants and catalysis.
Keywords: Cyphos IL 102, extraction, spent Mo-Co catalyst, recovery
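As a rough cross-check of the reported staging, a Kremser-type ideal-stage estimate can be derived from the single-contact result. This is an illustrative simplification assuming a constant distribution ratio, whereas the authors' McCabe-Thiele construction works from the full extraction isotherm:

```python
def stage_fraction_remaining(extraction_factor, n_stages):
    """Kremser-type estimate: fraction of solute left in the aqueous
    raffinate after n ideal countercurrent stages."""
    e = extraction_factor
    if abs(e - 1.0) < 1e-9:
        return 1.0 / (n_stages + 1)
    return (e - 1.0) / (e ** (n_stages + 1) - 1.0)

# Reported single-contact extraction of Mo: ~85% at A/O = 1:1.
single_stage_E = 0.85
D = single_stage_E / (1 - single_stage_E)   # distribution ratio ≈ 5.67
for n in (1, 2, 3):
    extracted = 1 - stage_fraction_remaining(D * 1.0, n)  # O/A = 1
    print(f"{n} stage(s): {extracted:.1%} Mo extracted")
```

The ideal-stage estimate for two stages (~97%) slightly exceeds the reported 95.4%, as expected when real stages fall short of equilibrium.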
Procedia PDF Downloads 173
364 Tests for Zero Inflation in Count Data with Measurement Error in Covariates
Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao
Abstract:
In quality-of-life research, health service utilization is an important determinant of medical resource expenditure on colorectal cancer (CRC) care. A better understanding of increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used that account for overdispersion or extra zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimate (AMLE) can be derived accordingly; it is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect a zero-inflation effect in the ZIP model with measurement error. The proposed test statistic asymptotically follows a standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error.
Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In real data analysis, with or without considering measurement error in covariates, existing tests and our proposed test all imply that H0 should be rejected with a P-value less than 0.001; i.e., the zero-inflation effect is very significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant, whereas if it is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs in the ZIP regression model with and without measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will yield statistically more reliable and precise information.
Keywords: count data, measurement error, score test, zero inflation
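For the simpler case without measurement error, a classical score test for zero inflation (in the spirit of van den Broek, 1995) can be written down directly for an intercept-only Poisson model. The sketch below applies it to figures reconstructed from the abstract; note that the sample size n ≈ 590 is inferred from the reported 156 expected zeros at mean 1.33 and is therefore an assumption:

```python
import math

def score_test_zip(y):
    """Score statistic for zero inflation in an intercept-only Poisson model
    (van den Broek style; chi-square with 1 df under H0)."""
    n = len(y)
    lam = sum(y) / n                      # Poisson MLE of the mean
    p0 = math.exp(-lam)                   # expected proportion of zeros
    n0 = sum(1 for v in y if v == 0)      # observed zeros
    num = (n0 / p0 - n) ** 2
    den = n * ((1 - p0) / p0 - lam)
    return num / den

# Reconstructed from the abstract's figures: 156 expected zeros under a
# Poisson mean of 1.33 implies roughly n = 590 patients; 206 zeros observed.
n, lam, n0 = 590, 1.33, 206
p0 = math.exp(-lam)
S = (n0 / p0 - n) ** 2 / (n * ((1 - p0) / p0 - lam))
print(f"score statistic = {S:.1f} (> 10.83, so p < 0.001)")
```

The resulting statistic is far above the 0.001 critical value of 10.83 for a chi-square with one degree of freedom, matching the abstract's conclusion that the zero-inflation effect is highly significant.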
Procedia PDF Downloads 289
363 Students’ Speech Anxiety in Blended Learning
Authors: Mary Jane B. Suarez
Abstract:
Public speaking anxiety (PSA), also known as speech anxiety, is persistent in virtually any traditional communication class, especially for students who learn English as a second language. Speech anxiety intensifies when communication skills assessments move to an online or remote mode of learning, as they did due to the perils of the COVID-19 virus. Both teachers and students have faced considerable uncertainty about how to teach and learn speaking skills effectively amidst the pandemic. Communication skills assessments such as public speaking, oral presentations, and student reporting have taken on new meaning through Google Meet, Zoom, and other online platforms. Though such technologies have paved the way for more creative means by which students acquire and develop communication skills, the effectiveness of these assessment tools stands in question. This mixed-methods study aimed to determine the factors that affected the public speaking skills of students in a communication class; to probe the assessment gaps in evaluating the speaking skills of students attending online classes vis-à-vis the implementation of remote and blended modalities of learning; and to recommend ways to address students' public speaking anxieties when performing a speaking task online and to bridge the assessment gaps identified by the study, in order to achieve a smooth segue from online to on-ground instruction manoeuvring towards a better post-pandemic academic milieu. Using a convergent parallel design, quantitative and qualitative data were reconciled by probing students' public speaking anxiety and the potential assessment gaps encountered in an online English communication class under remote and blended learning.
There were four phases in applying the convergent parallel design.
The first phase was data collection, where both quantitative and qualitative data were gathered using document reviews and focus group discussions. The second phase was data analysis: quantitative data were treated with statistical testing, particularly frequency, percentage, and mean, using Microsoft Excel and IBM Statistical Package for Social Sciences (SPSS) version 19, while qualitative data were examined using thematic analysis. The third phase merged the analysis results to draw comparisons between the desired learning competencies and the actual learning competencies of students. Finally, the fourth phase was the interpretation of the merged data, which led to the finding that a significantly high percentage of students experienced public speaking anxiety whenever they delivered speaking tasks online. Assessment gaps were also identified by comparing the desired learning competencies of the formative and alternative assessments implemented with the actual speaking performances of students, showing evidence that students' public speaking anxiety was not properly identified and addressed.
Keywords: blended learning, communication skills assessment, public speaking anxiety, speech anxiety
Procedia PDF Downloads 103
362 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of utilization. In all these fields, the amount of collected data is increasing quickly, and with it the computation speed becomes the critical factor. Data reduction is one solution to this problem, and removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating reducts have been developed, but most of them are software implementations only and therefore have many limitations: a microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing of instructions and data, so software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects; for a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all the condition attributes, and every reduct contains all the attributes of the core. In this paper, a hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input, and the output of the algorithm is a superreduct, i.e. a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table.
The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block, and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of each attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in C and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
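The two stages can be sketched in software terms as follows (a toy Python model of the logic the FPGA blocks implement; the decision table is invented, and attribute indices stand in for attribute names):

```python
from collections import Counter
from itertools import combinations

# Toy decision table: each row is (condition attribute values, decision).
TABLE = [
    ((0, 0, 0), "yes"),
    ((0, 0, 1), "no"),
    ((1, 1, 0), "no"),
    ((1, 1, 1), "yes"),
]
N_ATTRS = 3

def discernibility(table):
    """For each pair of objects with different decisions, the set of
    condition attributes on which the pair differs."""
    cells = []
    for (c1, d1), (c2, d2) in combinations(table, 2):
        if d1 != d2:
            cells.append({a for a in range(N_ATTRS) if c1[a] != c2[a]})
    return cells

def core_and_superreduct(table):
    cells = discernibility(table)
    # Stage 1: the core is every attribute that is the sole discerning
    # attribute for some pair (a singleton cell of the matrix).
    core = {next(iter(c)) for c in cells if len(c) == 1}
    # Stage 2: greedily add the most frequent attribute until every pair
    # of differently classified objects is discerned.
    reduct = set(core)
    remaining = [c for c in cells if not (c & reduct)]
    while remaining:
        freq = Counter(a for c in remaining for a in c)
        reduct.add(freq.most_common(1)[0][0])
        remaining = [c for c in remaining if not (c & reduct)]
    return core, reduct

core, sred = core_and_superreduct(TABLE)
print("core:", sorted(core), "superreduct:", sorted(sred))
```

In hardware, the singleton detector plays the role of the `len(c) == 1` check, and the adder cascade computes the frequency counts consumed by the greedy stage.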
Procedia PDF Downloads 220
361 Study on Health Status and Health Promotion Models for Prevention of Cardiovascular Disease in Asylum Seekers at Asylum Seekers Center, Kupang-Indonesia
Authors: Era Dorihi Kale, Sabina Gero, Uly Agustine
Abstract:
Asylum seekers are people who come to other countries to seek asylum. They bring with them the culture and health behaviour of their home country, which may be very different from those of the country in which they currently live. This situation raises problems, including in the health sector. The approach taken must therefore be culturally sensitive, one in which the culture and habits of the refugees' home region are valued, so that the health services provided are on target. Some risk factors already present in this group are lack of activity, consumption of fast food, smoking, and fairly high stress levels. Overall, this condition increases the risk of cardiovascular disease. This research is a descriptive and experimental study. Its purpose is to identify health status and develop a culturally sensitive health promotion model, particularly with respect to the risk of cardiovascular disease, for asylum seekers in detention homes in the city of Kupang. The research was carried out in three stages: stage 1 surveyed the health problems and cardiovascular disease risk of the asylum seekers, stage 2 developed a health promotion model, and stage 3 tested the health promotion model. There were 81 respondents involved in this study. The variables measured were health status, risk of cardiovascular disease, and the health promotion model. Method of data collection: instruments (questionnaires) were distributed to respondents to record their health history; then cardiovascular risk measurements were taken. After that, information needs were assessed and a booklet on the prevention of cardiovascular disease was compiled and translated into Farsi, after which the booklet was tested. Respondent characteristics: the respondents had lived in Indonesia for 4.38 years on average, the majority were male (90.1%), and most were aged 15-34 years (90.1%).
Several diseases were often suffered by the asylum seekers, namely gastritis, headaches, diarrhoea, acute respiratory infections, skin allergies, sore throat, cough, and depression. Regarding the risk of cardiovascular problems, 4 asylum seekers were at high risk, 6 at moderate risk, and 71 at low risk. This condition needs special attention because the number of people at risk is quite high for the age distribution of the refugees, and it is closely related to the level of stress they experience. The health promotion model that can be used is the transactional model of stress and coping, using Persian (oral) and English for written information. Health practitioners who care for refugees are advised to always pay attention to cultural aspects (especially language) as well as the psychological condition of asylum seekers, to make health care and promotion easier. Further research is recommended, especially on the effect of psychological stress on the risk of cardiovascular disease in asylum seekers.
Keywords: asylum seekers, health status, cardiovascular disease, health promotion
Procedia PDF Downloads 105
360 The Use of Platelet-rich Plasma in the Treatment of Diabetic Foot Ulcers: A Scoping Review
Authors: Kiran Sharma, Viktor Kunder, Zerha Rizvi, Ricardo Soubelet
Abstract:
Platelet-rich plasma (PRP) has been recognized as a method of treatment in medicine since the 1980s. It functions primarily by releasing cytokines and growth factors that promote wound healing; these growth-promoting factors drive processes such as angiogenesis, collagen deposition, and tissue formation that can change wound healing outcomes. Many studies recognize that PRP aids chronic wound healing, which is advantageous for patients who suffer from chronic diabetic foot ulcers (DFUs). This scoping review examines the literature to identify the efficacy of PRP use in the healing of DFUs. Following PRISMA guidelines, we searched for randomized controlled trials involving PRP use in diabetic patients with foot ulcers using PubMed, Medline, CINAHL Complete, and the Cochrane Database of Systematic Reviews. We restricted the search to articles published during 2005-2022, with full texts in English, involving human patients aged 19 years or older, using PRP specifically on DFUs, and including a control group. The initial search yielded 119 articles after removing duplicates; final analysis for relevance yielded 8 articles. In all cases except one, the PRP group showed either faster healing, more complete healing, or a larger percentage of healed participants. In no included study did the control group have a higher rate of healing or greater reduction in wound size than a group with isolated PRP-only use. Only one study did not show conclusive evidence that PRP accelerated healing in DFUs, and this study did not have an isolated PRP-only group. The style of PRP application was shown to influence the level of healing, with injected PRP appearing to achieve the best results compared to topical application; however, this was not conclusive due to the involvement of several other variables.
Two studies additionally found PRP to be useful in healing refractory DFUs, and one study found that PRP use in patients with additional comorbidities was still more effective in healing DFUs than the standard control groups. The findings of this review suggest that PRP is a useful tool for reducing healing times and improving rates of complete wound healing in DFUs. Based on the findings of this review, further research into PRP application styles is needed before conclusive statements can be made on the efficacy of injected versus topical PRP. The results of this review provide a baseline for further research on PRP use in diabetic patients and can be used by both physicians and public health experts to guide future treatment options for DFUs.
Keywords: diabetic foot ulcer, DFU, platelet rich plasma, PRP
Procedia PDF Downloads 763
359 Role of Artificial Intelligence in Nano Proteomics
Authors: Mehrnaz Mostafavi
Abstract:
Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach in real experimental conditions, resolving functionally similar proteins. The theoretical analysis, protein labeler program, finite difference time domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section. 
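The effect of partial labeling on a tri-color fingerprint can be illustrated with a toy simulation. The sketch below is purely illustrative: the residue-to-color mapping, the example sequence, and the `efficiency` parameter are assumptions, not the study's actual labeling chemistry.

```python
import random

# Hypothetical mapping of labelable residues to fluorophore colors
# (the actual amino acids and dyes used in the study are assumptions here).
COLOR_MAP = {"K": "red", "C": "green", "Y": "blue"}

def tag_fingerprint(sequence, efficiency=1.0, rng=None):
    """Return the ordered color sequence read out from a protein,
    where each labelable residue is tagged only with probability
    `efficiency` (modeling incomplete labeling, e.g. due to
    post-translational modifications)."""
    rng = rng or random.Random(0)
    fingerprint = []
    for residue in sequence:
        color = COLOR_MAP.get(residue)
        if color is not None and rng.random() < efficiency:
            fingerprint.append(color)
    return fingerprint

full = tag_fingerprint("MKCYAKLYC", efficiency=1.0)
partial = tag_fingerprint("MKCYAKLYC", efficiency=0.7)
# Partial labeling can only drop entries, never add spurious ones.
assert len(partial) <= len(full)
```

A classifier trained on full fingerprints must therefore be robust to deletions, which is the scenario the paper's partial-labeling evaluation probes.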
The study anticipates further exploration of the temporal distributions of protein translocation dwell-times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence
Procedia PDF Downloads 102
358 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016 are used in this study, collected from the actual sales price registration system of the Department of Land Administration (DLA). The result shows that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure. But the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. Also, the result shows that the impact of flood potential differs by the severity and frequency of precipitation. The negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential. The result indicates that home buyers are more concerned with the frequency than the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatial-related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. 
This study deals with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A body of literature indicates that the hedonic price of certain environmental assets varies spatially when GWR is applied. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatial-related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, since the effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
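The core of GWR is an ordinary least-squares fit in which observations are down-weighted by a spatial kernel centred on each regression point, so the coefficient on flood potential can differ across the city. A minimal sketch follows, on synthetic data; the Gaussian kernel, bandwidth, and variable names are illustrative assumptions, not the study's actual specification.

```python
import numpy as np

def gwr_coefficients(X, y, coords, target, bandwidth):
    """Geographically weighted least squares at one target location:
    observations are down-weighted by a Gaussian kernel of their
    distance to `target`, so estimated coefficients vary over space."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    W = np.diag(w)
    Xd = np.column_stack([np.ones(len(X)), X])   # add intercept
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return beta  # [intercept, slope_1, ...]

# Synthetic check: the "flood" price discount grows from west to east.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(500, 2))
flood = rng.uniform(0, 1, 500)
price = 100 - (1 + coords[:, 0]) * flood + rng.normal(0, 0.1, 500)
west = gwr_coefficients(flood[:, None], price, coords, np.array([1.0, 5.0]), 1.5)
east = gwr_coefficients(flood[:, None], price, coords, np.array([9.0, 5.0]), 1.5)
assert west[1] > east[1]  # the discount is more negative in the east
```

A full GWR run simply repeats this local fit at every observation; the spatial fixed effect the study adds would enter as extra dummy columns in `Xd`.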
Procedia PDF Downloads 290
357 Needs-Gap Analysis on Culturally and Linguistically Diverse Grandparent Carers ‘Hidden Issues’: An Insight for Community Nurses
Authors: Mercedes Sepulveda, Saras Henderson, Dana Farrell, Gaby Heuft
Abstract:
In Australia, there is a significant number of Culturally and Linguistically Diverse (CALD) Grandparent Carers who are sole carers for their grandchildren. Services in the community such as accessible healthcare, financial support, legal aid, and transport to services can assist Grandparent Carers to continue to live in their own home whilst caring for their grandchildren. Community nurses can play a major role by being aware of the needs of these grandparents and linking them to services via information and referrals. CALD Grandparent Carer experiences have only been explored marginally and may be similar to those of the general Grandparent Carer population, although cultural aspects may add to their difficulties. This Needs-Gap Analysis aimed to uncover ‘hidden issues’ for CALD Grandparent Carers, such as service gaps and the actions needed to address them. The stakeholders selected for this Needs-Gap Analysis were drawn from relevant service providers such as community and aged care services, child and/or grandparent support services, and CALD-specific services. One hundred relevant service providers were surveyed using six structured questions via face-to-face interviews, phone interviews, or email correspondence. CALD Grandparents who had a significant or sole role in caring for grandchildren were invited to participate through their CALD community leaders. Consultative Forums asking five questions focused on the caring role, the issues encountered, and what needed to be done were conducted with the African, Asian, Spanish-speaking, Middle Eastern, European, Pacific Islander and Maori Grandparent Carers living in South-east Queensland, Australia. Data from the service provider survey and the CALD Grandparent Carer forums were content analysed using thematic principles. Our findings highlighted social determinants of health grouped into six themes. 
These were: (1) a perception among service providers and Grandparent Carers that there is limited research data on CALD grandparents as carers; (2) inadequate legal and financial support; (3) barriers to accessing information and advice; (4) a lack of childcare options in the light of ageing and health issues; (5) difficulties around transport; and (6) inadequate technological skills, often leading to social isolation for both carer and grandchildren. Our Needs-Gap Analysis provides insight for service providers, especially health practitioners such as doctors and community nurses, particularly on the impact of caring for grandchildren on CALD Grandparent Carers. Furthermore, factors such as cultural differences, English language difficulties, and migration experiences also impact the way CALD Grandparent Carers are able to cope. The findings of this Needs-Gap Analysis signpost some of the ‘hidden issues’ that CALD Grandparent Carers face and draw together recommendations for the future as put forward by the stakeholders themselves.
Keywords: CALD grandparents, carer needs, community nurses, grandparent carers
Procedia PDF Downloads 313
356 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in the making of steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts, which are used in the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts become contaminated with toxic material and are dumped as waste, which leads to environmental issues. In this scenario, the recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2 mol/L HNO3 can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. 
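The two-stage requirement read off the McCabe-Thiele construction can be cross-checked with the Kremser equation for ideal counter-current stages. In the sketch below, the distribution ratio D is an assumption back-calculated from the reported single-contact result (~85% extraction), not a measured value.

```python
def kremser_extraction(D, aq_over_org, n_stages):
    """Fraction of solute extracted after n ideal counter-current
    stages, for a constant distribution ratio D (organic/aqueous)
    and aqueous-to-organic phase ratio A/O (Kremser equation)."""
    E = D / aq_over_org              # extraction factor E = D * (O/A)
    if abs(E - 1.0) < 1e-12:
        return n_stages / (n_stages + 1.0)
    return (E ** (n_stages + 1) - E) / (E ** (n_stages + 1) - 1)

# D inferred from ~85% extraction in a single contact at A/O = 1:1
D = 0.85 / 0.15
two_stage = kremser_extraction(D, aq_over_org=1.0, n_stages=2)
assert two_stage > 0.95            # consistent with the reported 95.4%

# Overall recovery = counter-current extraction x quantitative stripping
overall = 0.954 * 0.995
assert round(overall, 2) == 0.95   # matches the reported ~95.0%
```

The check assumes a constant D over the working concentration range, which is exactly what the McCabe-Thiele isotherm would reveal if violated.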
Around 95.4% extraction of molybdenum was achieved in two-stage counter-current extraction at A/O = 1:1 with negligible extraction of Co and Al. However, iron was co-extracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO3 in two stages at O/A = 1:1. Overall, ~95.0% molybdenum with 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO3 was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO3 correspond to the molybdite syn-MoO3 structure. FE-SEM depicts the rod-like morphology of the synthesized MoO3. EDX analysis of MoO3 shows a 1:3 atomic percentage of molybdenum and oxygen. The synthesized MoO3 can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants and as a catalyst.
Keywords: cyphos IL 102, extraction, Mo-Co spent catalyst, recovery
Procedia PDF Downloads 269
355 Disaster Capitalism, Charter Schools, and the Reproduction of Inequality in Poor, Disabled Students: An Ethnographic Case Study
Authors: Sylvia Mac
Abstract:
This ethnographic case study examines disaster capitalism, neoliberal market-based school reforms, and disability through the lens of Disability Studies in Education. More specifically, it explores neoliberalism and special education at a small, urban charter school in a large city in California and the (re)production of social inequality. The study uses the sociology of special education to examine the ways in which special education is used to sort and stratify disabled students. At a time when rhetoric surrounding public schools is framed in catastrophic and dismal language in order to justify the privatization of public education, small urban charter schools must be examined to learn whether they are living up to their promise or acting as another way to maintain economic and racial segregation. The study concludes that neoliberal contexts threaten successful inclusive education and normalize poor, disabled students’ continued low achievement and poor post-secondary outcomes. This ethnographic case study took place at a small urban charter school in a large city in California. Participants included three special education students, the special education teacher, the special education assistant, a regular education teacher, and the two founders and charter writers. The school claimed to have a push-in model of special education in which all special education students were fully included in the general education classroom. Although the school presented itself as fully inclusive, some special education students also attended a pull-out class called Study Skills. The study found that inclusion and neoliberalism are differing ideologies that cannot co-exist. Successful inclusive environments cannot thrive under the influence of neoliberal education policies such as efficiency and cost-cutting. Additionally, the push for students to join the global knowledge economy means that more and more low attainers are further marginalized and kept in poverty. 
At this school, neoliberal ideology eclipsed the promise of inclusive education for special education students. This case study has shown the need for inclusive education to be interrogated through lenses that consider macro factors, such as neoliberal ideology in public education, as well as the emerging global knowledge economy and increasing income inequality. Barriers to inclusion inside the school, such as teachers’ attitudes, teacher preparedness, and school infrastructure, paint only part of the picture. Inclusive education is also threatened by neoliberal ideology that shifts responsibility from the state to the individual. This ideology is dangerous because it reifies stereotypes of disabled students as lazy, needy drains on already dwindling budgets. If these stereotypes persist, inclusive education will have a difficult time succeeding. In order to more fully examine the ways in which inclusive education can become truly emancipatory, we need more analysis of the relationship between neoliberalism, disability, and special education.
Keywords: case study, disaster capitalism, inclusive education, neoliberalism
Procedia PDF Downloads 223
354 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches
Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang
Abstract:
Numerous studies have reported unilateral laminotomy and bilateral laminotomies as successful decompression methods for managing spinal stenosis. However, unilateral laminotomy is rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies are associated with a reduction in complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. Nevertheless, no comparative biomechanical analysis has evaluated spinal instability after treatment with unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the different decompression methods by experiment and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion (ROM), and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension in the following conditions: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens in this study were tested under pure moments in flexion (8 Nm) and extension (6 Nm). Spinal segment kinematic data were captured using a motion tracking system. A 3D finite element lumbar spine model (L1–S1) containing vertebral bodies, discs, and ligaments was constructed. This model was used to simulate unilateral and bilateral laminotomies at L3–L4 and L4–L5. The bottom surface of the S1 vertebral body was fully geometrically constrained in this study. A 10 Nm pure moment was also applied to the top surface of the L1 vertebral body to drive the lumbar spine in different motions, such as flexion and extension. The experimental results showed that, in flexion, the ROMs (±standard deviation) of L3–L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomy conditions, respectively. The ROMs of L4–L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively. 
No statistical significance was observed among these three groups (P>0.05). In extension, the ROMs of L3–L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively. At L4–L5, the ROMs were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. The simulation results were consistent with the experimental findings. No significant differences were found at L4–L5 in either flexion or extension in each group. Only 0.02 and 0.04 degrees of variation were observed during flexion and extension between the unilateral and bilateral laminotomy groups. In conclusion, the present results from finite element analysis and experiment reveal that no significant differences were observed during flexion and extension between unilateral and bilateral laminotomies in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit similar stability to unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulties and prevent perioperative complications; this study demonstrated this benefit through biomechanical analysis. The results may provide some recommendations for surgeons to make the final decision.
Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis
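With only the reported summary statistics (mean, SD, n = 3 spines), the significance pattern can be sanity-checked with Welch's t statistic. The sketch below is illustrative only: it approximates the degrees of freedom as 4 for the critical value and uses the published extension ROMs at L3–L4, not the raw data or the study's actual statistical test.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic computed from summary statistics
    (means, standard deviations, sample sizes of two groups)."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Extension ROM at L3-L4 (mean +/- SD, n = 3 porcine spines, from the text)
t_intact_vs_uni = abs(welch_t(0.89, 0.16, 3, 1.69, 0.08, 3))
t_uni_vs_bi = abs(welch_t(1.69, 0.08, 3, 1.73, 0.13, 3))

T_CRIT = 2.776  # two-tailed 5% critical value at ~4 degrees of freedom
assert t_intact_vs_uni > T_CRIT   # intact vs unilateral: significant
assert t_uni_vs_bi < T_CRIT       # unilateral vs bilateral: not significant
```

The large gap between the two statistics (roughly 7.7 versus 0.5) mirrors the reported result: laminotomy changes extension ROM markedly, but the two surgical approaches are biomechanically indistinguishable.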
Procedia PDF Downloads 403
353 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN
Authors: Mohamed Gaafar, Evan Davies
Abstract:
Many municipalities within Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequential public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus reach freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. Therefore, the current study was intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin has various land-use types, including commercial, industrial, residential, parks and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for modelling floods. Nevertheless, this model was built to the trunk level, which means that only the main drainage features were represented. Additionally, this model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is not suitable for studying water quality. So the first goal was to complete the model by updating all stormwater network components. Then, available GIS data was used to calculate different catchment properties such as slope, length and imperviousness. 
In order to calibrate and validate this model, data from two temporary pipe-flow monitoring stations, collected during the previous summer, were used along with records from two permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. It was found that model results were affected by the ratio of impervious areas. The catchment length, an approximate representation of the catchment shape, was also tested. Surface roughness coefficients were also calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63 respectively, were all found to be in acceptable ranges.
Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN
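The goodness-of-fit measures quoted above are straightforward to compute from paired observed and simulated flow series. A small sketch follows, using synthetic hydrographs since the monitoring records themselves are not reproduced here.

```python
import numpy as np

def validation_metrics(observed, simulated):
    """Goodness-of-fit of a simulated flow series against observations:
    Pearson correlation, relative peak error, and relative volume error."""
    r = np.corrcoef(observed, simulated)[0, 1]
    peak_err = (simulated.max() - observed.max()) / observed.max()
    vol_err = (simulated.sum() - observed.sum()) / observed.sum()
    return r, peak_err, vol_err

# Synthetic hydrograph check (illustrative data, not the study's records)
t = np.linspace(0, 6, 200)
obs = np.exp(-((t - 2.0) ** 2))          # observed storm peak
sim = 1.05 * np.exp(-((t - 2.1) ** 2))   # slightly biased simulation
r, peak_err, vol_err = validation_metrics(obs, sim)
assert r > 0.98 and abs(peak_err) < 0.06 and abs(vol_err) < 0.06
```

Against field data, the same three numbers would be computed per station and per event, which is how thresholds like the reported 0.65% peak error and 5.6% volume error are judged "acceptable".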
Procedia PDF Downloads 300
352 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase
Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc
Abstract:
Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed and friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), where abundant literature exists addressing many aspects from process implementation to characterization and modeling, there are still few research works focusing on AFSM. Therefore, there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process, thanks to numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and to finally identify a relevant process window. The deposition of material through the AFSM process takes place in several phases. In chronological order, these phases are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, which produces the temperature rise of the system composed of the tool, the filler material, and the substrate through pure friction. Analytic modeling of friction-based heat generation considers as main parameters the rotational speed and the contact pressure. Another parameter considered influential is the friction coefficient, assumed to be variable due to the self-lubrication of the system as temperature rises or the smoothing of the roughness of the materials in contact over time. 
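The dwell-phase heating described above can be sketched as a lumped-capacitance model. Everything numeric below is an assumption for illustration (thermal mass, loss coefficient, the linear mu(T) law); the heat input uses the classic sliding-friction estimate Q = (2/3) mu F omega R for a circular contact of radius R, which may differ from the authors' own analytic model.

```python
import math

def dwell_temperature(rpm, axial_force, radius, t_end, dt=0.01):
    """Lumped-capacitance sketch of the dwell-phase temperature rise.
    Heat input: Q = (2/3) * mu * F * omega * R (full-sliding circular
    contact), with a friction coefficient that decays linearly with
    temperature to mimic self-lubrication (assumed law)."""
    omega = rpm * 2 * math.pi / 60    # rotation speed [rad/s]
    m_c = 50.0                        # lumped thermal mass m*c [J/K] (assumed)
    h_a = 0.5                         # convective loss h*A [W/K]     (assumed)
    T, T_amb, t = 20.0, 20.0, 0.0
    while t < t_end:
        mu = max(0.1, 0.4 - 0.0005 * (T - T_amb))      # mu(T), assumed
        q = (2.0 / 3.0) * mu * axial_force * omega * radius
        T += dt * (q - h_a * (T - T_amb)) / m_c        # explicit Euler step
        t += dt
    return T

T1 = dwell_temperature(rpm=400, axial_force=1000.0, radius=0.005, t_end=30)
T2 = dwell_temperature(rpm=800, axial_force=1000.0, radius=0.005, t_end=30)
assert T2 > T1 > 20.0   # higher rotation speed -> faster heating
```

Even this crude model reproduces the sensitivity reported below: the time to reach a target temperature shifts strongly with rotation speed and axial force, which is why these inputs dominate the calibration.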
This study proposes, through numerical modeling followed by experimental validation, to investigate the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of input parameters such as axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for more efficient process implementation.
Keywords: numerical model, additive manufacturing, friction, process
Procedia PDF Downloads 147
351 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna
Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov
Abstract:
This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from the wider diffraction-limited area of the laser’s waist, which might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. Effective light coupling requires a grating whose parameters are perfectly matched to the given incident light. This work is devoted to an analysis of light-grating coupling and a search for grating parameters that enhance the near field beneath the tip apex. The aim of this work is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our consideration, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave with the electric field perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at the grating by every single slit due to the lightning-rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways depending on the geometry and material of the grating. The phase-modulating grating on the probe is a sort of metasurface that enables manipulation of the spatial frequencies of the incident field. The spatial frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. 
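As a numerical illustration of the phase-matching condition k_spp = k0*sin(theta) + m*(2*pi/period), one can solve for the grating period that couples an incident beam to the surface plasmon. The permittivity value below is a commonly quoted textbook approximation for gold at 633 nm, an assumption for illustration rather than the probe material used in the study.

```python
import cmath
import math

def grating_period(wavelength_nm, eps_metal, eps_diel=1.0,
                   theta_deg=0.0, order=1):
    """Grating period that phase-matches the incident light's in-plane
    wavevector to the surface plasmon on a metal/dielectric interface:
    k_spp = k0*sin(theta) + m*(2*pi/period)."""
    # Effective SPP index from the flat-interface dispersion relation
    n_spp = cmath.sqrt(eps_metal * eps_diel / (eps_metal + eps_diel)).real
    return order * wavelength_nm / (n_spp - math.sin(math.radians(theta_deg)))

# Gold at 633 nm: eps ~ -11.6 + 1.2j (an often-quoted approximation)
period = grating_period(633, -11.6 + 1.2j)
assert 550 < period < 633   # sub-wavelength period, since n_spp > 1
```

Higher diffraction orders (`order=2, 3, ...`) give proportionally larger periods, which is the "overtone" route to easier fabrication mentioned below.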
During propagation toward the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; one finds this value by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. The tapered probe with the grating effectively enhances and localizes the incident field at the sample.
Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna
Procedia PDF Downloads 283
350 Proposed Design Principles for Low-Income Housing in South Africa
Authors: Gerald Steyn
Abstract:
Despite the huge number of identical, tiny, boxy, freestanding houses built by the South African government after the advent of democracy in 1994, squatter camps continue to mushroom, and there is no evidence that the backlog is being reduced. Not only is the wasteful low-density detached-unit approach of the past being perpetuated, but the social, spatial, and economic marginalization is worse than before 1994. The situation is precarious since squatters are vulnerable to fires and flooding. At the same time, the occupants of the housing schemes are trapped far from employment opportunities or any public amenities. Despite these insecurities, the architectural, urban design, and city planning professions are puzzlingly quiet. Design projects address these issues only at the universities, albeit inevitably with somewhat Utopian notions. Geoffrey Payne, the renowned urban housing and urban development consultant and researcher focusing on issues in the Global South, once proclaimed that “we do not have a housing problem – we have a settlement problem.” This dictum was used as the guiding philosophy to conceptualize urban design and architectural principles that foreground the needs of low-income households and allow them to be fully integrated into the larger conurbation. Information was derived from intensive research over two decades, involving frequent visits to informal settlements, historic Black townships, and rural villages. Observations, measured site surveys, and interviews resulted in several scholarly articles from which a set of desirable urban and architectural criteria could be extracted. To formulate culturally appropriate design principles, existing vernacular and informal patterns were analyzed, reconciled with contemporary designs that align with the requirements for the envisaged settlement attributes, and reimagined as residential design principles. 
Five interrelated design principles are proposed, ranging in scale from (1) integrating informal settlements into the city, through (2) linear neighborhoods, (3) market streets as wards, and (4) linear neighborhoods, to (5) typologies and densities for clustered and aggregated patios and courtyards. Each design principle is described, first in terms of its context and associated issues of concern, followed by a discussion of the patterns available to inform a possible solution, and finally an explanation and graphic illustration of the proposed design. The approach is predominantly bottom-up, since each of the five principles is unfolded from existing informal and vernacular practices studied in situ. They are, however, articulated and represented in a contemporary design language. Contrary to an idealized vision of housing for South Africa’s low-income urban households, this study proposes concrete principles for critical assessment by peers in the tradition of architectural research in design.
Keywords: culturally appropriate design principles, informal settlements, South Africa’s housing backlog, squatter camps
Procedia PDF Downloads 51
349 The Effectiveness of Multiphase Flow in Well-Control Operations
Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia
Abstract:
Well control involves managing the circulating drilling fluid within the wells and avoiding kicks and blowouts, as these can lead to losses of human life and drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid, and the miscibility of formation gas in the drilling fluid. The current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as in managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of fluids as well as their non-Newtonian properties, enabling realistic kick treatment. Furthermore, a corresponding numerical solution method is built with an enriched data bank. The research work considers and implements models that take into consideration the effect of two phases in kick treatment for well control in conventional drilling. The software STAR-CCM+ was used for the computational studies to examine the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was put on the near-drill-bit section.
This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom hole pressure could be underestimated by 4.2%, while the bottom hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom hole pressure could be overestimated by 11.4% under steady flow conditions. In addition, a larger reservoir pressure leads to a larger gas fraction in the wellbore. However, reservoir pressure has a minor effect on the steady wellbore temperature. Also, as choke pressure increases, less gas will exist in the annulus in the form of free gas.
Keywords: multiphase flow, well control, STARCCM+, petroleum engineering and gas technology, computational fluid dynamics
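As a rough illustration of why the free-gas fraction matters for these pressure estimates, the hydrostatic sketch below shows how gas in the annulus lowers the effective mud-column density and hence the bottom-hole pressure. All numbers are hypothetical, and the model ignores the friction, solubility, and heat-transfer effects quantified in the study; it is not the coupled multiphase model described above.

```python
# Simplified hydrostatic bottom-hole pressure estimate showing how free gas
# in the annulus lowers the effective mud-column density. Illustrative only:
# frictional losses, gas solubility, and heat transfer are ignored, and all
# input values are hypothetical.
G = 9.81  # gravitational acceleration, m/s^2

def bottom_hole_pressure(depth_m, rho_mud, rho_gas=50.0, gas_fraction=0.0):
    """Pressure (Pa) of a mud column with a volumetric free-gas fraction."""
    rho_mix = (1.0 - gas_fraction) * rho_mud + gas_fraction * rho_gas
    return rho_mix * G * depth_m

p_no_gas = bottom_hole_pressure(3000, rho_mud=1200.0)
p_kick = bottom_hole_pressure(3000, rho_mud=1200.0, gas_fraction=0.10)
print(f"no gas: {p_no_gas / 1e6:.1f} MPa, 10% free gas: {p_kick / 1e6:.1f} MPa")
```

Even this crude mixture-density model reproduces the qualitative result above: any free gas in the annulus reduces the computed bottom-hole pressure, which is why neglecting gas behavior misestimates it.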
Procedia PDF Downloads 119
348 Notes on Matter: Ibn Arabi, Bernard Silvestris, and Other Ghosts
Authors: Brad Fox
Abstract:
Between something and nothing, a bit of both, neither/nor, a figment of the imagination, the womb of the universe - questions of what matter is, where it exists and what it means continue to surge up from the bottom of our concepts and theories. This paper looks at divergences and convergences, intimations and mistranslations, in a lineage of thought that begins with Plato’s Timaeus, travels through Arabic Spain and Syria, finally to end up in the language of science. Up to the 13th century, philosophers in Christian France based such inquiries on a questionable and fragmented translation of the Timaeus by Calcidius, with a commentary that conflated the Platonic concept of khora (‘space’ or ‘void’) with Aristotle’s hyle (‘primal matter’ as derived from ‘wood’ as a building material). Both terms were translated by Calcidius as silva. For 700 years, this was the only source for philosophers of matter in the Latin-speaking world. Bernard Silvestris, in his Cosmographia, exemplifies the concepts developed before new translations from Arabic began to pour into the Latin world from such centers as the court of Toledo. Unlike their counterparts across the Pyrenees, 13th century philosophers in Muslim Spain had access to a broad vocabulary for notions of primal matter. The prolific and visionary theologian, philosopher, and poet Muhyiddin Ibn Arabi could draw on the Ikhwan Al-Safa’s 10th Century renderings of Aristotle, which translated the Greek hyle as the everyday Arabic word maddah, still used for building materials today. He also often used the simple transliteration of hyle as hayula, probably taken from Ibn Sina. The prophet’s son-in-law Ali talked of dust in the air, invisible until it is struck by sunlight. Ibn Arabi adopted this dust - haba - as an expression for an original metaphysical substance, nonexistent but susceptible to manifesting forms. 
Ibn Arabi compares the dust to a phoenix, because we have heard about it and can conceive of it, but it has no existence unto itself and can be described only in similes. Elsewhere he refers to it as quwwa wa salahiyya - pure potentiality and readiness. The final portion of the paper will compare Bernard's and Ibn Arabi's notions of matter to the recent ontology developed by theoretical physicist and philosopher Karen Barad. Reading Barad's work alongside that of Niels Bohr, it will argue that there is a rich resonance between Ibn Arabi's paradoxical conceptions of matter and the quantum vacuum fluctuations verified by recent lab experiments. The inseparability of matter and meaning in Barad recalls Ibn Arabi's original response to Ibn Rushd's question: Does revelation offer the same knowledge as rationality? 'Yes and No,' Ibn Arabi said, 'and between the yes and no spirit is divided from matter and heads are separated from bodies.' Ibn Arabi's double affirmation continues to offer insight into our relationship to momentary experience at its most fundamental level.
Keywords: Karen Barad, Muhyiddin Ibn Arabi, primal matter, Bernard Silvestris
Procedia PDF Downloads 427
347 Exploring the Social Health and Well-Being Factors of Hydraulic Fracturing
Authors: S. Grinnell
Abstract:
This PhD research project explores the social health and well-being impacts associated with hydraulic fracturing, with the aim of producing best practice support guidance for those anticipating dealing with planning applications or submitting Environmental Impact Assessments (EIAs). Amid a possible global energy crisis, founded upon a number of factors, including unstable political situations, increasing world population growth, and people living longer, it is perhaps inevitable that hydraulic fracturing (commonly referred to as 'fracking') will become a major player within the global long-term energy and sustainability agenda. As there is currently no best practice guidance for governing bodies, the Best Practice Support Document will be targeted at a number of audiences, including consultants undertaking EIAs, planning officers, those commissioning EIAs, industry, and interested public stakeholders. It will offer robust, evidence-based criteria and recommendations that provide a clear narrative, a consistent and shared approach to the language used, and an understanding of the issues identified. It is proposed that the Best Practice Support Document will also support the mitigation of the health impacts identified. The document will support the newly amended Environmental Impact Assessment Directive (2011/92/EU), to be transposed into UK law by 2017; a significant amendment introduced focuses on a 'higher level of protection to the environment and health.' Methodology: A qualitative research methods approach is being taken with this research, comprising a number of key stages. A literature review has been undertaken and critically reviewed and analysed. This was followed by a descriptive content analysis of a selection of international and national policies, programmes, and strategies, along with published Environmental Impact Assessments and associated planning guidance.
In terms of data collection, a number of stakeholders were interviewed, as were several focus groups drawn from local community groups potentially affected by fracking; participants came from across the UK. A thematic analysis of all the data collected and the literature review will be undertaken using NVivo. The Best Practice Support Document will be developed based on the outcomes of the analysis and will be tested and piloted in the professional field before a live launch. Concluding statement: Whilst fracking is not a new concept, the technology is now driving renewed use of this engineering to supply fuels. A number of countries have pledged moratoria on fracking until the impacts on health have been further investigated, whilst other countries, including Poland and the UK, are pushing to support its use. If this should be the case, it will be important that the public's concerns, perceptions, fears, and objections regarding the wider social health and well-being impacts are considered along with the more traditional biomedical health impacts.
Keywords: fracking, hydraulic fracturing, socio-economic health, well-being
Procedia PDF Downloads 244
346 Stable Diffusion, a Context-to-Motion Model for Augmenting the Dexterity of Prosthetic Limbs
Authors: André Augusto Ceballos Melo
Abstract:
Design to facilitate the recognition of congruent prosthetic movements relies on context-to-motion translations guided by images, verbal prompts, and the user's nonverbal communication, such as facial expressions, gestures, paralinguistics, scene context, and object recognition. Although framed here around manipulation, the approach can also be applied to other tasks, such as walking, positioning prosthetic limbs as assistive technology driven by gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can help to improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can help to improve the naturalness and smoothness of prosthetic limb movements, which can make them more comfortable and easier to use. Second, it can help to improve the accuracy and precision of prosthetic limb movements, which can be particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback.
This means that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary to first collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time: the model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. Hand gestures and body language influence communication and social interaction, offering users a way to maximize their quality of life, social interaction, and gesture-based communication.
Keywords: stable diffusion, neural interface, smart prosthetic, augmenting
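The collect-train-control loop described above can be sketched as a supervised mapping from sensory observations to control signals. The linear least-squares model, feature dimensions, and synthetic data below are illustrative assumptions standing in for the learned context-to-motion model, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each row is one sensory observation (e.g. joint
# angles, pressure sensors, encoded scene context); each target row is the
# corresponding pair of actuator control signals.
X = rng.normal(size=(200, 6))                       # 200 examples, 6 sensory features
true_W = rng.normal(size=(6, 2))                    # unknown observation-to-control map
Y = X @ true_W + 0.01 * rng.normal(size=(200, 2))   # noisy demonstration data

# Offline training step: fit the model by linear least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def control_step(obs, weights=W):
    """Real-time step: map one sensory observation to actuator commands."""
    return obs @ weights

cmd = control_step(rng.normal(size=6))
print(cmd.shape)
```

A real system would replace the linear map with a richer model and add the robustness-to-noise machinery the abstract emphasizes, but the dataset-fit-deploy structure is the same.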
Procedia PDF Downloads 102
345 The Projection of Breaking Sexual Repression: Modern Women in Indian Fictions in Marathi
Authors: Suresh B. Shinde
Abstract:
The present paper examines selected fictional works by Indian writers in the Marathi language that reflect the gradual erosion of the sexual repression of modern women characters. Furthermore, the study employed an attitudinal survey method to cross-check the fictional reality of Indian women against real life in the modern era. Indian writers of the pre- and post-independence period pictured women characters as sexually suppressed and submissive to male sexual dominance. Gangadhar Gadgil, a 'Sahitya Akademi' award-winning writer, in his story 'Ek Manus' depicted a husband who abnormally exploited his wife. G. A. Kulkarni, also a 'Sahitya Akademi' award-winning writer, depicted a young woman who suppressed a proposal of marriage from the man she loved due to social pressure and conventions. Arvind Gokhale and Kamal Desai have likewise pictured women characters who suppressed their sexual urges even though they were highly educated. In the late 20th century and early 21st century, the trends of Marathi literature changed dramatically with regard to women's fiction. Gouri Deshpande, the popular story writer, portrays the modern woman very clearly: two women characters live happily together in a sexual relationship, accepting the revolt of society. Meghna Pethe, another well-known writer, in her story depicts a woman character who lived with her friend in a live-in relationship and enjoyed erotic sex. Thus, it is seen that depictions of women in pre- and post-independence fiction have gradually changed with regard to their sexual urges. This fictional reality led to a survey research design in which 100 college girls and 100 middle-aged women were surveyed with a sexual attitude scale and a feminist identity test. It was hypothesized that today's college girls would score higher on sexual attitude and feminist identity than middle-aged women.
Moreover, it was also assumed that sexual attitude and feminist identity would have a strong positive correlation. The obtained data were analyzed using Student's t-test and the Pearson Product Moment Correlation (PPMC). The results reveal that today's college girls have a higher level of sexual attitude and feminist identity than middle-aged women. Results also reveal that sexual attitude and feminist identity have a strong positive correlation. The survey research has thus provided a grounding in reality for the modern women of Indian fictions in Marathi literature. The findings of the research are discussed from gender-equality as well as psychological perspectives.
Keywords: sexual repression, women in Indian fictions, sexual attitude, feminist perspectives
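The two statistics named above can be computed directly. The sketch below implements the pooled-variance Student's t statistic and the Pearson product-moment correlation in NumPy on hypothetical attitude scores; the actual scales, samples of 100, and scores belong to the study and are not reproduced here.

```python
import numpy as np

def students_t(a, b):
    """Pooled-variance (Student's) two-sample t statistic."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum())

# Hypothetical mean scores on a 5-point sexual attitude scale.
college_girls = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.3])
middle_aged = np.array([2.9, 3.2, 2.7, 3.0, 3.1, 2.8])
feminist_identity = np.array([4.0, 3.7, 4.6, 3.9, 4.0, 4.2])

t = students_t(college_girls, middle_aged)       # positive t: girls score higher
r = pearson_r(college_girls, feminist_identity)  # correlation between the two measures
print(round(t, 2), round(r, 2))
```

The significance of t would then be read from the t distribution with n1 + n2 - 2 degrees of freedom, as in any standard two-sample test.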
Procedia PDF Downloads 333
344 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale
Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal
Abstract:
Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and with the current nature of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as an evaluation parameter. A rate of return of 20% and a payback period of 60 months or less was used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline, and Bend-Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted of only horizontal wells that started production from 2008 to at most 2015, with 1835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend-Arch Basin, and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The EUR ranges from each basin were loaded into the Palisade Risk software, and a log-normal distribution typical of Barnett shale wells was fitted to the dataset. A Monte Carlo simulation was then carried out over 1000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50, and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e., P10, P50, and P90.
The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (10% discount rate/year), as well as to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million, and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008-2015. Among the major findings of this study were that wells in the Bend-Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that, in all the basins, 90% of the Barnett shale wells were not economic at any of the finding and development costs, irrespective of the gas price. This study helps to determine the percentage of wells that are economic at different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recoverable
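The Monte Carlo step described above can be sketched as follows. The log-normal parameters and seed are hypothetical stand-ins for the distributions fitted in the Palisade Risk software, and the mapping of P10/P50/P90 to percentiles assumes the petroleum convention in which P90 is the low (90% probability of exceedance) case and P10 the high case.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical log-normal EUR parameters for one basin (BCF); the actual
# fits in the study come from the Palisade risk software.
mean_log, sd_log = np.log(2.0), 0.8   # median ~2 BCF, illustrative spread

# 1000 Monte Carlo iterations, as in the study.
eur = rng.lognormal(mean_log, sd_log, size=1000)

# Petroleum convention: P90 = low case = 10th percentile of the EUR
# distribution; P10 = high case = 90th percentile.
p90, p50, p10 = np.percentile(eur, [10, 50, 90])
print(f"P90={p90:.2f}  P50={p50:.2f}  P10={p10:.2f} BCF")
```

Each of the three EUR values would then drive a rescaled production profile through the NPV model, giving the probabilistic economic cases per basin.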
Procedia PDF Downloads 302
343 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System
Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal
Abstract:
The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments that, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload would be analyzed under a microgravity environment, and given the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model comprises a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained employing CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and obtain accurate approximations of the Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle body to visualize the variation of the coefficients along the simulation process.
Employing response surface methodology as a statistical approximation, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), the Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA were determined. Using these values, the terminal speed at each position is calculated considering a specific Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so the minimum distance for the entire deceleration stage without compromising the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag will be almost like the drag generated by a parachute. This guarantees that, aerodynamically, the vehicle can be braked, so it could be utilized for several missions, allowing repeatability of microgravity experiments.
Keywords: microgravity effect, response surface, terminal speed, unmanned system
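The 2nd-degree polynomial fit and the terminal-speed calculation can be sketched as below. The (AoA, Cd) design points, vehicle mass, and reference area are hypothetical placeholders for the CCD/CFD results; only the endpoint Cd of 1.18 is taken from the text. Terminal speed follows from the drag-weight balance v_t = sqrt(2 m g / (rho Cd A)).

```python
import numpy as np

# Hypothetical (AoA, Cd) design points standing in for the CCD results;
# the real values come from the Ansys CFD runs in the study.
aoa = np.array([2.0, 20.0, 40.0, 60.0, 80.0])  # angle of attack, degrees
cd = np.array([0.10, 0.35, 0.72, 1.00, 1.18])  # drag coefficient

# 2nd-degree polynomial approximation of Cd as a function of AoA.
coeffs = np.polyfit(aoa, cd, deg=2)
cd_fit = np.poly1d(coeffs)

def terminal_speed(cd_value, mass=5.0, area=0.5, rho=1.225, g=9.81):
    """Terminal speed (m/s) from the drag-weight balance; mass and
    reference area are hypothetical, not the study's vehicle."""
    return np.sqrt(2 * mass * g / (rho * cd_value * area))

for a in (2, 40, 80):
    print(f"AoA {a:>2} deg: Cd ~ {cd_fit(a):.2f}, v_t ~ {terminal_speed(cd_fit(a)):.1f} m/s")
```

Evaluating the fitted polynomial at each AoA and propagating it through the terminal-speed formula reproduces the workflow described above: higher AoA, higher Cd, lower terminal speed.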
Procedia PDF Downloads 173
342 Just Child Protection Practice for Immigrant and Racialized Families in Multicultural Western Settings: Considerations for Context and Culture
Authors: Sarah Maiter
Abstract:
Heightened globalization, migration, displacement of citizens, and refugee needs are placing increasing demands on social services for diverse populations to respond to families in ways that ensure the safety and protection of vulnerable members while providing supports and services. Along with this, social work's re-focus on socially just approaches to practice increasingly asks social workers to consider the challenging circumstances of families when providing services, rather than focusing on individual shortcomings alone. Child protection workers then struggle to ensure the safety of children while assessing the needs of families. This assessment can prove difficult when providing services to immigrant, refugee, and racially diverse families, as understanding of and familiarity with these families is often limited. Furthermore, child protection intervention in Western countries is state mandated, carrying legal authority to intervene in the lives of families where child protection concerns have been identified. Within this context, racialized immigrant and refugee families are at risk of misunderstandings that can result in interventions that are overly intrusive, unhelpful, and harsh. Research shows disproportionality and overrepresentation of racial and ethnic minorities and immigrant families in the child protection system. Reasons noted include: a) possible racial bias in reporting and substantiating abuse, b) struggles on the part of workers when working with families who come from diverse ethno-racial backgrounds, are immigrants, and may have limited proficiency in the national language of the country, c) interventions during crisis and differential ongoing services for these families, d) diverse contexts of these families that pose additional challenges for families and children, and e) possible differential definitions of child maltreatment.
While cultural and ethnic diversity in child-rearing approaches has been cited as a contributor to child protection concerns, this view should be treated cautiously, as it can result in stereotyping and generalizing that then lead to inappropriate assessment and intervention. Meanwhile, poverty and the lack of social supports, both well-known contributors to child protection concerns, impact these families disproportionately. Child protection systems therefore need to continue to examine policy and practice approaches that ensure the safety of children while balancing the needs of families. This presentation provides data from several research studies that examined definitions of child maltreatment among a sample of racialized immigrant families, the experiences of a sample of immigrant families with the child protection system, the concerns of a sample of child protection workers in the provision of services to these families, and the struggles of families in the transition to their new country. These studies, along with others, provide insights into areas of consideration for practice that can contribute to the safety of children while ensuring just and equitable responses that have greater potential for keeping families together, rather than premature apprehension and removal of children to state care.
Keywords: child protection, child welfare services, immigrant families, racial and ethnic diversity
Procedia PDF Downloads 292
341 Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team, drawn from all the key stakeholders, embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. This paper will discuss a comprehensive development framework, comprehending the SSD end to end from design through assembly, in-line inspection, and in-line testing, that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through an intense reliability margin investigation with a focus on assembly process attributes, process equipment control, in-process metrology, and the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up a reliability prediction model. Next, for the design validation process, the reliability prediction tool, specifically a solder joint simulator, will be established. The SSDs will be stratified into non-operating and operating tests with a focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs will be subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis.
The results will be fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven working, it enters the monitor phase, whereby the Design for Assembly (DFA) rules are updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability at early product development will enable on-time sample qualification delivery to the customer, optimize product development validation and the effective use of development resources, and avoid forced late investment to bandage end-of-life product failures. Understanding the critical-to-reliability parameters earlier will allow a focus on increasing the product margin, which will increase customer confidence in product reliability.
Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control
Procedia PDF Downloads 174