Search results for: cost effective conservation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14956

1456 Entertainment-Education for the Prevention & Intervention of Eating Disorders in Adolescents

Authors: Tracey Lion-Cachet

Abstract:

Eating disorders typically manifest in adolescence and are notoriously difficult to treat, for two notable reasons. Firstly, research consistently demonstrates that early intervention is a critical mediator of prognosis. However, because eating disorders do not originate as full-syndrome diagnoses but rather as prodromal cases, they often go undetected; by the time symptoms meet diagnostic criteria, they have become recalcitrant. Another interrelated issue is motivation to change. Research demonstrates that in the early stages of an eating disorder, adolescents are highly resistant to change, and motivation increases only once symptoms have shifted from egosyntonic to egodystonic in nature. The purpose of this project was to design a prevention model based on the social psychology paradigm of Entertainment-Education, which embeds messages within the genre of film as a means of effecting change. The resulting project was a narrative screenplay targeting teenagers and young adults from diverse backgrounds. The goals of the project were to create a film script that, if ultimately made into a film, could serve to: 1) interrupt symptom progression and improve prognosis through early intervention; 2) incorporate techniques from third-wave cognitive behavioral treatment models, acceptance and commitment therapy (ACT) and rational recovery (RR), with a focus on the effects of mindfulness as a means of informing recovery; 3) target issues related to motivation to change by shifting the perception of eating disorders from culturally specific psychiatric illnesses to habit-based brain wiring issues. Nine licensed clinicians were asked to evaluate two excerpts taken from the final script. They subsequently provided feedback on a Likert scale assessing whether the script had achieved its goals. 
Overall, evaluators agreed that the project’s etiological and intervention models have the potential to inspire change and serve as an effective means of prevention and treatment of eating disorders. However, one-third of the evaluators did not find the content developmentally appropriate. This is a notable limitation of the study and will need to be addressed in the larger script before the final project can be targeted to a teenage and young adult audience.

Keywords: adolescents, eating disorders, pediatrics, entertainment-education, mindfulness-based intervention, prevention

Procedia PDF Downloads 80
1455 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of HFCS to samples of the honey pool. In total, 237 samples were used: 108 were authentic honey, and 129 were honey adulterated with HFCS at 1 to 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, running under MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A combined method of Potential Functions (PF) was chosen together with Partial Least Squares Discriminant Analysis (PLS-DA). 
Different estimators of the predictive capacity of the model were compared, obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples. With the model calibrated on the training samples, the validation samples were studied. The calibrated model combining the potential function method and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved for the training samples. By use of Potential Functions (PF) and Partial Least Squares Discriminant Analysis (PLS-DA) classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast, and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 119
1454 Enabling Participation of Deaf People in the Co-Production of Services: An Example in Service Design, Commissioning and Delivery in a London Borough

Authors: Stephen Bahooshy

Abstract:

Co-producing services with the people that access them is considered best practice in the United Kingdom, with the Care Act 2014 arguing that people who access services and their carers should be involved in the design, commissioning and delivery of services. Co-production is a way of working with the community, breaking down barriers of access and providing meaningful opportunity for people to engage. Unfortunately, owing to a number of reported factors such as time constraints, practitioner experience and departmental budget restraints, this process is not always followed. In 2019, in a south London borough, d/Deaf people who access services were engaged in the design, commissioning and delivery of an information and advice service that would support their community to access local government services. To do this, sensory impairment social workers and commissioners collaborated to host a series of engagement events with the d/Deaf community. Interpreters were used to enable communication between the commissioners and d/Deaf participants. Initially, the community’s opinions, ideas and requirements were noted. This was then summarized and fed back to the community to ensure accuracy. Subsequently, a service specification was developed which included performance metrics, inclusive of qualitative and quantitative indicators, such as ‘I statements’, whereby participants respond on an adapted Likert scale how much they agree or disagree with a particular statement in relation to their experience of the service. The service specification was reviewed by a smaller group of d/Deaf residents and social workers, to ensure that it met the community’s requirements. The service was then tendered using the local authority’s e-tender process. Bids were evaluated and scored in two parts; part one was by commissioners and social workers and part two was a presentation by prospective providers to an evaluation panel formed of four d/Deaf residents. 
The internal evaluation panel formed 75% of the overall score, whilst the d/Deaf resident evaluation panel formed 25% of the overall tender score. Co-producing the evaluation panel with social workers and the d/Deaf community meant that commissioners were able to meet the requirements of this community by developing evaluation questions and tools that were easily understood and used by this community. For example, the wording of the questions was reviewed, and the scoring mechanism consisted of three faces (a happy face, a neutral face, and a sad face) to reflect the d/Deaf residents’ scores instead of traditional numbering. By making simple changes to the commissioning and tender evaluation process, d/Deaf people were able to have meaningful involvement in the design and commissioning process for a service that would benefit their community. Co-produced performance metrics mean that it is incumbent on the successful provider to continue to engage with people accessing the service and ensure that the feedback is utilized. d/Deaf residents were grateful to have been involved in this process, as this was not an opportunity that they had previously been afforded. In recognition of their time, each d/Deaf resident evaluator received a £40 gift voucher, bringing the total cost of this co-production to £160.

Keywords: co-production, community engagement, deaf and hearing impaired, service design

Procedia PDF Downloads 270
1453 Use of 3D Printed Bioscaffolds from Decellularized Umbilical Cord for Cartilage Regeneration

Authors: Tayyaba Bari, Muhammad Hamza Anjum, Samra Kanwal, Fakhera Ikram

Abstract:

Osteoarthritis, a degenerative condition, affects more than 213 million individuals globally. Since articular cartilage has no or limited blood vessels, it is unable to rejuvenate after deteriorating. Traditional approaches for cartilage repair, like autologous chondrocyte implantation, microfracture, and cartilage transplantation, are often associated with postoperative complications and lead to further degradation. The decellularized human umbilical cord has gained interest as a viable treatment for cartilage repair. Decellularization removes all cellular contents as well as debris, leaving a biologically active 3D network known as the extracellular matrix (ECM). This matrix is biodegradable and non-immunogenic and provides a microenvironment for homeostasis, growth, and repair. UC-derived bioink functions as a 3D scaffolding material that mediates not only cell-matrix interactions but also the adherence, proliferation, and propagation of cells for 3D organoids. This study comprises different physical, chemical, and biological approaches to optimize the decellularization of human umbilical cord (UC) tissues, followed by the solubilization of these tissues for bioink formation. The decellularization process consisted of two freeze-thaw cycles, in which the umbilical cord stored at -20˚C was thawed at room temperature and then dissected into small sections of 0.5 to 1 cm. Decellularization with the ionic and non-ionic detergents sodium dodecyl sulfate (SDS) and Triton X-100 revealed that both concentrations of SDS, i.e., 0.1% and 1%, were effective in the complete removal of cells from the small UC tissues. The results of decellularization were further confirmed by running the samples on a 1% agarose gel. Histological analysis revealed the efficacy of decellularization, with paraffin-embedded 4 μm samples processed for hematoxylin-eosin-safran and 4,6-diamidino-2-phenylindole (DAPI) staining. 
ECM preservation was confirmed by Alcian blue and Masson’s trichrome staining on consecutive sections, and images were obtained. Sulfated GAG content was determined by the 1,9-dimethyl-methylene blue (DMMB) assay; similarly, collagen quantification was done by the hydroxyproline assay. This 3D bioengineered scaffold will provide an environment typical of the extracellular matrix of the tissue and would be seeded with mesenchymal cells to generate the desired 3D ink for in vitro and in vivo cartilage regeneration applications.

Keywords: umbilical cord, 3d printing, bioink, tissue engineering, cartilage regeneration

Procedia PDF Downloads 95
1452 Tip-Apex Distance as a Long-Term Risk Factor for Hospital Readmission Following Intramedullary Fixation of Intertrochanteric Fractures

Authors: Brandon Knopp, Matthew Harris

Abstract:

Purpose: Tip-apex distance (TAD) has long been discussed as a metric for determining the risk of failure in the fixation of peritrochanteric fractures. TAD measurements over 25 millimeters (mm) have been associated with higher rates of screw cut-out and other complications in the first several months after surgery. However, there is limited evidence for the efficacy of this measurement in predicting the long-term risk of negative outcomes following hip fixation surgery. The purpose of our study was to investigate risk factors, including TAD, for hospital readmission, loss of pre-injury ambulation, and development of complications within 1 year after hip fixation surgery. Methods: A retrospective review of proximal hip fractures treated with single-screw intramedullary devices between 2016 and 2020 was performed at a 327-bed regional medical center. Patients included had a postoperative follow-up of at least 12 months or surgery-related complications developing within that time. Results: 44 of the 67 patients in this study met the inclusion criteria with adequate follow-up post-surgery. There were 10 males (22.7%) and 34 females (77.3%) meeting the inclusion criteria, with a mean age of 82.1 (± 12.3) at the time of surgery. The average TAD in our study population was 19.57 mm, and the overall 1-year readmission rate was 15.9%. 3 out of 6 patients (50%) with a TAD > 25 mm were readmitted within one year due to surgery-related complications. In contrast, 3 out of 38 patients (7.9%) with a TAD < 25 mm were readmitted within one year due to surgery-related complications (p = 0.0254). Individual TAD measurements, averaging 22.05 mm in patients readmitted within 1 year of surgery and 19.18 mm in patients not readmitted, were not significantly different between the two groups (p = 0.2113). 
Conclusions: Our data indicate a significant improvement in hospital readmission rates up to one year after hip fixation surgery in patients with a TAD < 25 mm, with a decrease in readmissions of over 40 percentage points (50% vs. 7.9%). This result builds upon past investigations by extending the follow-up time to 1 year after surgery and utilizing hospital readmissions as a metric for surgical success. Given the well-documented physical and financial costs of hospital readmission after hip surgery, our study highlights keeping the TAD below 25 mm as an effective method of improving patient outcomes and reducing financial costs to patients and medical institutions. No relationship was found between TAD measurements and secondary outcomes, including loss of pre-injury ambulation and development of complications.
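The reported group comparison (3/6 vs. 3/38 readmissions, p = 0.0254) is consistent with a Fisher exact test on the 2×2 contingency table; the abstract does not name the test for this comparison, so treating it as Fisher's exact is our assumption, but a quick check with SciPy reproduces the quoted p-value:

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = TAD > 25 mm vs. TAD < 25 mm,
# columns = readmitted within 1 year vs. not readmitted
table = [[3, 3],    # TAD > 25 mm: 3 of 6 readmitted
         [3, 35]]   # TAD < 25 mm: 3 of 38 readmitted
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.4f}")  # ≈ 0.0254, matching the reported value
```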

Keywords: hip fractures, hip reductions, readmission rates, open reduction internal fixation

Procedia PDF Downloads 138
1451 Application of Neuroscience in Aligning Instructional Design to Student Learning Style

Authors: Jayati Bhattacharjee

Abstract:

Teaching is a very dynamic profession, and teaching science is as challenging as learning the subject, if not more so; consider, for instance, the teaching of chemistry. From the introductory concepts of subatomic particles to the atoms of elements and their symbols, and onwards to the chemical equation, the challenge lies on both sides of the teaching-learning equation. This paper combines the neuroscience of learning and memory with knowledge of learning styles (VAK) and presents an effective tool for the teacher to authenticate learning. The model of ‘working memory’ (the visuo-spatial sketchpad, the central executive, and the phonological loop that transforms short-term memory into long-term memory) supports the psychological theory of learning styles, i.e., Visual-Auditory-Kinesthetic. A closer examination of David Kolb’s learning model suggests that learning requires abilities that are polar opposites, and that the learner must continually choose which set of learning abilities he or she will use in a specific learning situation. In grasping experience, some of us perceive new information through experiencing the concrete, tangible, felt qualities of the world, relying on our senses and immersing ourselves in concrete reality. Others tend to perceive, grasp, or take hold of new information through symbolic representation or abstract conceptualization, thinking about, analyzing, or systematically planning rather than using sensation as a guide. Similarly, in transforming or processing experience, some of us tend to carefully watch others who are involved in the experience and reflect on what happens, while others choose to jump right in and start doing things. The watchers favor reflective observation, while the doers favor active experimentation. Any lesson plan can be based on the model of prescriptive design: C + O = M (C: instructional condition; O: instructional outcome; M: instructional method). 
The desired outcome and conditions are independent variables, whereas the instructional method is dependent and can therefore be planned and suited to maximize the learning outcome. Assessment for learning, rather than of learning, can encourage learners, build confidence and hope, and go a long way toward replacing the anxiety and hopelessness that students experience while learning science with a human touch. Application of this model has been tried in teaching chemistry to high school students as well as in workshops with teachers, and the responses received have demonstrated the desired results.

Keywords: working memory model, learning style, prescriptive design, assessment for learning

Procedia PDF Downloads 347
1450 Modeling and Simulation of Primary Atomization and Its Effects on Internal Flow Dynamics in a High Torque Low Speed Diesel Engine

Authors: Muteeb Ulhaq, Rizwan Latif, Sayed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are superior in terms of efficiency, reliability, and adaptability. Most research and development to date has been directed towards high-speed diesel engines for commercial use, where the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high-torque low-speed engines, the requirements are altogether different. These types of engines are mostly used in the maritime industry, the agriculture industry, static engines, compressor engines, etc. Unfortunately, due to a lack of research and development, these engines have low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder, where the fuel spray atomization process plays a vital role in defining mixture formation, fuel consumption, combustion efficiency, and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and the atomization process is of great importance. In this research, we examine the effects of primary breakup modeling on the spray characteristics under diesel engine conditions. The KH-ACT model is applied to capture the effect of aerodynamics in the engine cylinder as well as the cavitation and turbulence generated inside the injector. It is a modified form of the most commonly used KH model, which considers only the aerodynamically induced breakup based on the Kelvin-Helmholtz instability. Our model is extensively evaluated by performing 3-D time-dependent simulations in OpenFOAM, an open-source flow solver. Spray characteristics like spray penetration, liquid length, spray cone angle, and Sauter mean diameter (SMD) were validated by comparing the results of OpenFOAM and MATLAB. Including the effects of cavitation and turbulence enhances primary breakup, leading to smaller droplet sizes, a decrease in liquid penetration, and an increase in the radial dispersion of the spray. 
All these properties favor the early evaporation of fuel, which enhances engine efficiency.
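For context, the aerodynamic breakup in the baseline KH model referenced above is conventionally written (in Reitz's standard formulation, which this abstract does not spell out) as a relaxation of the parent-drop radius toward a child radius set by the fastest-growing Kelvin-Helmholtz wave:

```latex
% Standard KH breakup model: parent-drop radius a relaxes toward the
% child radius r_c determined by the fastest-growing KH surface wave
\frac{da}{dt} = -\frac{a - r_c}{\tau_{KH}}, \qquad
r_c = B_0\,\Lambda_{KH}, \qquad
\tau_{KH} = \frac{3.726\,B_1\,a}{\Lambda_{KH}\,\Omega_{KH}}
```

where Λ_KH and Ω_KH are the wavelength and growth rate of the fastest-growing instability and B₀, B₁ are model constants. The KH-ACT modification adds cavitation- and turbulence-induced breakup time scales that compete with this aerodynamic mechanism.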

Keywords: Kelvin-Helmholtz instability, OpenFOAM, primary breakup, Sauter mean diameter, turbulence

Procedia PDF Downloads 207
1449 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. 
First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
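The alternating scheme described above (a SOM learning step followed by imputation of missing entries from each sample's prototype) can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' R implementation of missSOM; for brevity it uses a batch update without a neighborhood function, so it reduces the SOM step to a k-means-like update.

```python
import numpy as np

def miss_som(X, n_units=4, n_iter=50, seed=0):
    """Minimal sketch: alternate codebook updates with imputation of
    missing entries from each sample's best-matching unit (BMU)."""
    rng = np.random.default_rng(seed)
    mask = np.isnan(X)
    Xi = X.copy()
    # start from column means as initial imputations
    col_means = np.nanmean(X, axis=0)
    Xi[mask] = np.take(col_means, np.where(mask)[1])
    codebook = Xi[rng.choice(len(Xi), n_units, replace=False)]
    for _ in range(n_iter):
        # assignment step: best-matching unit per sample
        d = ((Xi[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        bmu = d.argmin(axis=1)
        # learning step: batch codebook update (no neighborhood, for brevity)
        for k in range(n_units):
            if (bmu == k).any():
                codebook[k] = Xi[bmu == k].mean(axis=0)
        # imputation step: fill missing entries from the assigned prototype
        Xi[mask] = codebook[bmu][mask]
    return Xi, codebook, bmu

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))
X[:30] += 3.0                            # two clusters
X[rng.random(X.shape) < 0.1] = np.nan    # ~10% entries missing at random
X_imputed, codebook, labels = miss_som(X)
print(np.isnan(X_imputed).any())  # prints False: all entries imputed
```

The actual missSOM criterion couples the two steps through a single objective and entwines the updates for speed; this sketch only conveys the alternating structure.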

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

Procedia PDF Downloads 148
1448 Didactic Games for the Development of Reading and Writing: Proeduca Program

Authors: Andreia Osti

Abstract:

The context experienced during the COVID-19 pandemic substantially changed the way children communicate and the way literacy teaching was carried out. According to the Brazilian Institute of Geography and Statistics, children who should be literate were seriously impacted by the pandemic: the number of illiterate children increased from 1.4 million in 2019 to 2.4 million in 2021. In this context, this work presents partial results of an intervention project in which classroom monitoring of students in the literacy phase was carried out. Methodologically, pedagogical games were developed that work on specific reading and writing content: 1) games with direct regularities and 2) games with contextual regularities. The project involves the elaboration and production of games and their application by the classroom teacher. All work focused on literacy and on improving students' understanding of grapheme-phoneme relationships, aiming to improve reading and writing comprehension levels. The project, still under development, is carried out in two schools and supports 60 students. The teachers participate in the research, as they apply the games produced at the university and monitor the children's learning process. The project is developed with financial support for research from FAPESP through the public education improvement program (PROEDUCA). The initial results show that children are more involved in playful activities, that games provide better moments of interaction in the classroom, and that they result in effective learning, since they constitute a different way of approaching the content to be taught. 
It is noteworthy that the pedagogical games produced directly involve the teaching and learning processes of curricular components (in this case, reading and writing, which are basic components of elementary education) and constitute teaching methodologies, as specific and guided activities are planned within literacy methods. In this presentation, some of the materials developed will be shown, as well as the results of the assessments carried out with the students. The Sustainable Development Goals (SDGs) linked to this project are SDG 4 (Quality Education) and SDG 10 (Reduced Inequalities). The research seeks to improve public education and promote the articulation between theory and practice in the educational context, with a view to consolidating the tripod of teaching, research, and university extension and promoting a humanized education.

Keywords: didactic, teaching, games, learning, literacy

Procedia PDF Downloads 15
1447 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. 
As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
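The accuracy metrics quoted above (overall accuracy and Cohen's kappa) are computed from the confusion matrix of the classified map against reference labels. The sketch below illustrates this with scikit-learn on synthetic per-pixel features rather than the actual GEE workflow, which runs server-side and requires an Earth Engine account; the feature construction here is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

classes = ["forest", "savannas", "cropland", "settlement", "water"]
rng = np.random.default_rng(0)
# Synthetic stand-ins for per-pixel inputs (e.g. band reflectances,
# spectral indices, slope, elevation) and reference class labels
y = rng.integers(0, len(classes), size=2000)
X = rng.normal(size=(2000, 8)) + y[:, None]  # make the classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
y_hat = rf.predict(X_te)
oa = accuracy_score(y_te, y_hat)          # overall accuracy
kappa = cohen_kappa_score(y_te, y_hat)    # agreement corrected for chance
print(f"OA = {oa:.2f}, Kappa = {kappa:.2f}")
```

In the study itself, the equivalent steps are `ee.Classifier.smileRandomForest` trained on sampled reference points, with the same OA and kappa statistics derived from the held-out validation points.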

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 59
1446 Comparing the Gap Formation around Composite Restorations in Three Regions of Tooth Using Optical Coherence Tomography (OCT)

Authors: Rima Zakzouk, Yasushi Shimada, Yuan Zhou, Yasunori Sumi, Junji Tagami

Abstract:

Background and Purpose: Swept-source optical coherence tomography (OCT) is an interferometric imaging technique that has recently been used in cariology. In spite of the progress made in adhesive dentistry, composite restorations continue to fail due to secondary caries, which occur due to environmental factors in the oral cavity. Therefore, a precise assessment of the effective marginal sealing of restorations is highly required. The aim of this study was to evaluate gap formation at the composite/cavity-wall interface, with or without phosphoric acid etching, using SS-OCT. Materials and Methods: Round tapered cavities (2×2 mm) were prepared in three locations, mid-coronal, cervical, and root, of bovine incisor teeth in two groups (Groups SE and PA). While the self-etching adhesive (Clearfil SE Bond) was applied to both groups, Group PA was first pretreated with phosphoric acid etching (K-Etchant gel). Subsequently, both groups were restored with Estelite Flow Quick flowable composite resin. Following 5000 thermal cycles, three cross-sectional scans were obtained from each cavity using OCT at a 1310 nm wavelength, at 0°, 60°, and 120°. Scanning was repeated after two months to monitor gap progression. The average percentage of gap length was then calculated using image analysis software, and the difference between the means of the two groups was statistically analyzed by t-test. Subsequently, the results were confirmed by sectioning and observing representative specimens under a confocal laser scanning microscope (CLSM). Results: The results showed that pretreatment with phosphoric acid etching (Group PA) led to significantly bigger gaps in the mid-coronal and cervical cavities compared to Group SE, while in the root cavities no significant difference was observed between the groups. On the other hand, the gaps formed in the root cavities were significantly bigger than those in the mid-coronal and cervical cavities within the same group. 
This study investigated the effect of phosphoric acid on gap length progression in composite restorations. In conclusion, phosphoric acid etching treatment did not reduce gap formation in any of the regions of the tooth examined. Significance: The cervical region of the tooth was more prone to gap formation than the mid-coronal region, especially when pre-etching treatment was added.

Keywords: image analysis, optical coherence tomography, phosphoric acid etching, self-etch adhesives

Procedia PDF Downloads 218
1445 Design of Ultra-Light and Ultra-Stiff Lattice Structure for Performance Improvement of Robotic Knee Exoskeleton

Authors: Bing Chen, Xiang Ni, Eric Li

Abstract:

With population ageing, the number of patients suffering from chronic diseases is increasing, among which stroke has a high incidence in the elderly. In addition, there is a gradual increase in the number of patients with orthopedic or neurological conditions such as spinal cord injuries, nerve injuries, and other knee injuries. These diseases are chronic, with high recurrence and complication rates, and normal walking is difficult for such patients. Robotic knee exoskeletons have been developed for individuals with knee impairments. However, currently available robotic knee exoskeletons are generally heavy, which makes them uncomfortable to wear, causes wearing fatigue, shortens wearing time, and reduces the efficiency of the exoskeleton. Lightweight materials such as carbon fiber and titanium alloy have been used in the development of robotic knee exoskeletons, but they increase cost. This paper presents the design of a new ultra-light and ultra-stiff truss-type lattice structure. The lattice cells are arranged in a fan shape, which fits well against circular-arc surfaces such as round holes, so the structure can be used in the design of rods, brackets, and other parts of a robotic knee exoskeleton to reduce weight. The metamaterial is formed by the continuous arrangement and combination of small truss unit cells, varying the strut cross-sectional diameter, geometric size, and relative density of each cell. It can be fabricated quickly through additive manufacturing techniques such as metal 3D printing. Because the truss unit cell is small, machined parts of the robotic knee exoskeleton, such as connectors, rods, and bearing brackets, can be filled and replaced using gradient and non-uniform cell distributions.
Provided the mechanical requirements of the robotic knee exoskeleton are satisfied, its weight is reduced; hence, the patient's wearing fatigue is relieved and the wearing time is extended, improving the efficiency, wearing comfort, and safety of the exoskeleton. In this paper, a brief description of the hardware design of the robotic knee exoskeleton prototype is first presented. Next, the design of the ultra-light and ultra-stiff truss-type lattice structure is proposed, and a mechanical analysis of the single unit cell is performed by establishing a theoretical model. Additionally, simulations are performed to evaluate the maximum stress-bearing capacity and compressive performance of uniform and gradient cell arrangements. Finally, static analyses are performed for a cell-filled rod and an unmodified rod, and the simulation results demonstrate the effectiveness and feasibility of the designed lattice structure. In future studies, experiments will be conducted to further evaluate its performance.
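The relative density that the abstract varies per unit cell can be estimated as total strut volume over cell volume. The sketch below assumes a hypothetical octet-truss-like cubic cell (24 struts of length a/√2); the dimensions are invented and the formula ignores node overlap, so it slightly overestimates at high densities.

```python
import math

def strut_volume(diameter, length):
    """Volume of one cylindrical strut."""
    return math.pi * (diameter / 2.0) ** 2 * length

def relative_density(diameter, cell_edge, n_struts, strut_length):
    """Relative density of a truss unit cell: total strut volume divided by
    the volume of the cubic cell (node overlap ignored)."""
    return n_struts * strut_volume(diameter, strut_length) / cell_edge ** 3

# Hypothetical cell: 5 mm edge, 24 struts of length a / sqrt(2).
a = 5.0
rho_thin = relative_density(0.3, a, 24, a / math.sqrt(2))
rho_thick = relative_density(0.6, a, 24, a / math.sqrt(2))
```

Because strut volume scales with the square of the diameter, doubling the strut diameter quadruples the relative density, which is how a gradient arrangement trades stiffness against weight.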

Keywords: additive manufacturing, lattice structures, metamaterial, robotic knee exoskeleton

Procedia PDF Downloads 102
1444 Experiences and Challenges of Community Participation in Urban Renewal Projects: A Case Study of Bhendi Bazaar, Mumbai, India

Authors: Madhura Yadav

Abstract:

Urban redevelopment planning initiatives in developing countries have been widely criticised for their top-down planning approach and lack of involvement of the targeted beneficiaries, leading to outcomes contrary to beneficiaries' perceived needs. Urban renewal projects improve people's lives, and meaningful community participation plays a pivotal role in them. Public perceptions of satisfaction and participation have been given little priority in research, which hinders the effective planning and implementation of urban renewal projects. Moreover, the challenges of community participation in urban renewal projects are poorly documented, particularly in relation to public participation and satisfaction. A new paradigm focusing on a community-participatory approach to urban renewal is needed. The over 125-year-old Bhendi Bazaar in Mumbai, India is the country's first cluster redevelopment project, popularly known as the Bhendi Bazaar redevelopment, and it will be one of the largest urban rejuvenation projects in one of Mumbai's oldest and declining inner-city areas. The project is led by a community trust, and inputs were taken from various stakeholders, including residents, commercial tenants, and expert consultants, to shape the master plan and design. The project started in 2016, but its implementation has been significantly delayed. The study aimed to assess public perceptions of satisfaction and the relationship between community participation and community satisfaction in Bhendi Bazaar, Mumbai, India. Furthermore, the study outlines the challenges and problems of community participation in urban renewal projects and suggests recommendations for the future. Qualitative and quantitative methods, including a reconnaissance survey, key informant interviews, focus group discussions, walking interviews, and narrative inquiry, were used to analyse the data.
Preliminary findings revealed that all tenants are satisfied with the redevelopment of the area, and the willingness of residential tenants to move into transit accommodation has contributed to the project's success, while the reluctance of some residential and commercial tenants and certain regulatory provisions have posed implementation challenges. Experiences from the case study can help in understanding the dynamics between public participation and government. At the same time, they serve as inspiration and a learning opportunity for future projects, to ensure that they are sustainable not only from an economic standpoint but also from a social perspective.

Keywords: urban renewal, Bhendi Bazaar, community participation, satisfaction, social perspective

Procedia PDF Downloads 172
1443 The Effectiveness of Extracorporeal Shockwave Therapy on Pain and Motor Function in Subjects with Knee Osteoarthritis: A Systematic Review and Meta-Analysis of Randomized Clinical Trials

Authors: Vu Hoang Thu Huong

Abstract:

Background and Purpose: Although the effects of extracorporeal shockwave therapy (ESWT) on pain in participants with knee osteoarthritis (KOA) have been investigated, its effects on physical performance remain unclear. This study aims to explore the effects of ESWT on pain relief and physical performance in KOA. Methods: Randomized controlled trials investigating the effects of ESWT on KOA were systematically searched, using inclusion and exclusion criteria, through seven electronic databases (including PubMed) covering 1990 to December 2022. The visual analog scale (VAS) or other pain scores were used as measures of pain intensity. Range of knee motion and physical-activity scores, including the Lequesne index (LI), the Knee Injury and Osteoarthritis Outcome Score (KOOS), and the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), were used as measures of physical performance. The first evaluation after the treatment period was defined as the post-treatment (immediate) effect, and the last evaluation as the follow-up (end) effect. Data analysis was performed using RevMan 5.4.1 software, with significance set at p<0.05. Results: Eight studies (499 participants) reporting ESWT effects on mild-to-moderate KOA (Kellgren-Lawrence grades I to III) qualified for meta-analysis. Compared with the sham or placebo group, the ESWT group showed a significant decrease in VAS rest score (mean difference [95% confidence interval]: 0.90 [0.12-1.67]) and WOMAC pain score (2.49 [1.22-3.76]), and a significant improvement in physical performance, with decreases in the WOMAC activities score (8.18 [3.97-12.39]), LI (3.47 [1.68-5.26]), and KOOS (5.87 [1.73-10.00]) in the post-treatment period.
There were also significant decreases in the WOMAC pain score (2.83 [2.12-3.53]), the WOMAC activities score (9.47 [7.65-11.28]), and LI (4.12 [2.34-5.89]) at follow-up. Compared with other treatment groups, ESWT also showed improvements in pain and physical performance, though not significantly. Conclusions: ESWT is an effective and valuable method for pain relief and for improving physical activity in participants with mild-to-moderate KOA. Clinical Relevance: ESWT relieves pain and improves physical performance in patients with mild-to-moderate KOA.
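The pooled mean differences with 95% confidence intervals reported above come from inverse-variance weighting, the standard approach in RevMan. A minimal fixed-effect sketch is shown below; the per-trial values are invented for illustration and are not the review's data, and RevMan additionally supports random-effects pooling not shown here.

```python
def pooled_mean_difference(studies, z=1.96):
    """Fixed-effect inverse-variance pooling of per-trial mean differences.

    studies: list of (mean_difference, ci_lower, ci_upper) tuples; each
    trial's standard error is back-calculated from its 95% CI.
    """
    num = den = 0.0
    for md, lo, hi in studies:
        se = (hi - lo) / (2.0 * z)   # SE recovered from the CI half-width
        w = 1.0 / se ** 2            # inverse-variance weight
        num += w * md
        den += w
    pooled = num / den
    se_pooled = den ** -0.5
    return pooled, (pooled - z * se_pooled, pooled + z * se_pooled)

# Hypothetical per-trial VAS mean differences (illustrative only).
md, ci = pooled_mean_difference([(0.8, 0.1, 1.5), (1.0, 0.3, 1.7)])
```

Because both hypothetical trials have the same CI width, they receive equal weight and the pooled estimate is simply their average, with a narrower CI than either trial alone.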

Keywords: knee osteoarthritis, extracorporeal shockwave therapy, pain relief, physical performance, shockwave

Procedia PDF Downloads 82
1442 Influence of Surface Wettability on Imbibition Dynamics of Protein Solution in Microwells

Authors: Himani Sharma, Amit Agrawal

Abstract:

The stability of the Cassie and Wenzel wetting states depends on the intrinsic contact angle and the geometric features of a surface, which has been exploited to capture biofluids in microwells. However, the mechanism of imbibition of biofluids into microwells is not well understood in terms of substrate wettability. In this work, we experimentally demonstrated the filling dynamics of protein solutions in hydrophilic and hydrophobic microwells. To this end, we used a lotus leaf as a mold to fabricate microwells on a polydimethylsiloxane (PDMS) surface. The lotus leaf's micrometer-sized, blunt conical pillars, 8-15 µm high and 3-8 µm in diameter, were transferred onto the PDMS. The PDMS surface was then treated with oxygen plasma to render it hydrophilic. 10 µL droplets containing fluorescein isothiocyanate (FITC)-labelled bovine serum albumin (BSA) were rested on both hydrophobic (apparent contact angle θa = 108°) and hydrophilic (θa = 60°) PDMS surfaces. Time-dependent fluorescence microscopy was conducted on these modified PDMS surfaces by recording the fluorescence intensity over a 5-minute period. Initially (at t = 1 min), FITC-BSA accumulated on the periphery of both hydrophilic and hydrophobic microwells due to incomplete penetration of the liquid-gas meniscus. This peripheral deposition of FITC-BSA did not change with time on the hydrophobic surfaces, whereas complete filling occurred in the hydrophilic microwells (at t = 5 min). This is attributed to the gradual movement of the three-phase contact line along the vertical walls of the hydrophilic microwells, as opposed to stable pinning in the hydrophobic microwells, as confirmed by Surface Evolver simulations. In addition, if cavities are present on hydrophobic surfaces, air bubbles become trapped inside them once an aqueous solution is placed over the surface, resulting in the Cassie-Baxter wetting state.
This condition hinders the trapping of proteins inside the microwells. It is therefore necessary to impart hydrophilicity to the microwell surfaces so as to induce the Wenzel state, in which the entire solution is fully in contact with the microwell walls. The imbibition of microwells by protein solutions was analyzed in terms of fluorescence intensity versus time. The present work underlines the importance of microwell geometry and substrate wettability in the wetting and effective capture of solid sub-phases in biofluids.

Keywords: BSA, microwells, surface evolver, wettability

Procedia PDF Downloads 195
1441 The Effect of Filter Design and Face Velocity on Air Filter Performance

Authors: Iyad Al-Attar

Abstract:

Air filters installed in HVAC equipment and gas turbines for power generation confront atmospheric contaminants of various concentrations while operating in different environments (tropical, coastal, hot). This leads to engine performance degradation, as contaminants can deteriorate components and foul the compressor assembly. Compressor fouling is responsible for 70 to 85% of gas turbine performance degradation, leading to a reduction in power output and availability and an increase in heat rate and fuel consumption. Filter design must therefore take into account face velocities, pleat count, and the corresponding surface area in order to verify filter performance characteristics (efficiency and pressure drop). The experimental work undertaken in the current study examined two groups of four filters with different pleating densities, investigating their initial pressure drop response and fractional efficiencies. The pleating densities used in this study were 28, 30, 32, and 34 pleats per 100 mm for each pleated panel, measured at ten flow rates ranging from 500 to 5000 m³/h in increments of 500 m³/h. The current work highlights the underlying reasons behind the reduction in filter permeability due to increases in face velocity and pleat density. The losses in effective filtration-media surface area are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the pleat corners, and/or compression of the filtration medium. It is evident from the entire array of experiments that as particle size increases, the efficiency decreases until the most penetrating particle size (MPPS) is reached; beyond the MPPS, the efficiency increases with particle size. The MPPS shifts to a smaller particle size as the face velocity increases, while pleating density and orientation did not have a pronounced effect on the MPPS.
Throughout the study, an optimal pleat count satisfying both initial pressure drop and efficiency requirements did not necessarily exist. The work also suggests that a valid comparison of pleat densities should be based on the effective surface area that participates in filtration, not the total surface area the pleat density provides.

Keywords: air filters, fractional efficiency, gas cleaning, glass fibre, HEPA filter, permeability, pressure drop

Procedia PDF Downloads 133
1440 Isolation, Characterization, and Antibacterial Evaluation of Antimicrobial Peptides and Derivatives from Fly Larvae Sarconesiopsis magellanica (Diptera: Calliphoridae)

Authors: A. Díaz-Roa, P. I. Silva Junior, F. J. Bello

Abstract:

Sarconesiopsis magellanica (Diptera: Calliphoridae) is a medically important necrophagous fly used for establishing the post-mortem interval. Dipterous maggots release diverse proteins and peptides in their larval excretion and secretion (ES) products, which play a key role in digestion. The most important mechanism by which larval therapy combats infection depends on larval ES. The larvae are protected against infection by a diverse spectrum of antimicrobial peptides (AMPs), one of which, lucifensin, is already known. Special interest in these peptides has also arisen regarding their role in wound healing, since they degrade necrotic tissue and kill different bacteria during larval therapy. Larvae act on wounds through three mechanisms: removal of necrotic tissue, stimulation of granulation tissue, and the antibacterial action of larval ES. Components of the ES include calcium, urea, allantoin, and ammonium bicarbonate, which reduce the viability of Gram-positive and Gram-negative bacteria. Lucilia sericata fly larvae have been the most widely used; however, new species need to be evaluated that could be as effective as, or more effective than, this fly. This study was thus aimed at identifying and characterizing, for the first time, S. magellanica AMPs contained in ES products, and comparing them with those of the commonly used L. sericata. The ES products were obtained from third-instar larvae taken from a previously established colony. In the first step, ES fractions were separated on Sep-Pak C18 disposable columns. The material obtained was then fractionated by RP-HPLC using a Jupiter C18 semi-preparative column. The products were lyophilized, and their antimicrobial activity was characterized by incubation with different bacterial strains. The first chromatographic analysis of ES from L. sericata yielded six fractions with antimicrobial activity against the Gram-positive bacterium Micrococcus luteus and three fractions with activity against the Gram-negative bacterium Pseudomonas aeruginosa, while that from S. magellanica yielded one fraction against M. luteus and four against P. aeruginosa. One of these fractions may correspond to the peptide already known from L. sericata. These results constitute the first work supporting further experiments aimed at validating the use of S. magellanica in larval therapy. Mass spectrometry and de novo sequencing will be performed to determine whether new molecules are present. Further studies are necessary to identify and characterize them and to better understand their function.

Keywords: antimicrobial peptides, larval therapy, Lucilia sericata, Sarconesiopsis magellanica

Procedia PDF Downloads 364
1439 Investigation of EEG Signal Parameters during Epileptic Seizure Phases Following the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

Epileptic seizure is a type of disease due to which electrical charge in the brain flows abruptly resulting in abnormal activity by the subject. One percent of total world population gets epileptic seizure attacks.Due to abrupt flow of charge, EEG (Electroencephalogram) waveforms change. On the display appear a lot of spikes and sharp waves in the EEG signals. Detection of epileptic seizure by using conventional methods is time-consuming. Many methods have been evolved that detect it automatically. The initial part of this paper provides the review of techniques used to detect epileptic seizure automatically. The automatic detection is based on the feature extraction and classification patterns. For better accuracy decomposition of the signal is required before feature extraction. A number of parameters are calculated by the researchers using different techniques e.g. approximate entropy, sample entropy, Fuzzy approximate entropy, intrinsic mode function, cross-correlation etc. to discriminate between a normal signal & an epileptic seizure signal.The main objective of this review paper is to present the variations in the EEG signals at both stages (i) Interictal (recording between the epileptic seizure attacks). (ii) Ictal (recording during the epileptic seizure), using most appropriate methods of analysis to provide better healthcare diagnosis. This research paper then investigates the effects of a noninvasive healing therapy on the subjects by studying the EEG signals using latest signal processing techniques. The study has been conducted with Reiki as a healing technique, beneficial for restoring balance in cases of body mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended for different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century. 
It is a system involving the laying on of hands, to stimulate the body’s natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomous nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during session and post intervention to bring about effective epileptic seizure control or its elimination altogether.
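Among the discriminative features listed above, sample entropy is a common choice: seizure EEG tends to be more regular (lower entropy) than normal background activity. A minimal sketch follows; the template length m = 2, the tolerance r = 0.2·SD, and the toy signal are conventional illustrative choices, not parameters from any specific study.

```python
import math

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -ln(A/B), where B counts pairs of templates of length m
    matching within tolerance r, and A does the same for length m+1
    (self-matches excluded). Lower values indicate a more regular signal."""
    if r is None:
        mu = sum(x) / len(x)
        sd = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
        r = 0.2 * sd
    def matches(length):
        n = len(x) - length
        count = 0
        for i in range(n):
            for j in range(i + 1, n):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return math.inf if a == 0 or b == 0 else -math.log(a / b)

periodic = [0.0, 1.0] * 20   # highly regular toy signal -> entropy near zero
s = sample_entropy(periodic)
```

The O(n²) pairwise search is fine for short epochs; production EEG pipelines typically use vectorized or windowed implementations.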

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 402
1438 Research Networks and Knowledge Sharing: An Exploratory Study of Aquaculture in Europe

Authors: Zeta Dooly, Aidan Duane

Abstract:

The collaborative European-funded research and development landscape provides prime conditions for multi-disciplinary teams to learn and extend their knowledge beyond what is possible through training and learning within their own organisational cocoons. Whilst the emergence of the academic entrepreneur has shifted the focus of educational institutions towards that of quasi-businesses, the training and professional development of lecturers and academic staff are often not formalised to the same level as in industry. This research focuses on industry-academic collaborative research funded by the European Commission. The impact of research is scalable if an optimal research network is created and managed effectively. This paper investigates network embeddedness; the nature of relationships, links, and nodes within a research network; and the enhancement of the network's knowledge. The contribution of this paper extends our understanding of establishing and maintaining effective collaborative research networks. The effects of network embeddedness are recognized in the literature as pertinent to innovation and the economy. The network-theory literature claims that networks are essential to innovative clusters such as Silicon Valley and to innovation in high-tech industries. This research provides evidence of the impact collaborative research has on disparate individuals' innovative contributions to their organisations and on their own professional development. The study adopts a qualitative approach and uncovers some of the challenges of multi-disciplinary research through case-study insights. It recommends establishing scaffolding to accommodate cooperation in research networks, appointing roles, and addressing contextual complexities early to avoid problems taking root.
Furthermore, it offers recommendations on network formation and on intra-network challenges relating to open data, competition, friendships, and competency enhancement. Network capability is enhanced by adopting the relevant theories, network theory, open innovation, and social exchange, with the understanding that network structure affects innovation and social exchange in research networks. The research concludes that there is an opportunity to deepen our understanding of network reuse and network hopping, which provide scaffolding for network members to enhance and build upon their knowledge in a progressive way.

Keywords: research networks, competency building, network theory, case study

Procedia PDF Downloads 122
1437 Machine Learning Analysis of Eating Disorders Risk, Physical Activity and Psychological Factors in Adolescents: A Community Sample Study

Authors: Marc Toutain, Pascale Leconte, Antoine Gauthier

Abstract:

Introduction: Eating disorders (ED), such as anorexia, bulimia, and binge eating, are psychiatric illnesses that mostly affect young people. The main symptoms concern eating (restriction, excessive food intake) and weight-control behaviors (laxatives, vomiting). Psychological comorbidities (depression, executive-function disorders, etc.) and problematic behaviors toward physical activity (PA) are commonly associated with ED. Knowledge of ED risk factors is still lacking, and more community-sample studies are needed to improve prevention and early detection. To our knowledge, studies are needed that specifically investigate the link between ED risk level, PA, and psychological risk factors in a community sample of adolescents. The aim of this study is to assess the relation between ED risk level, exercise (type, frequency, and motivations for engaging in exercise), and psychological factors based on the Jacobi risk-factor model. We hypothesize that a high risk of ED will be associated with high-caloric-cost PA, motivations oriented toward weight and shape control, and psychological disturbances. Method: An online survey for students was sent to several middle schools and colleges in northwest France. The survey combined several questionnaires: the Eating Attitudes Test-26, assessing ED risk; the Exercise Motivation Inventory-2, assessing motivations toward PA; the Hospital Anxiety and Depression Scale, assessing anxiety and depression; the Contour Drawing Rating Scale and the Body Esteem Scale, assessing body dissatisfaction; the Rosenberg Self-Esteem Scale, assessing self-esteem; the Exercise Dependence Scale-Revised, assessing PA dependence; the Multidimensional Assessment of Interoceptive Awareness, assessing interoceptive awareness; and the Frost Multidimensional Perfectionism Scale, assessing perfectionism.
Machine-learning analysis will be performed to constitute groups with a tree-based clustering method, extract risk profile(s) with a bootstrap comparison, and predict ED risk with a decision-tree-based prediction method. Expected results: 1044 complete records have already been collected, and the survey will be closed at the end of May 2022. Records will be analyzed with clustering and bootstrap methods to reveal risk profile(s). Furthermore, a predictive decision-tree method will be applied to extract an accurate predictive model of ED risk. This analysis will confirm the typical main risk factors and provide more data on presumed strong risk factors such as exercise motivations and interoceptive deficit. It will also identify particular risk profiles with a strong level of evidence and greatly contribute to improving the early detection of ED and to a better understanding of ED risk factors.
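The core step a decision-tree learner repeats recursively is finding the feature threshold that best separates the labels. The sketch below shows that step in isolation; the "eat26" feature name, records, and labels are invented toy data, not the study's dataset, and a real analysis would use a full tree library rather than a single split.

```python
def gini(group):
    """Gini impurity of a list of (record, label) pairs with 0/1 labels."""
    if not group:
        return 0.0
    p = sum(label for _, label in group) / len(group)
    return 2.0 * p * (1.0 - p)

def best_split(records, feature):
    """Exhaustively search thresholds on one feature; return the threshold
    with the lowest weighted Gini impurity across the two child groups."""
    best_t, best_score = None, float("inf")
    for t in sorted({rec[feature] for rec, _ in records}):
        left = [(r, l) for r, l in records if r[feature] <= t]
        right = [(r, l) for r, l in records if r[feature] > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(records)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Toy records: ({questionnaire score}, at-risk label). Values are invented.
data = [({"eat26": 5}, 0), ({"eat26": 8}, 0), ({"eat26": 25}, 1), ({"eat26": 30}, 1)]
threshold, impurity = best_split(data, "eat26")
```

With these toy records the best split falls at a score of 8, giving perfectly pure children (impurity 0.0); a tree learner would then recurse into each child with the remaining features.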

Keywords: eating disorders, risk factors, physical activity, machine learning

Procedia PDF Downloads 79
1436 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks, video streaming, and demands on data rates, the number of newly built data centers is rising proportionately. Data centers, however, have to adjust to the rapidly increasing amount of data to be processed. For this purpose, multi-mode (MM) fiber-based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, and the use of MM fibers and components considerably reduces costs. On the other hand, the use of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters such as core and cladding diameters, eccentricity, etc. Owing to the high reliability demands placed on data center components, determining a properly excited optical field inside the MM fiber core is a key consideration when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion losses (IL), and achieves an effective modal bandwidth (EMB). The main parameter in this case is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges.
Furthermore, we focused on particular defects and errors that can realistically occur, such as eccentricity, connector shift, or dust; these were simulated and measured, and their effect on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
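Encircled flux is the cumulative fraction of launched power contained within a given radius of the fiber core, computed from the near-field intensity profile. A minimal sketch of that computation follows; the flat intensity profile and the sampling radii are invented for illustration, and real EF measurement follows standardized launch conditions not modeled here.

```python
import math

def encircled_flux(radii, intensity):
    """Cumulative fraction of optical power within each radius, from a sampled
    near-field intensity profile I(r). Power in each annulus is approximated
    as I(r) * 2*pi*r * dr (radii assumed sorted ascending)."""
    powers = []
    prev_r = 0.0
    for r, inten in zip(radii, intensity):
        powers.append(inten * 2.0 * math.pi * r * (r - prev_r))
        prev_r = r
    total = sum(powers)
    ef, cum = [], 0.0
    for p in powers:
        cum += p
        ef.append(cum / total)
    return ef

# Flat illustrative profile sampled at three radii (microns); values invented.
ef_curve = encircled_flux([5.0, 10.0, 15.0], [1.0, 1.0, 1.0])
```

The resulting curve is monotonic and reaches 1.0 at the outermost radius; launch compliance is checked by requiring the measured EF to pass through specified control points on this curve.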

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 374
1435 Computational Simulations and Assessment of the Application of Non-Circular TAVI Devices

Authors: Jonathon Bailey, Neil Bressloff, Nick Curzen

Abstract:

Transcatheter Aortic Valve Implantation (TAVI) devices are stent-like frames with prosthetic leaflets on the inside, implanted percutaneously. The device, in a crimped state, is fed through the arteries to the aortic root, where the frame is opened through either self-expansion or balloon expansion, revealing the prosthetic valve within. The frequency with which TAVI is used to treat aortic stenosis is rapidly increasing, and in time TAVI is likely to become the favoured treatment over surgical valve replacement (SVR). Mortality after TAVI has been associated with severe paravalvular aortic regurgitation (PAR). PAR occurs when the frame of the TAVI device does not make an effective seal against the internal surface of the aortic root, allowing blood to flow backwards past the valve. PAR is common in patients and has been reported to some degree in as many as 76% of cases. Severe PAR (grade 3 or 4) has been reported in approximately 17% of TAVI patients, increasing post-procedural mortality from 6.7% to 16.5%. TAVI devices, like SVR devices, are circular in cross-section, as the aortic root is often considered approximately circular in shape. In reality, however, the aortic root is often non-circular: the ascending aorta, aortic sinotubular junction, aortic annulus, and left ventricular outflow tract have average ellipticity ratios of 1.07, 1.09, 1.29, and 1.49, respectively. An elliptical aortic root does not severely affect SVR, as the native leaflets are completely removed during the surgical procedure. However, an elliptical aortic root can inhibit the ability of circular balloon-expandable (BE) TAVI devices to conform to the interior of the aortic root wall, which increases the risk of PAR.
Self-expanding (SE) TAVI devices are considered better at conforming to elliptical aortic roots; however, their valve leaflets were not designed for elliptical function, and the incidence of PAR is greater in SE devices than in BE devices (19.8% vs. 12.2%, respectively). If a patient's aortic root is too severely elliptical, they will not be suitable for TAVI, narrowing the treatment options to SVR. It therefore follows that, in order to increase the population who can undergo TAVI and to reduce the associated risk, non-circular devices should be developed. Computational simulations were employed to further advance our understanding of non-circular TAVI devices. The radial stiffness of the devices in multiple directions, the frame bending stiffness, and the resistance to balloon-induced expansion were all simulated. Finally, a simulation was developed that demonstrates the expansion of TAVI devices into a non-circular, patient-specific aortic root model in order to assess the changes in deployment dynamics, PAR, and the stresses induced in the aortic root.

Keywords: TAVI, TAVR, FEA, PAR, FEM

Procedia PDF Downloads 436
1434 Production of Bio-Composites from Cocoa Pod Husk for Use in Packaging Materials

Authors: L. Kanoksak, N. Sukanya, L. Napatsorn, T. Siriporn

Abstract:

A growing population and rising demand for packaging are driving up the use of natural resources as raw materials in the pulp and paper industry, and the long-term environmental effects are disrupting people's way of life all across the planet. Finding pulp sources to replace wood pulp is therefore necessary. Various other plants or plant parts can be employed as substitute raw materials; for example, pulp and paper made mainly from agricultural residue can be used in place of wood pulp. In this study, cocoa pod husks, an agricultural residue of the cocoa and chocolate industries, were used to develop composite materials to replace wood pulp in packaging materials, and the resulting paper was coated with polybutylene adipate-co-terephthalate (PBAT). Fresh cocoa pod husks were selected, cleaned, reduced in size, and dried, and the morphology and elemental composition of the dried husks were studied. To evaluate the mechanical and physical properties, the dried husks were pulped using the soda-pulping process. After selecting the best formulation, paper with a PBAT bioplastic coating was produced on a paper-forming machine, and its physical and mechanical properties were studied. Field Emission Scanning Electron Microscopy with Energy-Dispersive X-Ray Spectrometry (FESEM/EDS) revealed the main structural components of the dried cocoa pod husks; no porous structure was observed, and the fibres were firmly bound, making the husks suitable as a raw material for pulp manufacturing. Dry cocoa pod husks contain carbon (C) and oxygen (O) as major elements, with magnesium (Mg), potassium (K), and calcium (Ca) present as minor elements at very low levels. The husks were then pulped by the soda-pulping process, and the SAQ5 formulation gave the most favourable pulp yield, moisture content, and water drainage.
To achieve the basis weight specified by the TAPPI T205 sp-02 standard, cocoa pod husk pulp and modified starch were mixed, and the paper was coated with PBAT bioplastic produced from bioplastic resin by the blown-film extrusion technique. Contact angle measurements, including the dispersive and polar components, showed the coated paper to be an effective hydrophobic material for rigid packaging applications.

Keywords: cocoa pod husks, agricultural residue, composite material, rigid packaging

Procedia PDF Downloads 71
1433 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study

Authors: D. M. Samartsev, A. G. Copping

Abstract:

As with all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often personal and lacking in objective figures and measurements. This results in confusion and creates barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions in the area of design automation. This paper proposes a framework to quantify the progress of automation within the design process. A reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, as well as across locations and projects. The methodology takes on aspects of a systematic review, compressed in time, to gather an initial set of data with which to verify the validity of the framework. Such a framework of quantification enables various practical uses, such as predicting which tasks in the architectural industry will be automated, and supporting more informed decisions on automation at multiple levels, ranging from individual choices to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can then be recursively split further as required. Each task is then assigned a series of milestones that allow for objective analysis of its automation progress.
By combining these two approaches, it is possible to create a data structure that describes how much of the architectural design process is automated. The data gathered in the paper serves the dual purposes of validating the framework and giving insight into the current state of automation within the architectural design process. The framework can be interrogated in many ways; preliminary analysis shows that almost 40% of the architectural design process has been automated in some practical fashion at the time of writing, that the rate of progress is slowly increasing over the years, and that the majority of tasks in the design process reach a new milestone in automation in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, the paper examines various limitations of the framework as well as further areas of study.
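The recursive task-breakdown idea described above can be sketched as a simple tree whose leaves carry milestone-based automation scores; the Python below is only an illustration of such a data structure, with invented task names and scores rather than figures from the study:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A node in the work-breakdown tree; leaves carry an automation score."""
    name: str
    automation: float = 0.0          # 0.0 = fully manual, 1.0 = fully automated
    subtasks: list["Task"] = field(default_factory=list)

    def leaves(self) -> list["Task"]:
        """Collect leaf tasks recursively."""
        if not self.subtasks:
            return [self]
        return [leaf for t in self.subtasks for leaf in t.leaves()]

    def automation_level(self) -> float:
        """Average automation over leaf tasks (unweighted, for simplicity)."""
        leaves = self.leaves()
        return sum(t.automation for t in leaves) / len(leaves)

# Hypothetical breakdown of "design a building" into recursively split tasks
design = Task("Design building", subtasks=[
    Task("Concept design", subtasks=[
        Task("Massing studies", automation=0.6),   # e.g. generative design tools
        Task("Client briefing", automation=0.0),   # fully manual
    ]),
    Task("Technical design", subtasks=[
        Task("Structural sizing", automation=0.8),
        Task("Detailing", automation=0.2),
    ]),
])
print(round(design.automation_level(), 2))  # 0.4
```

A weighted version (by task hours or cost) would be a natural refinement, but the unweighted average already allows the direct comparison across projects and times that the framework aims for.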

Keywords: analysis, architecture, automation, design process, technology

Procedia PDF Downloads 103
1432 Impact of Interface Soil Layer on Groundwater Aquifer Behaviour

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

The geological environment from which groundwater is collected is the most important element affecting the behaviour of a groundwater aquifer. As groundwater is a vital resource worldwide, the parameters that affect this source must be known accurately so that the conceptualized mathematical models are acceptable over the broadest possible range of conditions. Groundwater models have therefore recently become an effective and efficient tool for investigating groundwater aquifer behaviour. A groundwater aquifer may contain aquitards, aquicludes, or interfaces within its geological formations. Aquitards and aquicludes are geological formations that force modellers to include them within conceptualized groundwater models, while interfaces are commonly neglected from the conceptualization process because modellers believe the interface has no effect on aquifer behaviour. The current research highlights the impact of an interface existing in a real unconfined groundwater aquifer called Dibdibba, located in Al-Najaf City, Iraq, where the Euphrates River passes through the eastern part of the city. The Dibdibba groundwater aquifer consists of two types of soil layers separated by an interface soil layer. A groundwater model was built for Al-Najaf City to explore the impact of this interface. Calibration was performed using the PEST ('Parameter ESTimation') approach to obtain the best Dibdibba groundwater model. When the soil interface is conceptualized, results show that the groundwater tables are significantly affected by that interface, with dry areas of 56.24 km² and 6.16 km² appearing in the upper and lower layers of the aquifer, respectively, and the Euphrates River leaking water into the groundwater aquifer at 7,359 m³/day. These results change when the soil interface is neglected: the dry area becomes 0.16 km² and the Euphrates River leakage becomes 6,334 m³/day.
In addition, the conceptualized models (with and without the interface) reveal different responses to changes in the recharge rates applied to the aquifer in the uncertainty analysis test. The Dibdibba aquifer in Al-Najaf City shows a slight deficit in the amount of water supplied under the current pumping scheme, and the Euphrates River suffers from the stresses applied to the aquifer. Ultimately, this study shows a crucial need to represent the interface soil layer in model conceptualization so that the predicted current and future behaviours are reliable enough for planning purposes.

Keywords: Al-Najaf City, groundwater aquifer behaviour, groundwater modelling, interface soil layer, Visual MODFLOW

Procedia PDF Downloads 179
1431 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that imposes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of outbreak control efforts. This study uses environmental data from two U.S. Federal Government agencies, the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data describing changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, handling outliers and missing values so that the data are ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, model performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select the optimization strategy with the fewest errors, the lowest cost, the greatest productivity, or the best attainable results. Optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine.
An effective optimization method based on Harmony Search and an integrated Genetic Algorithm is introduced for input feature selection, and it shows a marked improvement in the model's predictive accuracy. Predictive models built on the Huber Regressor perform best for both optimization and prediction.
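A minimal sketch of the model-comparison stage might look like the following; it uses synthetic stand-in features (the study's NOAA/CDC data are not reproduced here) to fit the three regressor families named above and report MSE, MAE, and RMSE:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Synthetic stand-ins for environmental features (temperature, rainfall, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = 20 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2, size=300)  # weekly cases

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("Huber", HuberRegressor()),
                    ("SVR", SVR()),
                    ("GBR", GradientBoostingRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    mae = mean_absolute_error(y_te, pred)
    rmse = np.sqrt(mse)
    print(f"{name}: MSE={mse:.2f} MAE={mae:.2f} RMSE={rmse:.2f}")
```

In the full pipeline each model would be wrapped in the Harmony Search or Genetic Algorithm feature-selection loop, with these same error metrics serving as the fitness function.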

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 60
1430 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification

Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas

Abstract:

Biosensors play a crucial role in the detection of molecules nowadays owing to their user-friendliness, high selectivity, real-time analysis, and in-situ applicability. Among them, Lateral Flow Immunoassays (LFIAs) stand out among technologies for point-of-care bioassays for their affordability, portability, and low cost. They have been widely used for the detection of a vast range of biomarkers, including not only proteins but also nucleic acids and even whole cells. Although the LFIA has traditionally been a positive/negative test, tremendous efforts are being made to add quantifying capability to the method, based on the combination of suitable labels and a proper sensor. One of the most successful approaches involves the use of magnetic sensors to detect magnetic labels. Bringing together the required characteristics mentioned above, our research group has developed a biosensor to detect biomolecules in which superparamagnetic nanoparticles (SPNPs) together with LFIAs play the fundamental roles. SPNPs are detected through their interaction with a high-frequency current flowing in a printed micro track: the presence of the SPNPs causes an instantaneous, proportional variation in the impedance of the track, from which a rapid, quantitative measurement of the number of particles can be obtained. This mode of detection requires no external magnetic field, which reduces the device complexity. On the other hand, the major limitations of LFIAs are that they are only qualitative or semiquantitative when traditional gold or latex nanoparticles are used as color labels. Moreover, the need for constant ambient conditions to obtain reproducible results, the detection of nanoparticles only on the surface of the membrane, and the short durability of the signal are drawbacks that can be advantageously overcome with the design of magnetically labeled LFIAs.
The approach followed was to chemically coat the SPNPs with a specific monoclonal antibody that targets the protein under consideration. A sandwich-type immunoassay was then prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an IgG antibody (control line). When the sample flows along the strip, the SPNP-labeled proteins are immobilized at the test line, which provides the magnetic signal described above. Preliminary results using this practical combination for the detection and quantification of Prostate-Specific Antigen (PSA) show the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, an LOD of 0.25 ng/mL was calculated with a confidence factor of 3, in accordance with the IUPAC Gold Book definition. The versatility of the platform has also been proven with the detection of other biomolecules such as troponin I (a cardiac injury biomarker) and histamine.
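The LOD quoted here follows the IUPAC 3-sigma convention, LOD = 3·s_blank/m, where s_blank is the standard deviation of blank measurements and m is the calibration slope. A small sketch with hypothetical calibration numbers (assumed values chosen only to show the arithmetic, not the study's data) is:

```python
import numpy as np

# Hypothetical calibration of magnetic signal vs. PSA concentration (ng/mL);
# the concentrations, signals, and blank deviation below are illustrative.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
signal = np.array([0.02, 0.45, 0.88, 1.75, 3.48, 6.95])

slope, intercept = np.polyfit(conc, signal, 1)   # linear calibration curve
sd_blank = 0.072                                  # std. dev. of blank replicates (assumed)

# IUPAC 3-sigma criterion: LOD = k * s_blank / slope, with k = 3
lod = 3 * sd_blank / slope
print(f"LOD ≈ {lod:.2f} ng/mL")  # LOD ≈ 0.25 ng/mL
```

With these made-up numbers the LOD happens to land near the reported 0.25 ng/mL; the point is only the form of the calculation, which any linear calibration of the impedance signal would follow.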

Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles

Procedia PDF Downloads 229
1429 BSL-2/BSL-3 Laboratory for Diagnosis of Pathogens on the Colombia-Ecuador Border Region: A Post-COVID Commitment to Public Health

Authors: Anderson Rocha-Buelvas, Jaqueline Mena Huertas, Edith Burbano Rosero, Arsenio Hidalgo Troya, Mauricio Casas Cruz

Abstract:

The COVID-19 pandemic has disrupted the public health and economic systems of entire countries, including Colombia. The Nariño Department, in the southwest of the country, is notable for lying on the border with Ecuador and constantly faces demographic movement that affects the spread of infections between the two countries. In Nariño, early routine diagnosis of SARS-CoV-2, which can be handled at BSL-2, has affected the transmission dynamics of COVID-19. However, new emerging and re-emerging viruses with biological flexibility, classified as Risk Group 3 agents, can take advantage of epidemiological opportunities, generating the need to increase clinical diagnostic capacity, particularly in border regions between countries. The overall objective of this project was to assure the quality of the analytical process in the diagnosis of high-biological-risk pathogens in Nariño by building a laboratory that includes biosafety level (BSL)-2 and BSL-3 containment zones. The delimitation of zones was carried out according to the Verification Tool of the National Health Institute of Colombia and following the standard requirements for the competence of testing and calibration laboratories of the International Organization for Standardization. This is achieved by harmonizing methods and equipment for effective and durable diagnosis of the large-scale spread of highly pathogenic microorganisms, employing negative-pressure containment and UV systems under a finely controlled electrical system, together with PCR systems as new diagnostic tools, all of which increase laboratory capacity. Containment in the BSL-3 zones will separate the handling of potentially infectious aerosols within the laboratory from the community and the environment. It will also allow the handling and inactivation of samples with suspected pathogens and the extraction of molecular material from them, enabling research on high-risk pathogens such as SARS-CoV-2, influenza, respiratory syncytial virus, and malaria, among others.
The diagnosis of these pathogens will be articulated across the spectrum of basic, applied, and translational research, with a capacity of about 60 samples per day. It is expected that this project will be aligned with the health policies of neighboring countries to increase research capacity.

Keywords: medical laboratory science, SARS-CoV-2, public health surveillance, Colombia

Procedia PDF Downloads 86
1428 Valuing Cultural Ecosystem Services of Natural Treatment Systems Using Crowdsourced Data

Authors: Andrea Ghermandi

Abstract:

Natural treatment systems such as constructed wetlands and waste stabilization ponds are increasingly used to treat water and wastewater from a variety of sources, including stormwater and polluted surface water. The provision of ancillary benefits in the form of cultural ecosystem services makes these systems unique among water and wastewater treatment technologies and greatly contributes to determining their potential role in promoting sustainable water management practices. A quantitative analysis of these benefits, however, has been lacking in the literature. Here, a critical assessment of the recreational and educational benefits of natural treatment systems is provided, which combines observed public use from a survey of managers and operators with estimated public use obtained using geotagged photos from social media as a proxy for visitation rates. Geographic Information Systems (GIS) are used to characterize the spatial boundaries of 273 natural treatment systems worldwide. These boundaries are used as input to the Application Program Interfaces (APIs) of two popular photo-sharing websites (Flickr and Panoramio) to derive the number of photo-user-days, i.e., the number of yearly visits by individual photo users at each site. The adequacy and predictive power of four univariate calibration models using the crowdsourced data as a proxy for visitation are evaluated. A high correlation is found between photo-user-days and observed annual visitors (Pearson's r = 0.811; p-value < 0.001; N = 62). Standardized Major Axis (SMA) regression is found to outperform Ordinary Least Squares regression and count data models in terms of predictive power insofar as standard verification statistics are concerned, namely the root mean square error of prediction (RMSEP), the mean absolute error of prediction (MAEP), the reduction of error (RE), and the coefficient of efficiency (CE).
The SMA regression model is used to estimate the intensity of public use in all 273 natural treatment systems. System type, influent water quality, and area are found to statistically affect public use, consistent with a priori expectations. Publicly available information on the home locations of the sampled visitors is derived from their social media profiles and used to infer the distance they are willing to travel to visit the natural treatment systems in the database. This information is analyzed using the travel cost method to derive monetary estimates of the recreational benefits of the investigated natural treatment systems. Overall, the findings confirm the opportunities arising from the integrated design and management of natural treatment systems, combining the objectives of water quality enhancement and the provision of cultural ecosystem services through public use in a multi-functional approach, compatible with the need to protect public health.
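The calibration step described above pairs the photo-based proxy with observed visitation. A minimal sketch of computing Pearson's r and an SMA fit, using invented site data rather than the study's 62-site sample, could look like this:

```python
import numpy as np

# Illustrative data: photo-user-days (proxy) vs. observed annual visitors
# for a handful of hypothetical sites (not the study's sample).
pud = np.array([12, 40, 75, 150, 310, 620], dtype=float)
visitors = np.array([900, 2500, 6100, 11800, 24000, 51000], dtype=float)

x, y = np.log(pud), np.log(visitors)      # log-log fits are common for such proxies
r = np.corrcoef(x, y)[0, 1]               # Pearson correlation

# Standardized Major Axis regression: slope = sign(r) * sd(y) / sd(x),
# appropriate when both variables carry measurement error (unlike OLS).
slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
intercept = y.mean() - slope * x.mean()

print(f"r = {r:.3f}, SMA slope = {slope:.3f}, intercept = {intercept:.3f}")
```

The fitted line would then be applied to the photo-user-day counts of the remaining uncalibrated sites to predict their visitation intensity, as the study does for all 273 systems.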

Keywords: constructed wetlands, cultural ecosystem services, ecological engineering, waste stabilization ponds

Procedia PDF Downloads 175
1427 Character Development Outcomes: A Predictive Model for Behaviour Analysis in Tertiary Institutions

Authors: Rhoda N. Kayongo

Abstract:

As behavior analysts in education continue to debate how higher institutions can benefit from their social and academic programs, higher education is facing challenges in the area of character development. These are manifested in college completion rates and in the prevalence of teen pregnancy, drug abuse, sexual abuse, suicide, plagiarism, lack of academic integrity, and violence among students. Attending college is a perceived opportunity to positively influence the actions and behaviors of the next generation of society; thus colleges and universities have to provide opportunities to develop students' values and behaviors. Prior studies were mainly conducted in private institutions, and more so in developed countries. However, given the increasingly complex and diverse nature of today's student body, a multidimensional approach combining the multiple factors that enhance character development outcomes is needed. The main purpose of this study was to identify such opportunities in colleges and to develop a model for predicting character development outcomes. A survey questionnaire composed of seven scales, with in-classroom interaction, out-of-classroom interaction, school climate, personal lifestyle, home environment, and peer influence as independent variables and character development outcomes as the dependent variable, was administered to 501 third- and fourth-year students in selected public colleges and universities in the Philippines and Rwanda. Using structural equation modelling, a predictive model explained 57% of the variance in character development outcomes. The results showed that in-classroom interactions have a substantial direct influence on students' character development outcomes (r = .75, p < .05).
In addition, out-of-classroom interaction, school climate, and home environment contributed to students' character development outcomes, but indirectly. The study concluded that the classroom offers many opportunities for teachers to teach, model, and integrate character development among their students. Public colleges and universities are therefore encouraged to deliberately promote and implement classroom experiences that cultivate character. These may contribute substantially to students' character development outcomes and hence provide effective models of behaviour analysis in higher education.

Keywords: character development, tertiary institutions, predictive model, behavior analysis

Procedia PDF Downloads 133