Search results for: phase inversion method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22141

811 Multifunctional Janus Microbots for Intracellular Delivery of Therapeutic Agents

Authors: Shilpee Jain, Sachin Latiyan, Kaushik Suneet

Abstract:

Unlike traditional robots, medical microbots are not only smaller in size but also possess unique properties such as biocompatibility, stability in biological fluids, navigation against the bloodstream, and wireless control over locomotion. The idea behind their use in medicine is to provide a minimally invasive means of addressing post-operative complications, including long recovery times, infection, and pain. The present study demonstrates the fabrication of dual-nature magneto-conducting Janus microbots, composed of Fe₃O₄ magnetic nanoparticles (MNPs) and SU8-derived carbon, for the efficient intracellular delivery of biomolecules. The low-aspect-ratio microbots, with feature sizes of 2-5 μm, were fabricated using a photolithography technique and pyrolyzed at 900 °C, which converts SU8 into amorphous carbon. The pyrolyzed microbots have a dual nature: one half is magneto-conducting and the other half is only conducting, enabling efficient delivery of therapeutic payloads under external electric or magnetic field stimulation. For efficient intracellular delivery, the size and aspect ratio of the microbots play a significant role; however, at smaller scales, proper control over movement is difficult to achieve. The dual nature of the Janus microbots allowed their maneuverability in complex fluids to be controlled using both external electric and magnetic fields. Interestingly, the Janus microbots moved faster under an external electric field (44 µm/s) than under a magnetic field (18 µm/s). Furthermore, these Janus microbots exhibit auto-fluorescence, which helps to track their pathway during navigation. Typically, the use of MNPs in microdevices increases the tendency to agglomerate; however, the incorporation of the Fe₃O₄ MNPs in the pyrolyzed carbon reduces the chance of agglomeration of the microbots.
The biocompatibility of the medical microbots, an essential property for any biosystem, was determined in vitro using HeLa cells, with which the microbots were found to be compatible. Additionally, intracellular uptake of the microbots was higher with external electric field stimulation than without. In summary, cytocompatible Janus microbots were fabricated successfully. They are stable in biological fluids, can be navigated wirelessly with external magnetic fields of a few gauss, can be tracked via their auto-fluorescence, are less susceptible to agglomeration, and achieve higher cellular uptake under an external electric field. Thus, these carriers could offer a versatile platform for delivering therapeutic payloads under wireless actuation.

Keywords: amorphous carbon, electric/magnetic stimulations, Janus microbots, magnetic nanoparticles, minimally invasive procedures

Procedia PDF Downloads 123
810 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver, and Pancreatic Grafts

Authors: Constantinos S. Mammas, Andreas Lazaris, Adamantia S. Mamma-Graham, Georgia Kostopanagiotou, Chryssa Lemonidou, John Mantas, Eustratios Patsouris

Abstract:

The process of building a better safety culture, methods of error analysis, and preventive measures starts with an understanding of the human-factors effects of remote microscopic diagnosis in surgery, and especially in organ transplantation for the evaluation of grafts. In the UK, a high percentage of solid organs arrive at recipient hospitals injured or otherwise improper for transplantation. Digital microscopy adds microscopic-level information about the grafts (G) in organ transplantation (OT) and may lead to a change in their management; such a method would reduce the possibility of a diseased graft arriving at the recipient hospital for implantation. Aim: The aim of this study is to analyze the ergonomics of digital microscopy (DM), based on virtual slides and telemedicine systems (TS), for tele-pathological evaluation (TPE) of grafts in OT. Material and Methods: The ergonomics of DM for microscopic TPE of renal graft (RG), liver graft (LG), and pancreatic graft (PG) tissues was analyzed by experimental simulation, applying a virtual slide (VS) system for graft tissue image capture and for remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included the development of an experimental telemedicine system (Exp.-TS) for simulating integrated VS-based microscopic TPE of RG, LG, and PG. The simulation of DM-on-TS-based TPE was performed by two specialists on a total of 238 renal graft, 172 liver graft, and 108 pancreatic graft digital microscopic tissue images, examined for inflammatory and neoplastic lesions on the electronic spaces of the four telemedicine systems used.
Results: Statistical analysis of the specialists' answers about the ability to accurately diagnose diseased RG, LG, and PG tissues on the electronic space (ES) of the four TS (A, B, C, D) showed that DM on TS for TPE in OT performs best on the ES of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile-phone ES appear significantly risky for the application of DM in OT (p<.001). Conclusion: Achieving the largest reduction in errors and adverse events related to graft quality will require applying human factors engineering to procurement, design, audit, and awareness-raising activities, and consequently an investment in new training, people, and other changes to management activities for DM in OT. Simulated VS-based TPE with DM of RG, LG, and PG tissues after retrieval seems feasible and reliable, though dependent on the size of the electronic space of the applied TS, for remotely preventing diseased grafts from being retrieved and/or sent to the recipient hospital, and for post-grafting and pre-transplant planning.

Keywords: digital microscopy, organ transplantation, tele-pathology, virtual slides

Procedia PDF Downloads 278
809 Interfacial Instability and Mixing Behavior between Two Liquid Layers Bounded in Finite Volumes

Authors: Lei Li, Ming M. Chai, Xiao X. Lu, Jia W. Wang

Abstract:

When two liquid layers mix in a cylindrical container, the upper liquid of higher density rushes into the lighter lower liquid while the lower liquid rises into the upper one; the two layers interact, forming vortices, spreading and dispersing into each other, and entraining and mixing with each other. It is a complex, rapidly evolving process comprising flow instability, turbulent mixing, and other multiscale physical phenomena. To explore the mechanism of this process, experiments on the interfacial instability and mixing behavior between two liquid layers bounded in different volumes were carried out, applying planar laser-induced fluorescence (PLIF) and high-speed camera (HSC) techniques. According to the results, the interfacial instability between immiscible liquids develops faster than the theoretical rate given by Rayleigh-Taylor instability (RTI) theory, so it is reasonable to conjecture that mechanisms other than RTI play key roles in the mixing of the two liquid layers. The results also show that the velocity at which the upper liquid invades the lower liquid does not depend on the upper liquid's volume (height). Compared with the cases in which the upper and lower containers have identical diameters, when the lower liquid occupies a larger geometric space the upper liquid spreads and expands into the lower liquid more quickly during the evolution of the interfacial instability, indicating that the container wall has an important influence on the mixing process.
In the experiments on miscible liquid layers, the diffusion time and pattern of the interfacial mixing likewise do not depend on the upper liquid's volume; when the lower liquid occupies a larger geometric space, the effect of the bounding wall on the falling and rising flow decreases and the interfacial mixing attenuates. It is therefore concluded that the weight of the upper, heavier liquid is not the reason for the fast evolution of interfacial instability between the two layers, and that the action of the bounding wall on the unstable, mixing flow is limited. Numerical simulations of the immiscible layers' interfacial instability using the VOF method reproduce the typical flow pattern observed in the experiments; however, the calculated instability develops much more slowly than the experimental measurements. Numerical simulation of the miscible liquids' mixing, which applies Fick's diffusion law in the component transport equation, shows a much faster mixing rate at the liquid interface than the experiments during the initial stage. It can be presumed that interfacial tension plays an important role in the interfacial instability between two liquid layers bounded in a finite volume.
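As a rough reference for the theoretical rate the measurements are compared against, the classical RTI dispersion relation with an interfacial-tension term can be sketched in a few lines. The densities, tension value, and wavelength below are illustrative assumptions, not the experimental values:

```python
import math

def rt_growth_rate(k, rho_heavy, rho_light, sigma, g=9.81):
    """Classical Rayleigh-Taylor growth rate (1/s) for wavenumber k (rad/m),
    heavy fluid on top, including the stabilizing interfacial-tension term."""
    rho_sum = rho_heavy + rho_light
    atwood = (rho_heavy - rho_light) / rho_sum
    gamma_sq = atwood * g * k - sigma * k ** 3 / rho_sum
    return math.sqrt(gamma_sq) if gamma_sq > 0 else 0.0  # 0.0: mode is stabilized

# Illustrative 10 cm wavelength with water-like densities (not the paper's fluids)
k = 2 * math.pi / 0.1
print(rt_growth_rate(k, 1000.0, 900.0, 0.0))    # tension-free rate
print(rt_growth_rate(k, 1000.0, 900.0, 0.03))   # tension reduces the growth rate
```

At short wavelengths the tension term dominates and the mode is fully stabilized, which is consistent with the paper's presumption that interfacial tension matters in confined layers.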

Keywords: interfacial instability and mixing, two liquid layers, Planar Laser Induced Fluorescence (PLIF), High Speed Camera (HSC), interfacial energy and tension, Cahn-Hilliard Navier-Stokes (CHNS) equations

Procedia PDF Downloads 248
808 Cotton Fabrics Functionalized with Green and Commercial Ag Nanoparticles

Authors: Laura Gonzalez, Santiago Benavides, Martha Elena Londono, Ana Elisa Casas, Adriana Restrepo-Osorio

Abstract:

Cotton products are sensitive to microorganisms because of their ability to retain moisture, which may cause discoloration, reduced mechanical properties, or foul odor; consequently, this poses risks to users' health. Research has therefore been carried out to impart antibacterial properties to textiles using different strategies, including the use of silver nanoparticles (AgNPs). This antibacterial behavior can be degraded by laundering, reducing its effectiveness. At the same time, the environmental impact of synthetic antibacterial agents has motivated the search for new, more ecological ways to produce AgNPs. The aims of this work are to determine the antibacterial activity of cotton fabric functionalized with green (G) and commercial (C) AgNPs after twenty washing cycles, and to evaluate morphological and color changes. A plain-weave cotton fabric suitable for dyeing and two AgNP solutions were used: C, a commercial product, and G, produced using an ecological method. Both solutions, at 0.5 mM concentration, were impregnated into the cotton fabric without stabilizer at a liquor-to-fabric ratio of 1:20 under constant agitation for 30 min and then dried at 70 °C for 10 min. The samples were then subjected to twenty washing cycles with phosphate-free detergent, simulated in an agitated flask at 150 rpm, after which they were centrifuged and tumble-dried. The samples were characterized using the Kirby-Bauer test to determine antibacterial activity against E. coli and S. aureus; the results were registered by photographs establishing the inhibition halo before and after the washing cycles, with tests conducted in triplicate. Scanning electron microscopy (SEM) was used to observe the morphologies of the untreated and treated cotton fabrics, and the color changes relative to the untreated samples were obtained by spectrophotometric analysis.
The images reveal an inhibition halo in the samples treated with the C and G AgNP solutions even after twenty washing cycles, indicating good antibacterial activity and washing durability, with a tendency toward better results against S. aureus. The presence of AgNPs on the surface of the cotton fibers, and the associated morphological changes, were observed by SEM before and after the washing cycles. The natural color of the cotton fiber was significantly altered by both antibacterial solutions: according to the colorimetric results, treatment with C led to yellowing, while modification with G led to reddish yellowing. Cotton fabrics treated with AgNPs C and G from 0.5 mM solutions exhibited excellent antimicrobial activity against E. coli and S. aureus with good laundering durability. The surface of the cotton fibers was modified by the presence of the NPs and their agglomerates, and the significant changes in the fabric's natural color caused by the deposition of AgNPs C and G were maintained after laundering.

Keywords: antibacterial property, cotton fabric, fastness to wash, Kirby-Bauer test, silver nanoparticles

Procedia PDF Downloads 246
807 Estimation of Level of Pesticide in Recurrent Pregnancy Loss and Its Correlation with Paraoxonase 1 Gene in North Indian Population

Authors: Apurva Singh, S. P. Jaiswar, Apala Priyadarshini, Akancha Pandey

Abstract:

Objective: The aim of this study is to find the association of PON1 gene polymorphism with pesticide exposure in RPL subjects. Background: Recurrent pregnancy loss (RPL) is defined as three or more sequential abortions before the 20th week of gestation. Pesticides and their derivatives (organochlorines and organophosphates) are proposed as major chemical risk factors for RPL in the sub-humid region of India. The paraoxonase-1 enzyme (PON1) plays an important role in the toxicity of some organophosphate pesticides, with low PON1 activity being associated with higher pesticide sensitivity. Methodology: This is a case-control study conducted in the Department of Obstetrics & Gynaecology and the Department of Biochemistry, K.G.M.U., Lucknow, India. Subjects were enrolled after fulfilling the inclusion and exclusion criteria. Inclusion criteria: cases were subjects with two or more spontaneous abortions; controls were healthy females with one or more living children. Exclusion criteria (cases and controls): diabetes mellitus, hypertension, tuberculosis, immunocompromised status, any endocrine disorder, and genital, colon, or breast cancer or any other malignancy. Blood samples were collected in EDTA tubes from cases and healthy controls, and genomic DNA was extracted by the phenol-chloroform method. Pesticide residues in blood were estimated by HPLC, and biochemical estimation was also performed. Genotyping of the PON1 polymorphism was performed by RFLP, and statistical analysis of the data was carried out using SPSS 16.3. Results: A total of 14 pesticides (12 organochlorines and 2 organophosphates) were selected on the basis of their persistence and consumption rates.
Pesticide levels (ppb) were compared with the Mann-Whitney test; significantly higher levels of β-HCH (p=0.04), γ-HCH (p=0.001), δ-HCH (p=0.002), chlorpyrifos (p=0.001), pp'-DDD (p=0.001), and fenvalerate (p=0.001) were found in the case group compared with controls. Antioxidant enzyme levels were significantly decreased among the cases. The wild-type homozygote (TT) was more frequent among the control groups, whereas the heterozygous genotype (Tt) was more frequent in cases than in controls (CI 0.3-1.3, p=0.06). Conclusion: Higher levels of pesticides with endocrine-disrupting potential in cases indicate a possible role of these compounds as one cause of recurrent pregnancy loss. Possibly, increased pesticide levels indicate increased oxidative damage, which has been associated with recurrent miscarriage; this may reflect indirect evidence of toxicity rather than a direct cause. Since both factors are reported to increase risk, individuals with higher levels of these 'toxic compounds', especially those with 'high-risk genotypes', might be more susceptible to recurrent pregnancy loss.
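The group comparison above rests on the Mann-Whitney U statistic, which can be computed directly from pooled ranks. A minimal sketch (the concentration values below are hypothetical, not the study's data):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.
    Tied values receive the average of the ranks they span."""
    pooled = sorted(x + y)
    avg_rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        avg_rank[pooled[i]] = (i + 1 + j) / 2   # mean of 1-based ranks i+1..j
        i = j
    r1 = sum(avg_rank[v] for v in x)            # rank sum of the first group
    u1 = r1 - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1
    return min(u1, u2)

# Hypothetical pesticide residue levels (ppb) in cases vs. controls
case = [4.2, 5.1, 3.8, 6.0, 4.9]
control = [1.1, 2.3, 1.8, 2.9, 2.0]
print(mann_whitney_u(case, control))   # complete separation gives U = 0
```

A small U relative to its null distribution corresponds to the low p-values reported; in practice a statistics package supplies the p-value from U.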

Keywords: paraoxonase, pesticides, PON1, RPL

Procedia PDF Downloads 142
806 Geospatial Technologies in Support of Civic Engagement and Cultural Heritage: Lessons Learned from Three Participatory Planning Workshops for Involving Local Communities in the Development of Sustainable Tourism Practices in Latiano, Brindisi

Authors: Mark Opmeer

Abstract:

The fruitful relationship between cultural heritage and digital technology is evident. Thanks to the development of user-friendly software, an increasing number of heritage scholars use ICT in their research activities, and the implementation of information technology for heritage planning has become a research objective in itself. During the last decades we have witnessed a growing debate and literature about the importance of computer technologies for cultural heritage and ecotourism. Implementing digital technology in support of these domains can indeed be very fruitful for one's research practice; however, because new software develops rapidly, scholars may find it challenging to use these innovations appropriately. This contribution therefore explores the interplay between geospatial technologies (geo-ICT), civic engagement, and cultural heritage and tourism. In this article, we discuss our findings on the use of geo-ICT in support of civic participation, cultural heritage, and sustainable tourism development in the southern Italian district of Brindisi. In the city of Latiano, three workshops were organized that involved members of the local community in identifying and discussing points of interest (POIs) that represent the cultural significance and identity of the area. During the first workshop, a so-called mappa della comunità was created on a touch table with collaborative mapping software, allowing the participants to highlight potential destinations for tourism. Furthermore, two heritage-based itineraries along a selection of the identified POIs were created to make the region attractive for recreational visitors and tourists. These heritage-based itineraries reflect the community's ideas about the cultural identity of the region.
Both trails were subsequently implemented in a dedicated mobile application (app) and evaluated with members of the community during the second workshop, using a mixed-method approach. In the final workshop, the findings of the collaboration, the heritage trails, and the app were evaluated with all participants. Based on these findings, we argue that geospatial technologies have significant potential for involving local communities in heritage planning and tourism development. The workshop participants found it increasingly engaging to share their ideas and knowledge using the digital map on the touch table. Secondly, the use of a mobile application to test the heritage-based itineraries in the field was broadly considered fun and beneficial for enhancing community awareness of, and participation in, local heritage. The app also stimulated the community's awareness of the added value of geospatial technologies for sustainable tourism development in the area. We conclude this article with a number of recommendations intended as a best practice for organizing heritage workshops with similar objectives.

Keywords: civic engagement, geospatial technologies, tourism development, cultural heritage

Procedia PDF Downloads 285
805 Comparison of Two Transcranial Magnetic Stimulation Protocols on Spasticity in Multiple Sclerosis - Pilot Study of a Randomized and Blind Cross-over Clinical Trial

Authors: Amanda Cristina da Silva Reis, Bruno Paulino Venâncio, Cristina Theada Ferreira, Andrea Fialho do Prado, Lucimara Guedes dos Santos, Aline de Souza Gravatá, Larissa Lima Gonçalves, Isabella Aparecida Ferreira Moretto, João Carlos Ferrari Corrêa, Fernanda Ishida Corrêa

Abstract:

Objective: To compare two protocols of transcranial magnetic stimulation (TMS) on quadriceps muscle spasticity in individuals diagnosed with multiple sclerosis (MS). Method: In this clinical crossover study, six adults diagnosed with MS and lower-limb spasticity were randomized to receive one session each of high-frequency (≥5 Hz) and low-frequency (≤1 Hz) TMS over the motor cortex (M1) hotspot for the quadriceps muscle, with a one-week interval between sessions. Spasticity was assessed with the Ashworth scale, and the latency (ms) of the motor evoked potential (MEP) and the central motor conduction time (CMCT) of the bilateral quadriceps were analyzed. Assessments were performed before and after each intervention, and the difference between groups was analyzed using the Friedman test with a significance level of 0.05. Results: All statistical analyses were performed in SPSS Statistics version 26, with significance established at p<0.05 and normality checked with the Shapiro-Wilk test. Parametric data are presented as mean and standard deviation, non-parametric variables as median and interquartile range, and categorical variables as frequency and percentage. There was no clinical change in quadriceps spasticity assessed with the Ashworth scale for either limb under the 1 Hz (p=0.813) or 5 Hz (p=0.232) protocol. For MEP latency, the 5 Hz protocol produced no significant change on the side contralateral to the stimulus (p>0.05), while latency on the ipsilateral side decreased by 0.07 s (p<0.05); the 1 Hz protocol increased latency by 0.04 s on the contralateral side (p<0.05) and decreased it by 0.04 s on the ipsilateral side (p<0.05), with significant differences between the contralateral (p=0.007) and ipsilateral (p=0.014) groups.
For CMCT, the 1 Hz protocol produced no change on either the contralateral (p>0.05) or the ipsilateral side (p>0.05). Under the 5 Hz protocol there was a small decrease in conduction time on the contralateral side (p<0.05) and a decrease of 0.6 s on the ipsilateral side (p<0.05), with a significant difference between groups (p=0.019). Conclusion: A single high- or low-frequency session does not change spasticity, but with the low-frequency protocol latency increased on the stimulated side and decreased on the non-stimulated side, suggesting that inhibiting the motor cortex increases cortical excitability on the opposite side.
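The Friedman test used above compares repeated measures by ranking each subject's values across conditions. Its chi-square statistic can be sketched in a few lines (the latency values below are hypothetical, not the trial's data):

```python
def friedman_statistic(data):
    """Friedman chi-square statistic for repeated measures.
    data: one list per subject, each containing k condition values.
    (No tie correction -- assumes distinct values within each subject.)"""
    n, k = len(data), len(data[0])
    col_rank_sums = [0.0] * k
    for row in data:
        order = sorted(range(k), key=lambda j: row[j])   # rank 1 = smallest
        for rank, j in enumerate(order, start=1):
            col_rank_sums[j] += rank
    return (12.0 * sum(r * r for r in col_rank_sums)
            / (n * k * (k + 1)) - 3.0 * n * (k + 1))

# Hypothetical MEP latencies (ms) for 4 subjects under 3 conditions:
# baseline, post-1 Hz, post-5 Hz
data = [[24.1, 24.3, 24.0],
        [25.0, 25.2, 24.8],
        [23.7, 23.9, 23.5],
        [24.6, 24.9, 24.4]]
print(friedman_statistic(data))
```

The statistic is referred to a chi-square distribution with k-1 degrees of freedom; SPSS reports the corresponding p-value directly.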

Keywords: multiple sclerosis, spasticity, motor evoked potential, transcranial magnetic stimulation

Procedia PDF Downloads 85
804 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network would maintain all sections at a sufficiently high level of functional and structural condition. However, owing to constraints such as budget, manpower, and equipment, it is not possible to carry out maintenance on all needy industrial road sections within a given planning period, so a rational and systematic priority scheme is needed to select and schedule sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking of sections for maintenance based on several factors. In priority setting, difficult decisions are required: is it more important to repair a section in poor functional condition (e.g., an uncomfortable ride) or one in poor structural condition, i.e., in danger of becoming structurally unsound? Any rational priority-setting approach must therefore consider the relative importance of the functional and structural condition of each section. Existing maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, and similar factors; there is a need for a model suited to the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision from among a large number of possible decisions: one that meets a specified management objective subject to various constraints and restrictions. Here the objective is primarily the minimization of the maintenance cost of roads in an industrial area. To define the objective function for the distress model, realistic data must be put into the formulation.
Each type of repair is quantified per stretch, with 1000 m taken as one stretch; the section under study is 3750 m long. These quantities enter an objective function that maximizes the number of repairs per stretch subject to the available quantities. The distresses observed in this section are potholes, surface cracks, rutting, and ravelling, and the distress data were measured manually by recording each distress level over each 1000 m stretch. The maintenance and rehabilitation measures currently followed are based on subjective judgment; hence, a scientific approach is needed to use the limited resources effectively. It is also necessary to determine the pavement performance and deterioration prediction relationships more accurately, together with the economic benefits of the road network with respect to vehicle operating cost, so that the road infrastructure yields the best results from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
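The paper's full LP formulation is not given in the abstract, but a stripped-down version, maximizing the number of repairs subject to a single budget constraint and per-distress demand bounds, illustrates the idea. With one budget constraint, the continuous LP optimum is reached by funding the cheapest repair types first; the unit costs and demands below are hypothetical:

```python
def allocate_repairs(costs, demands, budget):
    """Continuous LP sketch: maximize sum(x_i) subject to
    sum(cost_i * x_i) <= budget and 0 <= x_i <= demand_i.
    With a single budget constraint the optimum is greedy by unit cost."""
    plan = [0.0] * len(costs)
    remaining = budget
    for i in sorted(range(len(costs)), key=lambda i: costs[i]):
        x = min(demands[i], remaining / costs[i])   # fund as much as affordable
        plan[i] = x
        remaining -= costs[i] * x
    return plan

# Hypothetical unit repair costs for potholes, surface cracks, rutting, ravelling,
# and repair quantities demanded over the 3750 m section
costs = [50.0, 20.0, 80.0, 30.0]
demands = [10, 40, 5, 20]
plan = allocate_repairs(costs, demands, budget=1500.0)
print(plan)
```

A full formulation with multiple constraints (manpower, equipment, per-stretch limits) would need a general LP solver rather than this greedy shortcut.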

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 207
803 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements

Authors: Denis A. Sokolov, Andrey V. Mazurkevich

Abstract:

In the modern world there is an increasing demand for highly precise measurements in fields such as aircraft, shipbuilding, and rocket engineering. This has driven the development of measuring instruments capable of determining the coordinates of objects within a range of up to 100 meters with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing their measurement results to a reference measurement on a linear or spatial basis. The reference could be a reference baseline, serving as a set of reference points, or a reference rangefinder capable of measuring angle increments (EDM). The concept of an EDM for reproducing the unit of length has been implemented on a mobile platform that allows angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of original design is employed. The laser radiation travels to corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with the recommendations of the International Bureau of Weights and Measures for the indirect measurement of the transit time of light, according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in vacuum, n is the refractive index of the medium, and F is the femtosecond pulse repetition frequency.
The achieved Type A measurement uncertainty for distances to reflectors 64 m away (N·D/2, where N is an integer), spaced 1 m apart from each other, does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of this study. The Type B uncertainty components are not taken into account either, as the dominant ones do not depend on the chosen coordinate measuring method. The technology is being explored for laboratory applications under controlled environmental conditions, where an advantage in accuracy can be achieved. Overall, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the Type A uncertainty of distance measurements to reflectors can be below 1 micrometer. The results of this research will be used to develop a highly accurate mobile absolute rangefinder for calibrating high-precision laser trackers, laser rangefinders, and other equipment, using a 64-meter laboratory comparator as a reference.
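The pulse-coincidence spacing D/2 = c/(2nF) is simple arithmetic. The abstract does not state the repetition rate F, so the ~60 MHz used below is inferred from the quoted ~2.5 m spacing, not a reported value:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)

def pulse_spacing(rep_rate_hz, n=1.0):
    """Spacing between successive pulse-coincidence points: D/2 = c / (2 n F)."""
    return C / (2.0 * n * rep_rate_hz)

# A repetition rate near 60 MHz reproduces the ~2.5 m spacing quoted above
print(pulse_spacing(60e6))            # vacuum, n = 1
print(pulse_spacing(60e6, n=1.00027)) # slightly shorter in air
```

The measurable distances are then integer multiples N·D/2, which is why the 64 m comparator can be spanned by counting coincidence points.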

Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement

Procedia PDF Downloads 57
802 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder

Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi

Abstract:

With the changing lifestyle and environment around us, the prevalence of the critical and incurable disease has proliferated. One such condition is the neurological disorder which is rampant among the old age population and is increasing at an unstoppable rate. Most of the neurological disorder patients suffer from some movement disorder affecting the movement of their body parts. Tremor is the most common movement disorder which is prevalent in such patients that infect the upper or lower limbs or both extremities. The tremor symptoms are commonly visible in Parkinson’s disease patient, and it can also be a pure tremor (essential tremor). The patients suffering from tremor face enormous trouble in performing the daily activity, and they always need a caretaker for assistance. In the clinics, the assessment of tremor is done through a manual clinical rating task such as Unified Parkinson’s disease rating scale which is time taking and cumbersome. Neurologists have also affirmed a challenge in differentiating a Parkinsonian tremor with the pure tremor which is essential in providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for the tremor patient that keep on checking their health condition by coordinating them with the clinicians and caretakers for early diagnosis and assistance in performing the daily activity. In our research, we focus on developing a system for automatic classification of tremor which can accurately differentiate the pure tremor from the Parkinsonian tremor using a wearable accelerometer-based device, so that adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in the neuro-clinic to assess the upper wrist movement of the patient suffering from Pure (Essential) tremor and Parkinsonian tremor using a wearable accelerometer-based device. 
Four tasks were designed in accordance with the Unified Parkinson’s Disease motor rating scale, which is used to assess rest, postural, intentional and action tremor in such patients. Various features, such as time-frequency domain, wavelet-based and fast Fourier transform based cross-correlation features, were extracted from the tri-axial signal and used as the input feature vector for different supervised and unsupervised learning tools for quantifying tremor severity. A minimum covariance maximum correlation energy comparison index was also developed and used as an input feature for various classification tools for distinguishing the PT and ET tremor types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, with the best performance achieved by the K-nearest neighbors and support vector machine classifiers.
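The pipeline described above (windowed accelerometer features fed to a supervised classifier) can be sketched as follows. This is an illustrative reconstruction on synthetic tri-axial signals, not the study’s code: the sampling rate, feature set, tremor frequencies, and class sizes are all assumptions chosen only to show the shape of such a system.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
fs = 100  # assumed sampling rate (Hz)

def features(window):
    """Simple time- and frequency-domain features from one tri-axial window."""
    rms = np.sqrt(np.mean(window**2, axis=0))            # per-axis RMS amplitude
    spec = np.abs(np.fft.rfft(window, axis=0))
    freqs = np.fft.rfftfreq(window.shape[0], d=1 / fs)
    dom = freqs[np.argmax(spec[1:], axis=0) + 1]         # per-axis dominant frequency (skip DC)
    return np.concatenate([rms, dom])

def synth(freq, n):
    """Synthetic tri-axial tremor windows oscillating near `freq` Hz."""
    t = np.arange(200) / fs
    return [np.column_stack([np.sin(2 * np.pi * (freq + rng.normal(0, 0.3)) * t)
                             + rng.normal(0, 0.2, t.size) for _ in range(3)])
            for _ in range(n)]

# Typical ranges: Parkinsonian rest tremor ~4-6 Hz, essential tremor ~6-12 Hz
X = np.array([features(w) for w in synth(5, 60) + synth(9, 60)])
y = np.array([0] * 60 + [1] * 60)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
print(f"accuracy: {clf.score(Xte, yte):.2f}")
```

On such cleanly separated synthetic classes the dominant-frequency feature does most of the work; real clinical recordings would need the richer wavelet and cross-correlation features the abstract describes.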

Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor

Procedia PDF Downloads 153
801 Biosorption as an Efficient Technology for the Removal of Phosphate, Nitrate and Sulphate Anions in Industrial Wastewater

Authors: Angel Villabona-Ortíz, Candelaria Tejada-Tovar, Andrea Viera-Devoz

Abstract:

Wastewater treatment is an issue of vital importance in times when the impacts of human activities are most evident; it has become an essential task for the normal functioning of society, since untreated effluents put entire ecosystems at risk and, over time, destroy the possibility of sustainable development. Various conventional technologies are used to remove pollutants from water. Agro-industrial waste is a product with the potential to be used as a renewable raw material for the production of energy and chemical products, and its use is beneficial because products with added value are generated from materials that were previously discarded. Considering the benefits of using residual biomass, this project proposes the use of agro-industrial residues from corn crops to produce natural adsorbents aimed at the remediation of water bodies contaminated with large nutrient loads. The adsorption capacity of two biomaterials obtained from the processing of corn stalks was evaluated through batch-system tests. A biochar impregnated with sulfuric acid and thermally activated was synthesized. In addition, cellulose was extracted from the corn stalks and chemically modified with cetyltrimethylammonium chloride in order to quaternize the adsorbent surface. The adsorbents obtained were characterized by thermogravimetric analysis (TGA), scanning electron microscopy (SEM), Fourier-transform infrared spectrometry (FTIR), Brunauer-Emmett-Teller (BET) analysis and X-ray diffraction (XRD), which showed favorable characteristics for the cellulose extraction process. Higher nutrient adsorption capacities were obtained with the biochar, with phosphate being the anion with the best removal percentages. The effect of the initial adsorbate concentration was evaluated, showing that the Freundlich isotherm better describes the adsorption process in most systems.
The adsorbent-phosphate/nitrate systems fit the pseudo-first-order kinetic model better, while the adsorbent-sulfate systems showed a better fit to the pseudo-second-order model, which indicates that both physical and chemical interactions occur in the process. Multicomponent adsorption tests revealed that phosphate anions have a higher affinity for both adsorbents. On the other hand, the negative values of the thermodynamic parameters standard enthalpy (ΔH°) and standard entropy (ΔS°) indicate the exothermic nature of the process, while the standard Gibbs free energy (ΔG°) values indicate that it is spontaneous. The adsorption of anions onto biochar and modified cellulose is therefore spontaneous and exothermic. The use of the evaluated biomaterials is recommended for the treatment of industrial effluents contaminated with sulfate, nitrate and phosphate anions.
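As a minimal illustration of the isotherm fitting mentioned above, the Freundlich model qe = Kf·Ce^(1/n) can be fitted to equilibrium data by nonlinear least squares. The data points below are hypothetical values chosen only to show the procedure; they are not the study’s measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(Ce, Kf, n):
    """Freundlich isotherm: qe = Kf * Ce**(1/n)."""
    return Kf * Ce ** (1.0 / n)

# Illustrative equilibrium data: Ce (mg/L) vs qe (mg/g); hypothetical values
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([3.1, 4.4, 6.0, 8.3, 11.6])

(Kf, n), _ = curve_fit(freundlich, Ce, qe, p0=(1.0, 2.0))
print(f"Kf = {Kf:.2f}, n = {n:.2f}")
```

A Freundlich exponent n > 1 is conventionally read as favorable adsorption; comparing the fitted curve’s residuals against a Langmuir fit is how one would conclude, as the abstract does, that Freundlich describes most systems better.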

Keywords: adsorption, biochar, modified cellulose, corn stalks

Procedia PDF Downloads 180
800 Continuous and Discontinuous Modeling of Wellbore Instability in Anisotropic Rocks

Authors: C. Deangeli, P. Obentaku Obenebot, O. Omwanghe

Abstract:

The study focuses on the analysis of wellbore instability in rock masses affected by weakness planes. Failure in this type of rock can occur in the rock matrix and/or along the weakness planes, depending on the mud weight gradient. In this case, the simple Kirsch solution coupled with a failure criterion cannot supply a suitable scenario for borehole instabilities. Two different numerical approaches have been used to investigate the onset of local failure at the wall of a borehole. For each approach, the influence of the inclination of the weakness planes has been investigated by considering joint sets at 0°, 35° and 90° to the horizontal. The first set of models was carried out with FLAC 2D (Fast Lagrangian Analysis of Continua), treating the rock material as a continuous medium with a Mohr-Coulomb criterion for the rock matrix and the ubiquitous-joint model to account for the presence of the weakness planes. In this model, yield may occur in the solid, along the weak plane, or both, depending on the stress state, the orientation of the weak plane and the material properties of the solid and weak plane. The second set of models was performed with PFC2D (Particle Flow Code). This code is based on the discrete element method and treats the rock material as an assembly of grains bonded by cement-like materials, with pore spaces. The presence of weakness planes is simulated by degrading the bonds between grains along given directions. In general, the results of the two approaches agree. However, the discrete approach seems to capture more complex phenomena related to local failure, in the form of grain detachment at the wall of the borehole. In fact, the presence of weakness planes in the discontinuous medium leads to local instability along the weak planes even under conditions not predicted by the continuous solution.
In general, slip failure locations and directions do not follow the conventional wellbore breakout direction but depend on the internal friction angle and the orientation of the bedding planes. When the weakness planes are at 0° or 90°, the behaviour is similar to that of a continuous rock material, but borehole instability is more severe when the weakness planes are inclined at an angle between 0° and 90° to the horizontal. In conclusion, the results of the numerical simulations show that the prediction of local failure at the wall of the wellbore cannot disregard the presence of weakness planes, and consequently the higher mud weight required for stability at any specific joint inclination. Although the discrete approach can only simulate smaller areas, because of the large number of particles required to generate the rock material, it seems to capture more correctly the occurrence of failure at the microscale and, eventually, the propagation of the failed zone to a larger portion of rock around the wellbore.
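For reference, the continuous baseline against which both numerical models are compared, the Kirsch solution, gives the hoop stress at the wall of a circular borehole under unequal far-field horizontal stresses. A minimal sketch, with hypothetical stress magnitudes rather than values from the study:

```python
import numpy as np

def kirsch_hoop_stress(sH, sh, Pw, theta_deg):
    """Hoop stress at the wall of a circular borehole (Kirsch solution).

    sH, sh    : maximum/minimum horizontal far-field stresses (MPa)
    Pw        : wellbore (mud) pressure (MPa)
    theta_deg : angle from the sH direction (degrees)
    """
    th = np.radians(theta_deg)
    return (sH + sh) - 2.0 * (sH - sh) * np.cos(2.0 * th) - Pw

# Illustrative values (MPa), chosen for demonstration only
sH, sh, Pw = 30.0, 20.0, 10.0
for th in (0, 45, 90):
    print(f"theta={th:3d} deg  sigma_theta={kirsch_hoop_stress(sH, sh, Pw, th):6.1f} MPa")
```

The hoop stress peaks at theta = 90° (the sh direction), i.e. at 3·sH − sh − Pw, which is why conventional breakouts align with the minimum-stress direction; the abstract’s point is that weakness planes shift failure away from this prediction.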

Keywords: continuous-discontinuous, numerical modelling, weakness planes, wellbore, FLAC 2D

Procedia PDF Downloads 497
799 Inpatient Glycemic Management Strategies and Their Association with Clinical Outcomes in Hospitalized SARS-CoV-2 Patients

Authors: Thao Nguyen, Maximiliano Hyon, Sany Rajagukguk, Anna Melkonyan

Abstract:

Introduction: Type 2 diabetes is a well-established risk factor for severe SARS-CoV-2 infection. Uncontrolled hyperglycemia in patients with established or newly diagnosed diabetes is associated with poor outcomes, including increased mortality and hospital length of stay. Objectives: Our study aims to compare three different glycemic management strategies and their association with clinical outcomes in patients hospitalized for moderate to severe SARS-CoV-2 infection. Identifying optimal glycemic management strategies will improve the quality of patient care and improve outcomes. Method: This is a retrospective observational study of patients hospitalized at Adventist Health White Memorial with severe SARS-CoV-2 infection from 11/1/2020 to 02/28/2021. The following inclusion criteria were used: positive SARS-CoV-2 PCR test, age >18 years, diabetes or random glucose >200 mg/dL on admission, oxygen requirement >4 L/min, and treatment with glucocorticoids. Our exclusion criteria were: ICU admission within 24 hours, discharge within five days, death within five days, and pregnancy. The patients were divided into three glycemic management groups: Group 1, managed solely by the primary team; Group 2, by pharmacy; and Group 3, by an endocrinologist. Primary outcomes were average glucose on day 5, change in glucose between days 3 and 5, and average insulin dose on day 5 among groups. Secondary outcomes were ICU upgrade, inpatient mortality, and hospital length of stay. For statistics, we used IBM SPSS, version 28, 2022. Results: Most studied patients were Hispanic, older than 60, and obese (BMI >30). This was the first COVID-19 surge with the Delta variant in an unvaccinated population. Mortality was markedly high (>40%), with longer LOS (>13 days) and a high ICU transfer rate (18%). Most patients had markedly elevated inflammatory markers (CRP, ferritin, and D-dimer).
These markers, in combination with glucocorticoids, resulted in severe hyperglycemia that was difficult to control. Average glucose on day 5 was not significantly different between the primary, pharmacy, and endocrinologist groups (220.5 ± 63.4 vs. 240.9 ± 71.1 vs. 208.6 ± 61.7 mg/dL; P = 0.105). Change in glucose from days 3 to 5 was not significantly different between groups but trended towards favoring the endocrinologist group (-26.6 ± 73.6 vs. 3.8 ± 69.5 vs. -32.2 ± 84.1; P = 0.052). Total daily dose (TDD) of insulin was not significantly different between groups but trended towards a higher TDD in the endocrinologist group (34.6 ± 26.1 vs. 35.2 ± 26.4 vs. 50.5 ± 50.9; P = 0.054). The endocrinologist group used significantly more preprandial insulin than the other groups (91.7% vs. 39.1% vs. 65.9%; P < 0.001). The pharmacy group used more basal insulin than the other groups (95.1% vs. 79.5% vs. 79.2%; P = 0.047). There were no differences among groups in the clinical outcomes: LOS, ICU upgrade, or mortality. Multivariate regression analysis controlling for age, sex, BMI, HbA1c level, renal function, liver function, CRP, D-dimer, and ferritin showed no difference in outcomes among groups. Conclusion: Given the high-risk factors in our population, and despite the efforts of the glycemic management teams, it is unsurprising that no differences in the clinical outcomes of mortality and length of stay were observed.
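The between-group comparison of a continuous outcome such as day-5 glucose corresponds to a one-way ANOVA. The sketch below simulates data using the reported group means and SDs; the group sizes are hypothetical, so this only illustrates the test, not the study’s actual analysis.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Simulated day-5 glucose values (mg/dL) for the three management groups,
# drawn from normal distributions with the reported means/SDs.
# Group sizes are hypothetical.
primary = rng.normal(220.5, 63.4, 44)
pharmacy = rng.normal(240.9, 71.1, 41)
endocrine = rng.normal(208.6, 61.7, 48)

stat, p = f_oneway(primary, pharmacy, endocrine)
print(f"F = {stat:.2f}, p = {p:.3f}")
```

With group differences this small relative to the SDs, the simulated p-value is typically non-significant, mirroring the reported P = 0.105.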

Keywords: glycemic management, strategies, hospitalized, SARS-CoV-2, outcomes

Procedia PDF Downloads 447
798 Fructose-Aided Cross-Linked Enzyme Aggregates of Laccase: An Insight on Its Chemical and Physical Properties

Authors: Bipasa Dey, Varsha Panwar, Tanmay Dutta

Abstract:

Laccase, a multicopper oxidase (EC 1.10.3.2), has been at the forefront as a superior industrial biocatalyst. Laccases are versatile in enabling sustainable and ecological catalytic reactions such as polymerisation, xenobiotic degradation and bioremediation of phenolic and non-phenolic compounds. Regardless of their wide biotechnological applications, critical limiting factors, viz. reusability, retrieval, and storage stability, still prevail and can impede their applicability. Cross-linked enzyme aggregates (CLEAs) have emerged as a promising technique that rehabilitates these essential facets, albeit at the expense of some enzymatic activity. The carrier-free cross-linking method prevails over carrier-bound immobilisation by conferring high productivity and low production cost, owing to the absence of an additional carrier, and by circumventing any non-catalytic ballast that could dilute the volumetric activity. To the best of our knowledge, the ε-amino group of the lysyl residue is considered the best choice for forming a Schiff base with glutaraldehyde. Despite being most preferable, excess glutaraldehyde can bring about disproportionate and undesirable cross-linking within the catalytic site and hence deliver undesirable catalytic losses. Moreover, the surface distribution of lysine residues in Trametes versicolor laccase is significantly sparse. Thus, to mitigate the adverse effect of glutaraldehyde and to scale down the degradation or catalytic loss of the enzyme, cross-linking with inert substances like gelatine, collagen, bovine serum albumin (BSA) or excess lysine is practiced. Analogous to these molecules, sugars are well known as protein stabilisers. They help to retain the structural integrity, specifically the secondary structure, of the protein during aggregation by changing the solvent properties, and they are understood to avert protein denaturation or enzyme deactivation during precipitation.
We prepared cross-linked enzyme aggregates (CLEAs) of laccase from T. versicolor with the aid of sugars. The sugar CLEAs were compared with the classic BSA and glutaraldehyde laccase CLEAs with respect to physico-chemical properties. The activity recovery for the fructose CLEAs was found to be ~20% higher than for the non-sugar CLEAs. Moreover, the kcat/Km values of the sugar CLEAs were two- and three-fold higher than those of the BSA-CLEA and GA-CLEA, respectively. The half-life (t1/2) of the sugar CLEA was higher than that of the GA-CLEAs and the free enzyme, reflecting greater thermal stability. Besides, it demonstrated extraordinarily high pH stability, analogous to the BSA-CLEA. The promising attributes of increased storage stability and recyclability (>80%) give the sugar CLEAs an edge over conventional CLEAs of the corresponding free enzyme. Thus, sugar CLEAs furnish the rudimentary properties required of a biocatalyst and hold many prospects.
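The stability and efficiency metrics quoted above (t1/2 and kcat/Km) follow from standard enzyme kinetics: a first-order deactivation constant kd gives t1/2 = ln(2)/kd, and kcat/Km is the specificity constant. A sketch with hypothetical parameter values, not those measured for the CLEAs:

```python
import math

def half_life(kd_per_hour):
    """Half-life from a first-order deactivation constant: t1/2 = ln(2)/kd."""
    return math.log(2) / kd_per_hour

def catalytic_efficiency(kcat, Km):
    """Specificity constant kcat/Km (1/(M*s) when kcat is in 1/s and Km in M)."""
    return kcat / Km

# Hypothetical parameters for illustration only
kd = 0.05                   # 1/h, thermal deactivation rate constant
kcat, Km = 120.0, 2.0e-4    # 1/s and M
print(f"t1/2 = {half_life(kd):.1f} h")
print(f"kcat/Km = {catalytic_efficiency(kcat, Km):.2e} 1/(M*s)")
```

A two- to three-fold change in kcat/Km, as reported for the sugar CLEAs, scales this specificity constant directly, since kcat and Km are each measured from the same Michaelis-Menten fit.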

Keywords: cross-linked enzyme aggregates, laccase immobilization, enzyme reusability, enzyme stability

Procedia PDF Downloads 100
797 Dialysis Access Surgery for Patients in Renal Failure: A 10-Year Institutional Experience

Authors: Daniel Thompson, Muhammad Peerbux, Sophie Cerutti, Hansraj Bookun

Abstract:

Introduction: Dialysis access is a key component of the care of patients with end-stage renal failure. In our institution, a combined service of vascular surgeons and nephrologists is responsible for the creation and maintenance of arteriovenous fistulas (AVF), Tenckhoff catheters and Hickman/permcath lines. This poster investigates the last 10 years of dialysis access surgery conducted at St. Vincent’s Hospital Melbourne. Method: A cross-sectional retrospective analysis was conducted of patients of St. Vincent’s Hospital Melbourne (Victoria, Australia) using data from the Australasian Vascular Audit (Australian and New Zealand Society for Vascular Surgery). Descriptive demographic analysis was carried out, as well as analysis of operation type, length of hospital stay, postoperative deaths and need for reoperation. Results: 2085 patients with renal failure were operated on between 2011 and 2020. 1315 were male (63.1%) and 770 were female (36.9%). The mean age was 58 (SD 13.8). 92% of patients scored three or greater on the American Society of Anesthesiologists classification system. Almost half had a history of ischaemic heart disease (48.4%), more than half had a history of diabetes (64%), and a majority had hypertension (88.4%). 1784 patients had a creatinine over 150 mmol/L (85.6%); the rest were on dialysis (14.4%). The most common access procedure was AVF creation, with 474 autologous AVFs and 64 prosthetic AVFs. There were 263 Tenckhoff insertions. We performed 160 cadaveric renal transplants. The most common location for AVF formation was brachiocephalic (43.88%), followed by radiocephalic (36.7%) and brachiobasilic (16.67%). Fistulas that required re-intervention were most commonly treated with angioplasty (n=163), followed by thrombectomy (n=136). There were 107 local fistula repairs. Average length of stay was 7.6 days (SD 12).
There were 106 unplanned returns to theatre, most commonly for fistula creation, Tenckhoff insertion or permcath removal (71.7%). There were 8 deaths in the immediate postoperative period. Discussion: Access to dialysis is vital for patients with end-stage kidney disease and requires a multidisciplinary approach from nephrologists, vascular surgeons, and allied health practitioners. Our service provides a variety of dialysis access methods, predominantly fistula creation and Tenckhoff insertion. Patients with renal failure are heavily comorbid, and prolonged hospital admission following surgery is a source of significant healthcare expenditure. AVFs require careful monitoring and maintenance for ongoing utility, and our data reflect the multitude of operations required to maintain usable access. The requirement for dialysis is growing worldwide, and our data demonstrate a local experience of access provision, with preferred methods, common complications and the associated surgical interventions.

Keywords: dialysis, fistula, nephrology, vascular surgery

Procedia PDF Downloads 112
796 Redefining Intellectual Humility in Indian Context: An Experimental Investigation

Authors: Jayashree And Gajjam

Abstract:

Intellectual humility (IH) is defined as a virtuous mean between intellectual arrogance and intellectual self-diffidence by the ‘Doxastic Account of IH’, studied, researched and developed by Western scholars no earlier than 2015 at the University of Edinburgh. Ancient Indian philosophical texts, the Upanisads, written in the Sanskrit language during the later Vedic period (circa 600-300 BCE), have long addressed the virtue of being humble in several stories and narratives. The current research paper questions and revisits these character traits in an Indian context following an experimental method. Based on the subjective reports of more than 400 Indian teenagers and adults, it argues that while a few traits of IH (such as trustworthiness, respectfulness, intelligence, politeness, etc.) are panhuman and pancultural, a few are not. Some attributes of IH (such as proper pride, open-mindedness, awareness of one’s own strength, etc.) may be taken for arrogance by the Indian population, while other qualities of intellectual diffidence, such as agreeableness and surrendering, can be regarded as characteristic of IH. The paper then offers reasoning for this discrepancy, which can be traced back to the ancient Indian (Upaniṣadic) teachings that are still prevalent in many Indian families and still anchor their views on IH. The name Upanisad itself means ‘sitting down near’ (to the Guru, to gain the supreme knowledge of the Self and the Universe and to set ignorance to rest), which is equivalent to three traits among the BIG SEVEN characterized as IH by Western scholars, viz. ‘being a good listener’, ‘curiosity to learn’, and ‘respect for others’ opinions’. The story of Satyakama Jabala (Chandogya Upanisad 4.4-8), who seeks the truth for several years even from the bull, the fire, the swan and the waterfowl, suggests nothing but the ‘need for cognition’ or ‘desire for knowledge’.
Nachiketa (Katha Upanisad), a boy with a pure mind and heart, follows his father’s words and offers himself to Yama (the God of Death); after waiting for Yama for three days and nights, he seeks the knowledge of the mysteries of life and death. Although the main aim of these Upaniṣadic stories is to impart knowledge of life and death and of the Supreme Reality, which can be identified with traits such as ‘curiosity to learn’, one cannot deny that they have much more to offer than mere information about true knowledge, e.g., ‘politeness’, ‘being a good listener’, ‘awareness of one’s own limitations’, etc. Possible future directions for this research include (1) identifying other socio-cultural factors that affect ideas on IH, such as age, gender, caste, type of education, highest qualification, place of residence and source of income, which may be predominant in current Indian society despite the great teachings of the Upaniṣads, and (2) devising measures to impart IH to Indian children, teenagers, and young adults for a harmonious future. The current experimental research can be considered a first step towards these goals.

Keywords: ethics and virtue epistemology, Indian philosophy, intellectual humility, upaniṣadic texts in ancient India

Procedia PDF Downloads 91
795 Problem Solving in Mathematics Education: A Case Study of Nigerian Secondary School Mathematics Teachers’ Conceptions in Relation to Classroom Instruction

Authors: Carol Okigbo

Abstract:

Mathematical problem solving has long been accorded an important place in mathematics curricula at every education level, in both advanced and emerging economies. Classroom approaches to it have varied: teaching for problem-solving, teaching about problem-solving, and teaching mathematics through problem-solving. Problem solving requires engaging in tasks for which the solution methods are not immediately evident, making sense of problems and persevering in solving them by exhibiting appropriate processes, strategies and attitude, supported by adequate exposure. Teachers play important roles in helping students acquire competency in problem-solving; thus, they are expected to be good problem-solvers themselves and to have proper conceptions of problem-solving. Studies show that teachers’ conceptions influence their decisions about what to teach and how to teach it. Therefore, how teachers view their roles in teaching problem-solving will depend on their pedagogical conceptions of problem-solving. If teaching problem-solving is a major component of secondary school mathematics instruction, as recommended by researchers and mathematics educators, then it is necessary to establish teachers’ conceptions, what they do, and how they approach problem-solving. This study is designed to determine secondary school teachers’ conceptions regarding mathematical problem solving and its current situation, how teachers’ conceptions relate to their demographics, and the interaction patterns in the mathematics classroom. There have been many studies of mathematics problem solving, some of which addressed teachers’ conceptions using single-method approaches, thereby presenting only limited views of this important phenomenon. To address the problem more holistically, this study adopted an integrated mixed-methods approach involving a quantitative survey, qualitative analysis of open-ended responses, and ethnographic observations of teachers in class.
Data for the analysis came from a random sample of 327 secondary school mathematics teachers in two Nigerian states, Anambra State and Enugu State, who completed a 45-item questionnaire. Ten of the items elicited demographic information, 11 were open-ended questions, and 25 were Likert-type questions. Of the 327 teachers who responded to the questionnaire, 37 were randomly selected and observed in their classes. Data analysis using ANOVA, t-tests, chi-square tests, and open coding showed that the teachers had different conceptions of problem-solving, which fall into three main themes: practice on exercises and word application problems, a process of solving mathematical problems, and a way of teaching mathematics. Teachers reported that no period is set aside for problem-solving; typically, teachers solve problems on the board, teach problem-solving strategies, and allow students time to struggle with problems on their own. The results show a significant difference between male and female teachers’ conceptions of problem solving, a significant relationship between teachers’ conceptions and academic qualifications, and that teachers who had spent ten years or more teaching mathematics differed significantly in their conceptions of problem-solving from the group with seven to nine years of experience.
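The reported association between conceptions and academic qualifications would typically be tested with a chi-square test of independence on a contingency table. A sketch with illustrative counts, not the study’s data; the row/column layout is an assumption:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = academic qualification levels,
# columns = dominant conception theme
# (exercises / process of solving / way of teaching).
table = np.array([[30, 25, 15],
                  [20, 40, 27],
                  [10, 35, 45]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A small p-value here would indicate, as the study reports, that conception theme is not independent of qualification level; `expected` holds the counts expected under independence.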

Keywords: conceptions, education, mathematics, problem solving, teacher

Procedia PDF Downloads 75
794 Hypersensitivity Reactions Following Intravenous Administration of Contrast Medium

Authors: Joanna Cydejko, Paulina Mika

Abstract:

Hypersensitivity reactions are side effects of medications that resemble an allergic reaction. Anaphylaxis is a generalized, severe allergic reaction of the body caused by exposure to a specific agent at a dose tolerated by a healthy body. The most common causes of anaphylaxis are food (about 70%), Hymenoptera venoms (22%), and medications (7%); in about 1% of people, the cause of the anaphylactic reaction cannot be identified despite detailed diagnostics. Contrast media are anaphylactic agents whose mechanism is unknown; hypersensitivity reactions to them can occur through both immunological and non-immunological mechanisms. Symptoms of anaphylaxis occur within a few seconds to several minutes after exposure to the allergen. Contrast agents are chemical compounds that make it possible to visualize or improve the visibility of anatomical structures. In computed tomography diagnostics, the preparations currently used are derivatives of the triiodinated benzene ring. Their pharmacokinetic and pharmacodynamic properties, i.e., osmolality, viscosity, low chemotoxicity and high hydrophilicity, contribute to better tolerance of the substance by the patient’s body. In MRI diagnostics, macrocyclic gadolinium contrast agents are administered during examinations. The aim of this study is to present results on the number and severity of anaphylactic reactions in patients of all age groups undergoing diagnostic imaging with intravenous administration of contrast agents: non-ionic iodinated agents in CT and macrocyclic gadolinium agents in MRI. A retrospective assessment of the number of adverse reactions after contrast administration was carried out on the basis of data from the Department of Radiology of the University Clinical Center in Gdańsk, and it was assessed whether the agents’ different physicochemical properties had an impact on the incidence of acute complications.
Adverse reactions were classified according to the severity of the patient’s condition and the diagnostic method used. Complications following the administration of a contrast medium in the form of acute anaphylaxis accounted for less than 0.5% of all diagnostic procedures performed with a contrast agent. In the analysed period from January to December 2022, 34,053 CT scans and 15,279 MRI examinations with contrast medium were performed. The total number of acute complications was 21, of which 17 were complications of iodine-based contrast agents and 5 of gadolinium preparations. The introduction of state-of-the-art contrast formulations was an important step toward improving the safety and tolerability of contrast agents used in imaging. Currently, contrast agents administered to patients are considered to be among the best-tolerated preparations used in medicine. However, like any drug, they can be responsible for adverse reactions resulting from their toxic effects. The increase in the number of imaging tests performed with contrast agents has a direct impact on the number of adverse events associated with their administration. Despite the low risk of anaphylaxis, this risk should not be marginalized. The growing exposure associated with the mass performance of radiological procedures using contrast agents makes knowledge of the rules of conduct in the event of hypersensitivity symptoms to these preparations essential.

Keywords: anaphylaxis, contrast medium, diagnostics, medical imaging

Procedia PDF Downloads 61
793 Electrohydrodynamic Patterning for Surface Enhanced Raman Scattering for Point-of-Care Diagnostics

Authors: J. J. Rickard, A. Belli, P. Goldberg Oppenheimer

Abstract:

Medical diagnostics, environmental monitoring, homeland security and forensics increasingly demand specific and field-deployable analytical technologies for quick point-of-care diagnostics. Although technological advancements have made optical methods well suited for miniaturization, a highly sensitive detection technique for minute sample volumes is required. Raman spectroscopy is a well-known analytical tool but yields very weak signals and hence is unsuitable for trace-level analysis. Enhancement via localized optical fields (surface plasmon resonances) on nanoscale metallic materials generates huge signals in surface-enhanced Raman scattering (SERS), enabling single-molecule detection. This enhancement can be tuned by manipulating the surface roughness and architecture at the sub-micron level. Nevertheless, the development and application of SERS have been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplexable and addressable SERS substrates with high enhancements is of profound interest for SERS-based sensing devices. While most SERS substrates are manufactured by conventional lithographic methods, the development of a cost-effective approach to create nanostructured surfaces is a much sought-after goal in the SERS community. Here, a method is established to create controlled, self-organized, hierarchical nanostructures using hierarchical electrohydrodynamic (HEHD) instabilities. The created structures are readily fine-tuned, which is an important requirement for optimizing SERS to obtain the highest enhancements. HEHD pattern formation enables the fabrication of multiscale 3D structured arrays as SERS-active platforms. Importantly, each of the HEHD-patterned individual structural units yields a considerable SERS enhancement, enabling each single unit to function as an isolated sensor.
Each of the formed structures can be effectively tuned and tailored to provide high SERS enhancement while arising from different HEHD morphologies. The HEHD fabrication of sub-micrometer architectures is straightforward and robust, providing an elegant route for high-throughput biological and chemical sensing. The superior detection properties and the ability to fabricate SERS substrates at the miniaturized scale will facilitate the development of advanced and novel opto-fluidic devices, such as portable detection systems, and will offer numerous applications in biomedical diagnostics, forensics, environmental monitoring and homeland security.

Keywords: hierarchical electrohydrodynamic patterning, medical diagnostics, point-of care devices, SERS

Procedia PDF Downloads 344
792 Mechanism of Veneer Colouring for Production of Multilaminar Veneer from Plantation-Grown Eucalyptus Globulus

Authors: Ngoc Nguyen

Abstract:

Large plantations of Eucalyptus globulus have been established for pulpwood production. This resource is not suitable for the production of decorative products, principally due to low wood grades and a “dull” appearance, but many trials have already been undertaken for the production of veneer and veneer-based engineered wood products, such as plywood and laminated veneer lumber (LVL). The manufacture of veneer-based products has recently been identified as an unprecedented opportunity to promote higher-value utilisation of plantation resources. However, many uncertainties remain regarding the impacts of the inferior wood quality of young plantation trees on product recovery and value, and with respect to optimal processing techniques. Moreover, the quality of veneer and veneer-based products is far from optimal, as the trees are young and have small diameters, and the veneers show significant colour variation, which affects the added value of the final products. Developing production methods that enhance the appearance of low-quality veneer would provide great potential for the production of high-value wood products such as furniture, joinery, flooring and other appearance products. One method of enhancing the appearance of low-quality veneer, developed in Italy, involves the production of multilaminar veneer, also named “reconstructed veneer”. An important stage of multilaminar production is colouring the veneer, which can be achieved by dyeing it with dyes of different colours depending on the type of appearance product, its design and market demand. Although veneer dyeing technology is well advanced in Italy, it has focused on poplar veneer from plantations whose wood is characterized by low density, even colour, few defects and high permeability. Conversely, the majority of plantation eucalypts have medium to high density, many defects, uneven colour and low permeability.
Therefore, a detailed study is required to develop dyeing methods suitable for colouring eucalypt veneers. A brown reactive dye is used for the veneer colouring process. Veneers from sapwood and heartwood at two moisture content levels are used in the colouring experiments: green veneer and veneer dried to 12% MC. Prior to dyeing, all samples are pre-treated. Both soaking (dipping) and vacuum-pressure methods are used in the study to compare the results and select the most efficient method for veneer dyeing. To date, colour measurements in the CIELAB colour system have shown significant differences in the colour of the undyed veneers produced from heartwood. According to the colour measurements, the colour became moderately darker with increasing sodium chloride concentration, compared to the control samples. It is too early to identify the most suitable dye solution, as trials varying dye concentration, dyeing temperature and dyeing time have not yet been completed. The dye will be applied with and without UV absorbent once all trials using the optimal colouring parameters are completed.
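The CIELAB comparison between dyed and control veneers amounts to a colour-difference computation. A minimal sketch follows; the L*, a*, b* readings are hypothetical values for illustration only, not measurements from the study.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two CIELAB triples (L*, a*, b*)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings: an undyed control veneer vs. a veneer dyed with a
# brown reactive dye (values invented for illustration).
control = (72.4, 8.1, 21.3)   # lighter, natural heartwood colour
dyed = (48.9, 12.6, 24.7)     # moderately darker after dyeing

dE = delta_e_ab(control, dyed)
darkening = control[0] - dyed[0]   # positive => dyed sample is darker
print(f"delta E*ab = {dE:.1f}, delta L* = {darkening:.1f}")
```

A ΔE*ab of a few units is typically perceptible, so comparing ΔE between treatments gives a compact measure of how strongly each dyeing condition changed the veneer colour.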

Keywords: Eucalyptus globulus, veneer colouring/dyeing, multilaminar veneer, reactive dye

Procedia PDF Downloads 348
791 Blackcurrant-Associated Rhabdovirus: New Pathogen for Blackcurrants in the Baltic Sea Region

Authors: Gunta Resevica, Nikita Zrelovs, Ivars Silamikelis, Ieva Kalnciema, Helvijs Niedra, Gunārs Lācis, Toms Bartulsons, Inga Moročko-Bičevska, Arturs Stalažs, Kristīne Drevinska, Andris Zeltins, Ina Balke

Abstract:

Newly discovered viruses provide novel knowledge for basic phytovirus research, serve as tools for biotechnology and can help identify epidemic outbreaks. Blackcurrant-associated rhabdovirus (BCaRV) was discovered in a USA germplasm collection, in samples originating from Russia and France. As it was reported in one accession originating from France, it is unclear whether the material was already infected when it entered the USA or became infected while in the USA collection. For this reason, BCaRV was defined as a non-EU virus. According to the ICTV classification, BCaRV is a representative of the species Blackcurrant betanucleorhabdovirus in the genus Betanucleorhabdovirus (family Rhabdoviridae). Nevertheless, the impact of BCaRV on its host, its transmission mechanisms and its vectors are still unknown. In an RNA-seq data pool from a high-throughput sequencing (HTS) study of resistance genes in Ribes plants, we observed differences between the gene transcript heat maps of sample groups. Additional analysis of the whole data pool (393,660,492 150-bp read pairs in total) with rnaSPAdes v3.13.1 yielded a 14,424-base contig with an average coverage of 684x, sharing 99.5% identity (EMBOSS Needle) with the previously reported first complete genome of BCaRV (MF543022.1). This finding proved the presence of BCaRV in the EU and indicated that it might be a relevant pathogen. In this study, leaf tissue from twelve asymptomatic blackcurrant cv. Mara Eglite plants (tested negative for blackcurrant reversion virus (BRV)) from Dobele, Latvia (56°36'31.9"N, 23°18'13.6"E) was collected and used for total RNA isolation with the RNeasy Plant Mini Kit with minor modifications, followed by plant rRNA removal with a RiboMinus Plant Kit for RNA-Seq. HTS libraries were prepared using the MGI Easy RNA Directional Library Prep Set for 16 reactions to obtain 150-bp paired-end reads. Libraries were pooled, circularised, cleaned and sequenced on a DNBSEQ-G400 using a PE150 flow cell.
Additionally, all samples were tested by RT-PCR, and the amplicons were directly sequenced by the Sanger method. The contig representing the genome of BCaRV isolate Mara Eglite was deposited in the European Nucleotide Archive under accession number OU015520. These findings constitute the second piece of evidence for the presence of this particular virus in the EU, and further research on BCaRV prevalence in Ribes from other geographical areas should be performed. As there is no information on the impact of BCaRV on its host, this should be investigated, especially given that mixed infections with BRV and nucleorhabdoviruses have been reported.
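The 99.5% identity reported above comes from a global alignment; as a minimal sketch of how percent identity is computed from an aligned pair (identical positions divided by the full alignment length, as EMBOSS Needle reports it), with toy sequences that are not actual BCaRV data:

```python
def percent_identity(aln1, aln2):
    """Percent identity over a pairwise alignment: identical (non-gap)
    positions divided by the total alignment length, gaps included."""
    assert len(aln1) == len(aln2), "aligned sequences must be equal length"
    matches = sum(a == b and a != '-' for a, b in zip(aln1, aln2))
    return 100.0 * matches / len(aln1)

# Toy aligned fragment (illustrative only): one gap and one substitution.
ref   = "ATGGCGTAC-GATTACA"
query = "ATGGCGTACCGATTGCA"
print(f"{percent_identity(ref, query):.1f}% identity")
```

Applied to a 14,424-base contig, a 99.5% figure of this kind corresponds to roughly seventy mismatched or gapped positions over the whole genome.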

Keywords: BCaRV, Betanucleorhabdovirus, Ribes, RNA-seq

Procedia PDF Downloads 183
790 Effects of Prescribed Surface Perturbation on NACA 0012 at Low Reynolds Number

Authors: Diego F. Camacho, Cristian J. Mejia, Carlos Duque-Daza

Abstract:

The recent widespread use of Unmanned Aerial Vehicles (UAVs) has fueled a renewed interest in the efficiency and performance of airfoils, particularly for applications at the low and moderate Reynolds numbers typical of this kind of vehicle. Most previous efforts in the aeronautical industry regarding aerodynamic efficiency have focused on high-Reynolds-number applications, typical of commercial airliners and large aircraft. However, in order to increase efficiency and boost the performance of these UAVs, it is necessary to explore new alternatives in airfoil design and in the application of drag reduction techniques. The objective of the present work is to analyse and compare the performance of a standard NACA 0012 profile and one featuring a wall protuberance, or surface perturbation. A computational model based on the finite volume method is employed to evaluate the effect of geometrical distortions on the wall. The performance evaluation is carried out in terms of variations of the drag and lift coefficients for the given profile. In particular, the aerodynamic performance of the new design, i.e. the airfoil with a surface perturbation, is examined under incompressible, subsonic, transient flow conditions. The perturbation considered is a shaped protrusion prescribed as a small surface deformation on the top wall of the aerodynamic profile. The ultimate goal of including such a controlled, smooth artificial roughness is to alter the turbulent boundary layer. The present work shows that such a modification has a dramatic impact on the aerodynamic characteristics of the airfoil and, if properly adjusted, a positive one. The computational model was implemented using the unstructured, FVM-based open-source C++ platform OpenFOAM.
A number of numerical experiments were carried out at a Reynolds number of 5×10⁴, based on the chord length and the free-stream velocity, and at angles of attack of 6° and 12°. A Large Eddy Simulation (LES) approach was used, together with the dynamic Smagorinsky subgrid-scale (SGS) model, in order to account for the effect of the small turbulent scales. The impact of the surface perturbation on the performance of the airfoil is judged in terms of changes in the drag and lift coefficients, as well as alterations of the main characteristics of the turbulent boundary layer on the upper wall. A dramatic change in the overall performance can be appreciated, including a substantial increase in the lift-to-drag ratio at all angles and a reduction in the size of the laminar separation bubble (LSB) at the 12° angle of attack.
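The non-dimensional quantities used above can be illustrated with a short post-processing sketch of the kind typically applied to integrated forces from a 2D simulation. The free-stream values and force figures below are assumptions chosen so that Re works out to 5×10⁴; they are not data from the study.

```python
# Reynolds number and force coefficients as typically post-processed from
# integrated aerodynamic forces (all numbers are illustrative assumptions).
rho = 1.225        # air density, kg/m^3
U = 7.3            # free-stream velocity, m/s (assumed)
c = 0.1            # chord length, m (assumed)
nu = 1.46e-5       # kinematic viscosity of air, m^2/s

Re = U * c / nu               # chord-based Reynolds number
q = 0.5 * rho * U ** 2        # dynamic pressure, Pa
span = 1.0                    # 2D case: forces per unit span

def coefficient(force):
    """Normalise an aerodynamic force (per unit span) by q * c * span."""
    return force / (q * c * span)

lift, drag = 1.9, 0.21        # hypothetical integrated forces, N/m
cl, cd = coefficient(lift), coefficient(drag)
print(f"Re = {Re:.0f}, Cl = {cl:.3f}, Cd = {cd:.3f}, L/D = {cl / cd:.1f}")
```

Comparing Cl/Cd between the clean and perturbed airfoils at the same Re and angle of attack is the comparison the abstract reports.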

Keywords: CFD, LES, Lift-to-drag ratio, LSB, NACA 0012 airfoil

Procedia PDF Downloads 385
789 A Method Intensive Top-down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India

Authors: Rituparna Pal, Faiz Ahmed

Abstract:

Neighbourhood energy efficiency is a recently emerged term addressing the quality of the urban stratum of the built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged the policymakers of developing urban-scale cities to envision plans under the aegis of urban-scale sustainability. The importance of neighbourhood energy efficiency has been realised only lately, just as the cities, towns and other areas comprising this massive global urban stratum have started facing strong blows from climate change, the energy crisis, rising costs and an alarming shortfall in the quality of life that urban areas require. This step towards urban sustainability can therefore be described as a 'retrofit action', intended to repair an already affected urban structure. Even if we pursue energy efficiency for existing cities and urban areas, the initial layer remains, for which a complete model of urban sustainability still lacks definition. Urban sustainability is a broad term with numerous parameters and policies through which the loop can be closed. Neighbourhood energy efficiency can be an integral part of it, where neighbourhood-scale indicators, block-level indicators and building-physics parameters can be understood, analysed and synthesised to help produce guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy efficiency itself but also in important parameters such as quality of life, access to green space, access to daylight, outdoor comfort and natural ventilation. Apart from designing less energy-hungry buildings, it is necessary to create a built environment that puts less stress on buildings to consume more energy. Much analysis of the literature has been done in Western countries, prominently in Spain and Paris, and also in Hong Kong, leaving a distinct gap in the Indian scenario in exploring sustainability at the urban stratum.
The site for the study has been selected in the upcoming capital city of Amaravati, and the approach can be replicated for similar neighbourhood typologies in the area. The paper proposes a methodical framework to quantify energy and sustainability indices in detail, involving several macro-, meso- and micro-level covariates and parameters. Several iterations have been made at both macro and micro levels and subjected to simulation, computation and mathematical models, and finally to comparative analysis. Parameters at all levels are analysed to identify the best-case scenarios, which are in turn extrapolated to the macro level, culminating in a proposed model for an energy-efficient neighbourhood and worked-out guidelines with the derived significances and correlations.

Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters

Procedia PDF Downloads 175
788 Photoemission Momentum Microscopy of Graphene on Ir (111)

Authors: Anna V. Zaporozhchenko, Dmytro Kutnyakhov, Katherina Medjanik, Christian Tusche, Hans-Joachim Elmers, Olena Fedchenko, Sergey Chernov, Martin Ellguth, Sergej A. Nepijko, Gerd Schoenhense

Abstract:

Graphene reveals a unique electronic structure that predetermines many intriguing properties, such as massless charge carriers, optical transparency and a high velocity of fermions at the Fermi level, opening a wide horizon of future applications. Hence, a detailed investigation of the electronic structure of graphene is crucial. The method of choice is angle-resolved photoemission spectroscopy (ARPES). Here we present experiments using time-of-flight (ToF) momentum microscopy, an alternative form of ARPES using full-field imaging of the whole Brillouin zone (BZ) and simultaneous acquisition of up to several hundred energy slices. Unlike conventional ARPES, k-microscopy is not limited in its simultaneous k-space access. We have recorded the whole first BZ of graphene on Ir(111), including all six Dirac cones. As the excitation source we used synchrotron radiation from BESSY II (Berlin) at the U125-2 NIM, providing linearly polarized (both p- and s-polarized) VUV radiation. The instrument uses a delay-line detector for single-particle detection up to the 5 Mcps range and parallel energy detection via ToF recording. In this way, we gather a 3D data stack I(E, kx, ky) of the full valence electronic structure in approximately 20 minutes. Band dispersion stacks were measured in the energy range from 14 eV up to 23 eV in steps of 1 eV. The linearly dispersing graphene bands at all six K and K' points were recorded simultaneously. We find clear features of hybridization with the substrate, in particular in the linear dichroism in the angular distribution (LDAD). Recording the whole Brillouin zone of graphene/Ir(111) revealed new features. First, the intensity differences (i.e. the LDAD) are very sensitive to the interaction of the graphene bands with substrate bands. Second, the dark corridors were investigated in detail for both p- and s-polarized radiation.
They appear as local distortions of the photoelectron current distribution and are induced by quantum-mechanical interference of the graphene sublattices. The dark corridors are located in different areas of the six Dirac cones and show chiral behaviour with a mirror plane along the vertical axis. Moreover, two out of the six have an oval shape, while the rest are more circular. This clearly indicates an orientation dependence with respect to the E vector of the incident light. Third, a pattern of faint but very sharp lines, strongly reminiscent of Kikuchi lines in diffraction, is visible at energies around 22 eV. In conclusion, the simultaneous study of all six Dirac cones is crucial for a complete understanding of the dichroism phenomena and the dark corridors.
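A 3D data stack I(E, kx, ky) of this kind lends itself to simple array slicing. The sketch below uses synthetic data to illustrate extracting a constant-energy momentum map and forming an LDAD-style normalised asymmetry between two polarization stacks; the array shapes, count levels and asymmetry definition are illustrative assumptions, not the experiment's actual analysis pipeline.

```python
import numpy as np

# Synthetic stand-ins for two measured stacks I(E, kx, ky), one per light
# polarization; a real measurement holds ~100 energy slices of the full BZ.
rng = np.random.default_rng(0)
n_E, n_k = 100, 128
energies = np.linspace(14.0, 23.0, n_E)  # energy axis, eV
I_p = rng.poisson(50.0, size=(n_E, n_k, n_k)).astype(float)
I_s = rng.poisson(40.0, size=(n_E, n_k, n_k)).astype(float)

def energy_slice(stack, energies, E):
    """Return the constant-energy momentum map closest to energy E."""
    return stack[int(np.argmin(np.abs(energies - E)))]

# Normalised asymmetry between the two polarization geometries, one common
# way to quantify a linear dichroism signal (values lie in [-1, 1]).
ldad = (I_p - I_s) / (I_p + I_s + 1e-12)

cut = energy_slice(ldad, energies, 22.0)  # momentum map near 22 eV
print(cut.shape)
```

Slicing the asymmetry stack near 22 eV is the kind of cut where the sharp Kikuchi-like lines mentioned above would be inspected.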

Keywords: band structure, graphene, momentum microscopy, LDAD

Procedia PDF Downloads 339
787 Interacting with Multi-Scale Structures of Online Political Debates by Visualizing Phylomemies

Authors: Quentin Lobbe, David Chavalarias, Alexandre Delanoe

Abstract:

The ICT revolution has given birth to an unprecedented world of digital traces and has impacted a wide number of knowledge-driven domains such as science, education and policy making. Nowadays, we are fed daily by unlimited flows of articles, blogs, messages, tweets, etc. The internet itself can thus be considered an unsteady hyper-textual environment where websites emerge and expand every day. But there are structures inside knowledge. A given text can always be studied in relation to others or in light of a specific socio-cultural context. By way of their textual traces, human beings call out to each other: hypertext citations, retweets, vocabulary similarity, etc. We are in fact the architects of a giant web of elements of knowledge whose structures and shapes convey their own information. The global shapes of these digital traces represent a source of collective knowledge, and the question of their visualization remains an open challenge. How can we explore, browse and interact with such shapes? In order to navigate across these growing constellations of words and texts, interdisciplinary innovations are emerging at the crossroads between the social and computational sciences. In particular, complex-systems approaches now make it possible to reconstruct the hidden structures of textual knowledge by means of multi-scale objects of research such as semantic maps and phylomemies. Phylomemy reconstruction is a generic method related to the co-word analysis framework. Phylomemies aim to reveal the temporal dynamics of large corpora of textual content by performing inter-temporal matching on extracted knowledge domains in order to identify their conceptual lineages. This study addresses the question of visualizing the global shapes of online political discussions related to the French presidential and legislative elections of 2017.
We aim to build phylomemies on top of a dedicated collection of thousands of French political tweets enriched with archived contemporary news web articles. Our goal is to reconstruct the temporal evolution of the online debates fueled by each political community during the elections. To that end, we introduce an iterative data exploration methodology implemented and tested within the free software Gargantext. There we combine synchronic and diachronic axes of visualization to reveal the dynamics of our corpora of tweets and web pages, as well as their inner syntagmatic and paradigmatic relationships. In doing so, we aim to provide researchers with innovative methodological means to explore online semantic landscapes in a collaborative and reflective way.
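Inter-temporal matching, the core step of phylomemy reconstruction, can be sketched as a set-similarity link between the term sets of knowledge domains in successive periods. The terms and the Jaccard threshold below are illustrative assumptions, not Gargantext's actual implementation.

```python
def jaccard(a, b):
    """Jaccard similarity between two term sets."""
    return len(a & b) / len(a | b)

# Toy knowledge domains (term sets) extracted for three successive periods;
# a conceptual-lineage link is drawn when similarity exceeds a threshold.
periods = [
    {"primaire", "candidat", "debat"},
    {"candidat", "debat", "programme"},
    {"programme", "legislatives", "majorite"},
]
threshold = 0.3
links = [
    (t, t + 1, round(jaccard(periods[t], periods[t + 1]), 2))
    for t in range(len(periods) - 1)
    if jaccard(periods[t], periods[t + 1]) >= threshold
]
print(links)
```

Here the first pair of periods shares enough vocabulary to form a lineage, while the topic shift between the second and third breaks it, which is exactly the branching structure a phylomemy visualises.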

Keywords: online political debate, French election, hyper-text, phylomemy

Procedia PDF Downloads 185
786 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness

Authors: Keda Li, Hong Hu

Abstract:

Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative-Poisson's-ratio behavior and tunable properties. Conventional auxetic lattice structures, whose deformation is governed by a bending-dominated mechanism, have faced the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame; the stretching-dominated deformation mechanism was established according to Maxwell's stability criterion. Finite element (FE) models of 2D lattice structures made of stainless steel were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process and compressive elastic properties. Based on the computational simulation, a parametric analysis was performed to investigate the effect of the structural parameters on Poisson's ratio and the mechanical properties. A geometrical optimization was then implemented to achieve the optimal Poisson's ratio for maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a corresponding 3D geometrical configuration by an orthogonal splicing method. The numerical results for the 2D and 3D structures under quasi-static compressive loading were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios and specific energy absorption.
As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson's ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability and yield strength of the proposed structure are adjustable through different combinations of the joint angle, strut thickness and length-to-width ratio of the representative unit cell. The numerical predictions of this study suggest that the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a concurrent requirement for load-bearing capacity. In future research, experimental analysis is required to validate the numerical simulation.
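Maxwell's stability criterion mentioned above reduces, for a 2D pin-jointed frame with b struts and j joints, to the count M = b - 2j + 3, where M >= 0 suggests a stretching-dominated (stiff) lattice and M < 0 a mechanism, i.e. a bending-dominated one. The strut and joint counts below are illustrative, not the paper's actual unit cell.

```python
def maxwell_2d(b, j):
    """Maxwell's number for a 2D pin-jointed frame: M = b - 2j + 3.
    M >= 0 indicates a stretching-dominated lattice; M < 0 a mechanism,
    so the frame deforms by bending when joints are rigid."""
    return b - 2 * j + 3

# Illustrative counts only (b struts, j joints), not the paper's geometry.
for name, b, j in [("fully triangulated cell", 11, 7),
                   ("regular hexagonal honeycomb cell", 6, 6)]:
    M = maxwell_2d(b, j)
    kind = "stretching-dominated" if M >= 0 else "bending-dominated"
    print(f"{name}: M = {M} ({kind})")
```

The same count applied to a candidate unit cell is how one checks, before any FE analysis, whether a design can exploit the stiffer stretching-dominated mechanism.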

Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb

Procedia PDF Downloads 86
785 The Effect of Aerobics and Yogic Exercise on Selected Physiological and Psychological Variables of Middle-Aged Women

Authors: A. Pallavi, N. Vijay Mohan

Abstract:

A nation can be economically progressive only when its citizens have sufficient capacity to work efficiently and increase productivity. Good health must therefore be regarded as a primary need of the community. It supports the growth and development of the body and the mind, which in turn lead to the progress and prosperity of the nation. Optimum growth is a necessity for efficient existence in a biologically adverse and economically competitive world. It is also necessary for the execution of daily routine work. Yoga is a method, or a system, for the complete development of the human personality. It can be further described as the all-round development of the body, mind, morality, intellect and soul of a being. Sri Aurobindo defines yoga as 'a methodical effort towards self-perfection by the development of the potentialities in the individual.' Aerobic exercise is any activity that uses large muscle groups, can be maintained continuously and is rhythmic in nature. It is a type of exercise that overloads the heart and lungs and causes them to work harder than at rest. The important idea behind aerobic exercise today is to get up and get moving: there are more activities than ever to choose from, whether a new activity or an old one, and the aim is to find something enjoyable that keeps the heart rate elevated for a continuous period, moving towards a healthier life. Middle-aged men were selected and served as the subjects for the purpose of this study. The selected subjects were in the age group of 30 to 40 years. After reviewing the literature and consulting experts in yoga and aerobic training, the investigator chose variables specifically related to middle-aged men. The selected physiological variables are pulse rate, diastolic blood pressure, systolic blood pressure, percent body fat and vital capacity. The selected psychological variables are job anxiety and occupational stress.
The study used a random group design consisting of an aerobic exercise group and a yogic exercise group. The subjects (N=60) were randomly divided into three equal groups of twenty middle-aged men each, named as follows: 1. Experimental group I, the aerobic exercise group; 2. Experimental group II, the yogic exercise group; 3. The control group. All groups were given a pre-test prior to the experimental treatment. The experimental groups participated in their respective training programmes for twenty-four weeks, six days a week, throughout the study. Tests were administered prior to training (pre-test), after the twelfth week (second test) and after the twenty-fourth week (post-test) of the training schedule.

Keywords: pulse rate, diastolic blood pressure, systolic blood pressure, percent body fat, vital capacity, job anxiety, occupational stress, aerobic exercise, yogic exercise

Procedia PDF Downloads 442
784 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design

Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez

Abstract:

Coffee is the second most consumed commodity worldwide, yet it also generates colossal amounts of waste. Proper management of coffee waste includes converting it into products with higher added value, in order to achieve a sustainable economic and ecological footprint and protect the environment. Accordingly, studies on the recovery of coffee waste have become increasingly relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste stream of the entire coffee industry. The facts that SCGs have no economic value, are abundant in nature and industry, do not compete with agriculture and, above all, have a high oil content (between 7-15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourage their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, the conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical and efficient extraction technique is crucial to scaling up the process and ensuring a more environmentally friendly production. In this perspective, the aim of this work was a statistical study to identify an efficient strategy for oil extraction with n-hexane using indirect sonication. The coffee waste used in this work was a mixture of Arabica and Robusta. The effects of temperature, sonication time and solvent-to-solid ratio on the oil yield were statistically investigated with a 2³ Central Composite Rotatable Design (CCRD). The results were analyzed using the STATISTICA 7 StatSoft software. The CCRD showed that all the variables tested had a significant effect (P < 0.05) on the process output.
Validation of the model by analysis of variance (ANOVA) showed a good fit to the experimental results at a 95% confidence interval, and the predicted-versus-experimental values plot confirmed a satisfactory correlation between model and data. The optimum experimental conditions were identified from the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 °C, 56.6 min and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirm the efficiency of an ultrasound bath for oil extraction as a more economical, green and efficient alternative to the Soxhlet method.
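The 2³ CCRD run plan used above follows a standard recipe: factorial corners, axial points at the rotatable distance alpha = (2^k)^(1/4), and centre replicates. A minimal sketch of generating the coded design points follows; the number of centre points is an assumption, as the abstract does not state it.

```python
from itertools import product

def ccrd_points(k, n_center=6):
    """Coded design points of a central composite rotatable design with k
    factors: 2**k factorial corners, 2*k axial points at the rotatable
    distance alpha = (2**k) ** 0.25, plus n_center centre replicates."""
    alpha = (2 ** k) ** 0.25
    factorial = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

# Three coded factors: temperature, sonication time, solvent-to-solid ratio.
design = ccrd_points(3)
print(len(design), round((2 ** 3) ** 0.25, 3))  # 20 runs, alpha ~ 1.682
```

Each coded point is then mapped back to real units (e.g. -1/+1 spanning the chosen temperature range) before running the extractions, and a quadratic response surface is fitted to the resulting yields.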

Keywords: coffee waste, optimization, oil yield, statistical planning

Procedia PDF Downloads 118
783 A Survey of Digital Health Companies: Opportunities and Business Model Challenges

Authors: Iris Xiaohong Quan

Abstract:

The global digital health market reached 175 billion U.S. dollars in 2019 and is expected to grow at about 25% CAGR to over 650 billion USD by 2025. Different terms, such as digital health, e-health, mHealth and telehealth, have been used in the field, which can sometimes cause confusion. The term digital health was originally introduced to refer specifically to the use of interactive media, tools, platforms, applications and solutions that are connected to the Internet to address the health concerns of providers as well as consumers. While mHealth emphasizes the use of mobile phones in healthcare, telehealth means using technology to deliver clinical health services to patients remotely. According to the FDA, “the broad scope of digital health includes categories such as mobile health (mHealth), health information technology (IT), wearable devices, telehealth and telemedicine, and personalized medicine.” Some researchers believe that digital health is nothing other than the cultural transformation healthcare has been going through in the 21st century because of digital health technologies that provide data to both patients and medical professionals. As digital health is burgeoning but research in the area is still inadequate, our paper aims to clear up the definitional confusion and provide an overall picture of digital health companies. We further investigate how business models are designed and differentiated in the emerging digital health sector. Both quantitative and qualitative methods are adopted in the research. For the quantitative analysis, our research data came from two databases, Crunchbase and CBInsights, which are well-recognized information sources for researchers, entrepreneurs, managers and investors. We searched the Crunchbase database for a few keywords in companies' self-descriptions: digital health, e-health and telehealth. A search for “digital health” returned 941 unique results, “e-health” returned 167 companies, and “telehealth” 427.
We also searched the CBInsights database for similar information. After merging the results, removing duplicates and cleaning up the database, we arrived at a list of 1,464 digital health companies. A qualitative method is used to complement the quantitative analysis: we conduct an in-depth case analysis of three successful unicorn digital health companies to understand how business models evolve and to discuss the challenges faced in this sector. Our research returned some interesting findings. For instance, we found that 86% of the digital health startups were founded in the decade since 2010; 75% of the digital health companies have fewer than 50 employees, and almost 50% have fewer than 10. This shows that digital health companies are relatively young and small in scale. In the business model analysis, while traditional healthcare businesses emphasize the so-called “3P” (patient, physicians and payer), digital health companies extend this to “5P” by adding patents, a result of technology requirements (such as the development of artificial intelligence models), and platform, an effective value-creation approach that brings the stakeholders together. Our case analysis details this 5P framework and contributes to the extant knowledge on business models in the healthcare industry.
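The merge-and-deduplicate step described above can be sketched as keying records on a normalised company name. The records and normalisation rules below are hypothetical stand-ins, not the study's actual cleaning pipeline (which would also reconcile fields across the two sources).

```python
def normalise(name):
    """Crude duplicate-detection key: lowercase, strip punctuation and
    common corporate suffixes. Hypothetical rules for illustration."""
    key = "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ").strip()
    for suffix in (" inc", " llc", " ltd"):
        key = key.removesuffix(suffix)
    return key.strip()

# Hypothetical records in the style of the two sources.
crunchbase = [{"name": "HealthLoop, Inc."}, {"name": "TeleCare Ltd"}]
cbinsights = [{"name": "healthloop inc"}, {"name": "MedTrack"}]

merged = {}
for record in crunchbase + cbinsights:
    # keep the first record seen for each normalised name
    merged.setdefault(normalise(record["name"]), record)
print(sorted(merged))
```

Even this crude key collapses the cross-source duplicate ("HealthLoop, Inc." vs "healthloop inc"), which is the kind of reduction that takes the raw keyword hits down to the 1,464 unique companies reported.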

Keywords: digital health, business models, entrepreneurship opportunities, healthcare

Procedia PDF Downloads 182
782 A Conceptual Framework of Integrated Evaluation Methodology for Aquaculture Lakes

Authors: Robby Y. Tallar, Nikodemus L., Yuri S., Jian P. Suen

Abstract:

Research on ecological water resources management addresses many open questions and appears today to be one branch of science that can strongly contribute to the study of complexity (physical, biological, ecological, socio-economic, environmental and other aspects). Much of the existing literature on the different facets of these studies is technical and targeted at specific users. This study combines all of these aspects in an evaluation methodology for aquaculture lakes, with a paradigm that refers to hierarchical theory and to the effects of the specific spatial arrangement of an object within a space or local area. The process of developing the conceptual framework therefore draws the most integrated and applicable concepts from grounded theory. A design for an integrated evaluation methodology for aquaculture lakes is presented. The method is based on the identification of a series of attributes that describe the status of aquaculture lakes using indicators from an aquaculture water quality index (AWQI), an aesthetic aquaculture lake index (AALI) and a rapid appraisal for fisheries index (RAPFISH). The preliminary preparation is accomplished as follows: first, the study area is characterized at different spatial scales. Second, inventory data are collected as a core resource, such as the city master plan, water quality reports from the environmental agency and related government regulations. Third, a ground-checking survey is completed to validate the on-site condition of the study area. To design the integrated evaluation methodology for aquaculture lakes, we finally integrated the indicators into a rating score system called the Integrated Aquaculture Lake Index (IALI). The IALI reflects a compromise among all aspects and responds to the need for concise information about the current status of aquaculture lakes through a comprehensive approach.
The IALI was elaborated as a decision-aid tool for stakeholders to evaluate the impact and contribution of anthropogenic activities on the aquaculture lake environment. The conclusion is that, while there is no denying that aquaculture lakes are under great threat from the pressure of increasing human activities, no evaluation methodology for aquaculture lakes can succeed by insisting on pristine conditions. The IALI developed in this work can be used as an effective, low-cost evaluation methodology of aquaculture lakes for developing countries, because it emphasizes simplicity and understandability: it must communicate to decision makers and experts alike. Moreover, stakeholders need help in perceiving their lakes so that sites can be accepted and valued by local people. For lake development, the accessibility and planning designation of the site are of decisive importance: local people want to know whether the lake's condition is safe and whether it can be used.
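One simple way to combine the AWQI, AALI and RAPFISH sub-indices into a single IALI score is a weighted sum of normalised sub-indices. The scores and weights below are hypothetical (weights of this kind might come from an AHP pairwise-comparison survey, which the keywords suggest), not the study's actual values or aggregation rule.

```python
def composite_index(scores, weights):
    """Weighted aggregation of normalised sub-indices (each on a 0-100
    scale) into a single composite score; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[k] * weights[k] for k in weights)

# Hypothetical sub-index scores and AHP-style weights (illustration only).
scores = {"AWQI": 68.0, "AALI": 55.0, "RAPFISH": 72.0}
weights = {"AWQI": 0.5, "AALI": 0.2, "RAPFISH": 0.3}
iali = composite_index(scores, weights)
print(round(iali, 1))
```

Banding the resulting score (e.g. poor / fair / good) is then what turns the number into the concise, communicable statement of lake status that decision makers and local people are asking for.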

Keywords: aesthetic value, AHP, aquaculture lakes, integrated lakes, RAPFISH

Procedia PDF Downloads 236