Search results for: small object detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8834


734 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with durations of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds was developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on a linear transformation of a training set of signal waveforms using Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayesian theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed.
It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveforms, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After the information from the four voltage levels is applied to recover the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, this result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is equal to 0.93 cm. This is important since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction in the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
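As a rough illustration of the closed-form recovery step described above, the following sketch solves the Tikhonov-regularized least-squares problem explicitly and reconstructs a smooth waveform from eight samples. A generic cosine basis stands in for the paper's PCA basis, and all names and values are hypothetical:

```python
import numpy as np

def tikhonov_recover(A, y, B, lam=1e-8):
    """Closed-form Tikhonov-regularized recovery of a waveform x from
    sparse samples y = A @ x, with x assumed to lie near the span of a
    basis B (in the paper, a PCA basis learned from training signals).

    Solves min_c ||A @ B @ c - y||^2 + lam * ||c||^2 and returns B @ c.
    """
    M = A @ B
    c = np.linalg.solve(M.T @ M + lam * np.eye(B.shape[1]), M.T @ y)
    return B @ c

# Toy demo: a smooth "waveform" living in a 5-dimensional cosine basis
# (a stand-in for the PCA basis) is recovered from just 8 samples.
n, k = 100, 5
t = np.linspace(0.0, 1.0, n)
B = np.cos(np.pi * np.outer(t, np.arange(k)))   # (n, k) basis
c_true = np.array([0.5, -1.0, 0.3, 0.2, -0.1])
x_true = B @ c_true
idx = np.linspace(2, n - 3, 8).astype(int)      # 8 sample positions
A = np.eye(n)[idx]                              # sampling matrix
x_hat = tikhonov_recover(A, A @ x_true, B)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The key design point is that, unlike iterative compressive-sensing solvers, the regularized solution is available in one linear solve, which is what makes the error covariance of the estimate easy to derive.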

Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization

Procedia PDF Downloads 440
733 Identifying Areas on the Pavement Where Rain Water Runoff Affects Motorcycle Behavior

Authors: Panagiotis Lemonakis, Theodoros Alimonakis, George Kaliabetsos, Nikos Eliou

Abstract:

It is well known that certain vertical and longitudinal slopes have to be ensured in order to achieve adequate rainwater runoff from the pavement. Selecting longitudinal slopes between the turning points of the vertical curves that meet the aforementioned requirement does not by itself ensure adequate drainage, because the same condition must also be satisfied along the transition curves. In this way, none of the pavement edge slopes (nor any other spot lying on the pavement) will be opposite to the longitudinal slope of the rotation axis. Horizontal and vertical alignment must be properly combined in order to form a road whose resultant slope does not take small values, and hence checks must be performed in every cross section and at every chainage of the road. The present research investigates rainwater runoff from the road surface in order to identify the conditions under which areas of inadequate drainage are created, to analyze rainwater behavior in such areas, to provide design examples of good and bad drainage zones, and to track down the motorcycle types that might encounter hazardous situations due to the presence of a water film between the pavement and both of their tires, resulting in loss of traction. Moreover, it investigates the combination of longitudinal and cross slope values in critical pavement areas. It should be pointed out that the drainage gradient is analytically calculated for the whole road width and not just as an oblique slope per chainage (the combination of longitudinal grade and cross slope). Lastly, various combinations of horizontal and vertical design are presented, indicating the crucial zones of poor pavement drainage. The key conclusion of the study is that any type of motorcycle will travel inside the area of improper runoff for a certain time frame, which depends on the speed and the trajectory that the rider chooses along the transition curve.
Taking into account that on this section the rider will have to lean the motorcycle and hence reduce the contact area of the tire with the pavement, it is apparent that any variation in the friction value due to the presence of a water film may lead to serious safety problems. The water runoff from the road pavement is improved when, between reverse longitudinal slopes, a crest rather than a sag curve is chosen, and particularly when its edges coincide with the edges of the horizontal curve. Lastly, the results of the investigation have shown that varying the longitudinal slope shifts the center of the poor water runoff area vertically, and that the magnitude of this area increases as the length of the transition curve increases.
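The drainage (resultant) gradient discussed above is the vector sum of the longitudinal grade and the cross slope. A minimal sketch, with hypothetical values:

```python
import math

def resultant_gradient(longitudinal_pct, cross_pct):
    """Drainage (resultant) gradient as the vector sum of the
    longitudinal grade and the cross slope, both in percent."""
    return math.hypot(longitudinal_pct, cross_pct)

# On a transition curve the cross slope rotates through 0%; if the
# longitudinal grade is also small there, the resultant gradient
# collapses and water cannot drain off the pavement.
print(resultant_gradient(3.0, 4.0))   # 5.0 -> adequate drainage
print(resultant_gradient(0.5, 0.0))   # 0.5 -> poor drainage zone
```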

Keywords: drainage, motorcycle safety, superelevation, transition curves, vertical grade

Procedia PDF Downloads 93
732 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces topological order in described social systems, starting with the original concept of autopoiesis by biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems, for exploiting optical systems, and for improving photonic devices. The states of topological order have some interesting properties of topological degeneracy and fractional statistics that reveal the entanglement origin of topological order. Topological ideas in photonics form exciting developments in solid-state materials that are insulating in the bulk yet conduct electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated to the main categories amongst existing groups of ecological phenomena at the intersection of the social and medical sciences. The hypothesis, nevertheless, involves a nonlinear interaction with its natural environment, an 'interactional cycle' for exchanging photon energy with molecules without changes in topology. The engineering topology of a biosensor is based on the excitation boundary of surface electromagnetic waves in photonic band gap multilayer films. The device operation is similar to surface plasmon biosensors, in which a photonic band gap film replaces the metal film as the medium in which surface electromagnetic waves are excited. The use of a photonic band gap film offers sharper surface wave resonance, leading to the potential of greatly enhanced sensitivity. The properties of the photonic band gap material can thus be engineered to operate a sensor at any wavelength and to support a surface wave resonance that ranges up to 470 nm, a wavelength not generally accessible with surface plasmon sensing.
Lastly, the photonic band gap films have robust mechanical functions that offer new substrates for surface chemistry, making it possible to understand the molecular design structure and to create sensing-chip surfaces with different concentrations of DNA sequences in solution, so as to observe and track the surface mode resonance under the influence of processes that take place in the spectroscopic environment. These processes led to the development of several advanced analytical technologies that are automated, real-time, reliable, reproducible, and cost-effective. This results in faster and more accurate monitoring and detection of biomolecules by refractive index sensing, and of antibody-antigen reactions with DNA or protein binding. Ultimately, the molecular frictional properties are adjusted to each other in order to form the unique spatial structure and dynamics of biological molecules, providing an environment for investigating changes due to the pathogenic archival architecture of cell clusters.

Keywords: autopoiesis, photonics systems, quantum topology, molecular structure, biosensing

Procedia PDF Downloads 82
731 The Role of Demographics and Service Quality in the Adoption and Diffusion of E-Government Services: A Study in India

Authors: Sayantan Khanra, Rojers P. Joseph

Abstract:

Background and Significance: This study aims to analyze the role of demographic and service quality variables in the adoption and diffusion of e-government services among users in India. The study proposes to examine users' perceptions of e-government services and to investigate the key variables that are most salient to the Indian populace. Description of the Basic Methodologies: The methodology adopted in this study is hierarchical regression analysis, which helps explore the impact of the demographic variables and the quality dimensions on the willingness to use e-government services in two steps. First, the impact of demographic variables on the willingness to use e-government services is examined. In the second step, quality dimensions are added as inputs to the model to explain variance in excess of the prior contribution of the demographic variables. Present Status: Our study is in the data collection stage, in collaboration with a highly reliable, authentic and adequate source of user data. Assuming that the population of the study comprises all Internet users in India, a large sample of more than 10,000 random respondents is being approached. Data is being collected using an online survey questionnaire. A pilot survey has already been carried out to refine the questionnaire with inputs from an expert in management information systems and a small group of users of e-government services in India. The first three questions in the survey pertain to the Internet usage pattern of a respondent and probe whether the person has used e-government services. If the respondent confirms that he/she has used e-government services, then a total of 15 indicators is used to measure the quality dimensions under consideration and the willingness of the respondent to use e-government services, on a five-point Likert scale.
If the respondent reports that he/she has not used e-government services, then a few optional questions are asked to understand the reason(s) behind this. The last four questions in the survey are dedicated to collecting data on the demographic variables. An Indication of the Major Findings: Based on the extensive literature review carried out to develop several propositions, a research model is prescribed as a starting point. A major outcome expected at the completion of the study is a research model that helps explain the relationship between the demographic variables, the service quality dimensions, and the willingness to adopt e-government services, particularly in an emerging economy like India. Concluding Statement: Governments of emerging economies and other relevant agencies can use the findings from the study in designing, updating, and promoting e-government services to enhance public participation, which in turn would help to improve efficiency, convenience, engagement, and transparency in implementing these services.
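The two-step hierarchical regression described above can be sketched as follows. The data here are synthetic and the variable names hypothetical, but the logic is the same: fit the demographic variables first, then add the quality dimensions and read off the increment in explained variance:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 500
demo = rng.normal(size=(n, 2))      # step 1: demographic variables
quality = rng.normal(size=(n, 3))   # step 2: service-quality dimensions
# synthetic outcome: willingness to use e-government services
y = 0.4 * demo[:, 0] + 0.8 * quality[:, 0] + rng.normal(size=n)

r2_step1 = r_squared(demo, y)                            # demographics only
r2_step2 = r_squared(np.column_stack([demo, quality]), y)  # both blocks
delta_r2 = r2_step2 - r2_step1  # variance explained beyond demographics
```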

Keywords: adoption and diffusion of e-government services, demographic variables, hierarchical regression analysis, service quality dimensions

Procedia PDF Downloads 257
730 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is evidently a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools before using it for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools being used as well as deeper discussions of each of the topics, allowing teams to adapt the process to their skills, preferences and product type.
Teams found the solution very useful and intuitive and experienced significantly less confusion and mistakes with the process than teams who did not use it. Those with a design background found it especially useful for the engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as more examples of how it can be used, creating a loop which helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools to those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers’ needs and wants.

Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer

Procedia PDF Downloads 106
729 Performance of the Abbott RealTime High Risk HPV Assay with SurePath Liquid Based Cytology Specimens from Women with Low Grade Cytological Abnormalities

Authors: Alexandra Sargent, Sarah Ferris, Ioannis Theofanous

Abstract:

The Abbott RealTime High Risk HPV test (RealTime HPV) is one of five assays clinically validated and approved by the English NHS Cervical Screening Programme (NHSCSP) for HPV triage of low grade dyskaryosis and test-of-cure of treated Cervical Intraepithelial Neoplasia. The assay is a highly automated multiplex real-time PCR test for detecting 14 high risk (hr) HPV types, with simultaneous differentiation of HPV 16 and HPV 18 versus non-HPV 16/18 hrHPV. An endogenous internal control ensures sample cellularity and controls for extraction efficiency and PCR inhibition. Both the original cervical specimen collected in SurePath (SP) liquid-based cytology (LBC) medium (BD Diagnostics) and the SP post-gradient cell pellet (SPG) obtained after cytological processing are CE marked for testing with the RealTime HPV test. During the 2011 NHSCSP validation of new tests, only the original aliquot of SP LBC medium was investigated. The residual sample volume left after cytology slide preparation is low and may not always be sufficient for repeat HPV testing or for testing of other biomarkers that may be implemented in testing algorithms in the future. The SPG samples, however, have sufficient volumes to carry out additional testing and the necessary laboratory validation procedures. This study investigates the concordance of RealTime HPV results between matched pairs of original SP LBC medium and SP post-gradient cell pellets (SPG) from women with low grade cytological abnormalities. Matched pairs of SP and SPG samples from 750 women with borderline (N = 392) and mild (N = 351) cytology were available for this study. Both specimen types were processed and tested in parallel for the presence of hrHPV with RealTime HPV according to the manufacturer's instructions. HrHPV detection rates and concordance between test results from matched SP and SPG pairs were calculated.
A total of 743 matched pairs with valid test results on both sample types were available for analysis. An overall agreement of hrHPV test results of 97.5% (k = 0.95) was found for matched SP/SPG pairs; slightly lower concordance (96.9%; k = 0.94) was observed for the 392 pairs from women with borderline cytology compared to the 351 pairs from women with mild cytology (98.0%; k = 0.95). Partial typing results were highly concordant in matched SP/SPG pairs for HPV 16 (99.1%), HPV 18 (99.7%) and non-HPV 16/18 hrHPV (97.0%), respectively. Nineteen matched pairs had discrepant results: 9 from women with borderline cytology and 4 from women with mild cytology were negative on SPG and positive on SP; 3 from women with borderline cytology and 3 from women with mild cytology were negative on SP and positive on SPG. Excellent correlation of hrHPV DNA test results was found between matched pairs of original SP fluid and post-gradient cell pellets from women with low grade cytological abnormalities tested with the Abbott RealTime High Risk HPV assay, demonstrating robust performance of the test and supporting the utility of the assay for cytology triage with both specimen types.
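For readers unfamiliar with the agreement statistics quoted above, the following sketch computes overall agreement and Cohen's kappa from a 2x2 table of paired binary results. The split of concordant pairs used in the demo (400 double-positive, 324 double-negative) is hypothetical; only the 19 discordant pairs and the 743 total come from the abstract:

```python
def agreement_and_kappa(n_pos_pos, n_pos_neg, n_neg_pos, n_neg_neg):
    """Overall agreement and Cohen's kappa for paired binary results.

    Arguments are cell counts of the 2x2 table: (A+,B+), (A+,B-),
    (A-,B+), (A-,B-), where A and B are the two specimen types.
    """
    n = n_pos_pos + n_pos_neg + n_neg_pos + n_neg_neg
    observed = (n_pos_pos + n_neg_neg) / n
    # chance agreement from the marginal positivity rates of each test
    p_a = (n_pos_pos + n_pos_neg) / n
    p_b = (n_pos_pos + n_neg_pos) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# Hypothetical concordant split consistent with 743 pairs, 19 discordant:
obs, kappa = agreement_and_kappa(400, 13, 6, 324)
```

With this split the function reproduces agreement close to the reported 97.5% and a kappa near 0.95, illustrating why high raw agreement and high kappa go together when positives and negatives are both common.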

Keywords: Abbott realtime test, HPV, SurePath liquid based cytology, surepath post-gradient cell pellet

Procedia PDF Downloads 245
728 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage and heat transfer between the various phases in the blast furnace hearth for stable and efficient blast furnace operation. Abnormal drainage behavior may lead to high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit, at which hearth coke and the deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and stable BF performance. Direct measurement of these quantities is not possible due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal / slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake in the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amounts of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace, and the erosion behavior of the tap hole itself.
Heat transfer equations provide the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts critical event timings during tapping, and estimates the expected tapping temperatures of metal and slag at preset time intervals. The model is in use at BF-II of JSPL, India, and its output is regularly cross-checked against actual tapping data, with which it is in good agreement.
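A toy version of the accumulation-and-drainage mass balance might look like the following. The drainage law, coefficients and time scales here are illustrative assumptions, not the authors' model, which additionally couples heat transfer and tap-hole erosion:

```python
import math

def simulate_hearth_level(prod_rate, drain_coeff, tap_open,
                          dt=1.0, steps=240, level0=0.5):
    """Toy mass balance for the liquid level in the hearth.

    d(level)/dt = inflow - outflow, with the outflow modeled as
    drain_coeff * sqrt(level) while the tap hole is open (a crude
    Torricelli-style drainage law; units are arbitrary).
    tap_open(t) is a predicate, True when the furnace is being tapped.
    """
    level, history = level0, []
    for step in range(steps):
        t = step * dt
        outflow = drain_coeff * math.sqrt(max(level, 0.0)) if tap_open(t) else 0.0
        level = max(level + (prod_rate - outflow) * dt, 0.0)
        history.append(level)
    return history

# Accumulate liquids for 2 h, then open the tap hole: the level rises
# steadily and then falls once drainage exceeds production.
hist = simulate_hearth_level(prod_rate=0.004, drain_coeff=0.02,
                             tap_open=lambda t: t >= 120,
                             dt=1.0, steps=240)
```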

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 180
727 Stuttering Persistence in Children: Effectiveness of the Psicodizione Method in a Small Italian Cohort

Authors: Corinna Zeli, Silvia Calati, Marco Simeoni, Chiara Comastri

Abstract:

Developmental stuttering affects about 10% of preschool children; despite the high percentage of natural recovery, a quarter of them will become adults who stutter. An effective early intervention should help those children with a high persistence risk. The Psicodizione method for early stuttering is an Italian indirect behavioral treatment for preschool children who stutter, in which parents act as good guides for communication, modeling their own fluency. In this study, we provide a preliminary measure of the long-term effectiveness of the Psicodizione method on stuttering preschool children with a high persistence risk. Among all Italian children treated with the Psicodizione method between 2018 and 2019, we selected 8 children with at least 3 high-risk persistence factors from the Illinois Prediction Criteria proposed by Yairi and Seery. The factors chosen for the selection were: one parent who stutters (1 pt mother; 1.5 pt father), male gender, ≥ 4 years old at onset, and ≥ 12 months from onset of symptoms before treatment. For this study, the families were contacted after an average period of 14.7 months (range 3 - 26 months). Parental reports were gathered with a standard online questionnaire in order to obtain data reflecting fluency in a wide range of the children's life situations. The minimum worthwhile outcome was set at 'mild evidence' on a 5-point Likert scale (1 mild evidence - 5 high severity evidence). A second group of 6 children, among those treated with the Psicodizione method, was selected as having high potential for spontaneous remission (low persistence risk). The children in this group had to fulfill all the following criteria: female gender, symptoms for less than 12 months (before treatment), age of onset < 4 years old, and neither parent with persistent stuttering. At the time of this follow-up, the children were aged 6-9 years, with a mean of 15 months post-treatment.
Among the children in the high persistence risk group, 2 (25%) no longer stuttered and 3 (37.5%) stuttered mildly, based on parental reports. In the low persistence risk group, the children were aged 4-6 years, with a mean of 14 months post-treatment, and 5 (84%) no longer stuttered (for the past 16 months on average). Thus, 62.5% of the children at high risk of persistence showed at most mild evidence of stuttering after Psicodizione treatment, and 75% of parents reported better fluency than before the treatment. The low persistence risk group appears representative of spontaneous recovery. This study's design could help to better evaluate the success of interventions proposed for stuttering preschool children, and it provides a preliminary measure of the effectiveness of the Psicodizione method on children at high persistence risk.
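The selection rule described above (at least 3 high-risk factors, with the parental factor weighted) can be sketched as a simple scoring function. This is a reading of the abstract's criteria only, not the full Illinois Prediction Criteria:

```python
def persistence_risk_score(mother_stutters, father_stutters, male,
                           onset_age_years, months_since_onset):
    """Sum the weighted high-persistence-risk factors used for cohort
    selection: parent who stutters (1 pt mother, 1.5 pt father), male
    gender, onset at >= 4 years, and >= 12 months since onset."""
    score = 0.0
    if mother_stutters:
        score += 1.0
    if father_stutters:
        score += 1.5
    if male:
        score += 1.0
    if onset_age_years >= 4:
        score += 1.0
    if months_since_onset >= 12:
        score += 1.0
    return score

# A boy whose father stutters, onset at age 4, symptomatic for a year:
print(persistence_risk_score(False, True, True, 4, 12))  # 4.5
```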

Keywords: early treatment, fluency, preschool children, stuttering

Procedia PDF Downloads 211
726 A Model for a Continuous Professional Development Program for Early Childhood Teachers in Villages: Insights from the Coaching Pilot in Indonesia

Authors: Ellen Patricia, Marilou Hyson

Abstract:

Coaching has shown great potential to strengthen the impact of brief group trainings and to help early childhood teachers solve specific problems at work, with the goal of raising the quality of early childhood services. However, there have been some doubts about the benefits that village teachers can receive from coaching. It is perceived that village teachers may struggle with the thinking skills needed to make coaching beneficial. Furthermore, there are reservations about whether principals and supervisors in villages are open to coaching's facilitative approach, as opposed to the directive approach they have been using. As such, the use of coaching to develop the professionalism of early childhood teachers in villages needs to be examined. The Coaching Pilot for early childhood teachers in Indonesian villages provides insights into these issues. The Coaching Pilot is part of the ECED Frontline Pilot, a collaboration between the Government of Indonesia and the World Bank with support from the Australian Government (DFAT). The Pilot started with coordinated efforts with the local governments in two districts to select principals and supervisors, already equipped with basic knowledge about early childhood education, to take part in a two-day coaching training. Afterwards, the participants were asked to complete 25 hours of coaching with early childhood teachers who had participated in the Enhanced Basic Training for village teachers. The participants who completed this requirement were then invited for an assessment of their coaching skills. Following that, a qualitative evaluation was conducted using in-depth interviews and focus group discussion techniques. The evaluation focuses on the impact of the Coaching Pilot in helping the village teachers develop their professionalism, as well as on the sustainability of the intervention.
Results from the evaluation indicated that although their limited education may constrain their thinking skills, village teachers benefited from the coaching they received. Moreover, the evaluation results also suggested that, with enough training and support, principals and supervisors in the villages were able to provide an adequate coaching service for the teachers. Beyond this small start, interest is growing, both within the pilot districts and beyond, due to word of mouth about the benefits the Coaching Pilot has created. The districts where coaching was piloted have planned to continue the coaching program, since a number of early childhood teachers have requested to be coached, and a number of principals and supervisors have requested to be trained as coaches. Furthermore, the Association for Early Childhood Educators in Indonesia has started to adopt coaching into its programs. Although further research is needed, the Coaching Pilot suggests that coaching can positively impact early childhood teachers in villages, and that village principals and supervisors can become a promising source of future coaches. As such, coaching has significant potential to become a sustainable model for a continuous professional development program for early childhood teachers in villages.

Keywords: coaching, coaching pilot, early childhood teachers, principals and supervisors, village teachers

Procedia PDF Downloads 233
725 Association between Maternal Personality and Postnatal Mother-to-Infant Bonding

Authors: Tessa Sellis, Marike A. Wierda, Elke Tichelman, Mirjam T. Van Lohuizen, Marjolein Berger, François Schellevis, Claudi Bockting, Lilian Peters, Huib Burger

Abstract:

Introduction: Most women develop a healthy bond with their children; however, adequate mother-to-infant bonding cannot be taken for granted. Mother-to-infant bonding refers to the feelings and emotions experienced by the mother towards her child. It is an ongoing process that starts during pregnancy and develops during the first year postpartum, and likely throughout early childhood. The prevalence of inadequate bonding ranges from 7 to 11% in the first weeks postpartum. An impaired mother-to-infant bond can cause long-term complications for both mother and child. Very little research has been conducted on the direct relationship between the personality of the mother and mother-to-infant bonding. This study explores the associations between maternal personality and postnatal mother-to-infant bonding. The main hypothesis is that there is a relationship between neuroticism and mother-to-infant bonding. Methods: Data for this study were drawn from the Pregnancy Anxiety and Depression Study (2010-2014), which examined symptoms of, and risk factors for, anxiety and depression during pregnancy and the first year postpartum in 6220 pregnant women who received primary, secondary or tertiary care in the Netherlands. The study was expanded in 2015 to investigate postnatal mother-to-infant bonding. For the current research, 3836 participants were included. During the first trimester of gestation, baseline characteristics as well as personality were measured through online questionnaires. Personality was measured with the NEO Five Factor Inventory (NEO-FFI), which covers the Big Five personality traits (neuroticism, extraversion, openness, altruism and conscientiousness). Mother-to-infant bonding was measured postpartum with the Postpartum Bonding Questionnaire (PBQ). Univariate linear regression analysis was performed to estimate the associations. Results: 5% of the PBQ respondents reported impaired bonding.
A statistically significant association was found between neuroticism and mother-to-infant bonding (p < .001): mothers scoring higher on neuroticism reported a lower score on mother-to-infant bonding. In addition, the personality traits extraversion (b = -.081), openness (b = -.014), altruism (b = -.067) and conscientiousness (b = -.060) were positively associated with mother-to-infant bonding (negative coefficients on the PBQ, on which higher scores indicate poorer bonding). Discussion: This study is one of the first to demonstrate a direct association between the personality of the mother and mother-to-infant bonding. A statistically significant relationship was found between neuroticism and mother-to-infant bonding; however, the percentage of variance predictable by a personality dimension is very small. This study has examined one part of the multi-factorial topic of mother-to-infant bonding and offers more insight into this rarely investigated and complex matter. For midwives, it is important to recognize the risks for impaired bonding and subsequently improve policy for women at risk.
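A univariate linear regression of the kind reported above can be sketched on synthetic data; the numbers below are illustrative, not the study's:

```python
import numpy as np

def univariate_slope(x, y):
    """Slope and intercept of a univariate OLS regression of y on x."""
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    a = y.mean() - b * x.mean()
    return b, a

# Synthetic illustration: higher neuroticism scores predict higher PBQ
# scores (i.e. poorer bonding); all values are made up for the demo.
rng = np.random.default_rng(42)
neuroticism = rng.normal(30, 7, size=1000)   # NEO-FFI-like scores
pbq = 5 + 0.3 * neuroticism + rng.normal(0, 8, size=1000)
b, a = univariate_slope(neuroticism, pbq)    # b recovers ~0.3
```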

Keywords: mother-to-infant bonding, personality, postpartum, pregnancy

Procedia PDF Downloads 353
724 Combination of Silver-Curcumin Nanoparticle for the Treatment of Root Canal Infection

Authors: M. Gowri, E. K. Girija, V. Ganesh

Abstract:

Background and Significance: Among dental infections, inflammation and infection of the root canal are common in all age groups. Currently, the management of root canal infections involves cleaning the canal with powerful irrigants followed by intracanal medicament application. Though these treatments have been in vogue for a long time, root canal failures do occur. Treatment of root canal infections is limited by the anatomical complexity of the canal, in terms of small micrometer-scale volumes, and by poor penetration of drugs. Thus, infections of the root canal remain a challenge that demands the development of new agents that can eradicate C. albicans. Methodology: In the present study, we synthesized and screened a silver-curcumin nanoparticle against Candida albicans. Detailed molecular studies of the silver-curcumin nanoparticle's effect on C. albicans pathogenicity were carried out. Morphological cell damage and the antibiofilm activity of the silver-curcumin nanoparticle on C. albicans were studied using scanning electron microscopy (SEM). Biochemical evidence for membrane damage was obtained using flow cytometry. Further, the antifungal activity of the silver-curcumin nanoparticle was evaluated in an ex vivo dentinal tubule infection model. Results: Screening data showed that the silver-curcumin nanoparticle was active against C. albicans. The silver-curcumin nanoparticle exerted a time-kill effect and a post-antifungal effect. When used in combination with fluconazole or nystatin, the silver-curcumin nanoparticle lowered the minimum inhibitory concentration (MIC) of both drugs. In-depth molecular studies showed that the silver-curcumin nanoparticle inhibited yeast-to-hyphae (Y-H) conversion. Further, SEM images of C. albicans showed that the silver-curcumin nanoparticle caused membrane damage and inhibited biofilm formation. Biochemical evidence for membrane damage was confirmed by increased propidium iodide (PI) uptake in flow cytometry.
Further, the antifungal activity of the silver-curcumin nanoparticle was evaluated in an ex vivo dentinal tubule infection model, which mimics human tooth root canal infection. Confocal laser scanning microscopy showed eradication of C. albicans and a reduction in colony-forming units (CFU) after 24 h of treatment of the infected tooth samples in this model. Conclusion: The results of this study can pave the way for developing new antifungal agents with well-deciphered mechanisms of action; the silver-curcumin nanoparticle is a promising antifungal agent or medicament against root canal infection.
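An MIC decrease in combination, of the kind reported above, is conventionally summarized by the fractional inhibitory concentration (FIC) index from a checkerboard assay. The sketch below uses hypothetical MIC values chosen only for illustration; the abstract does not report the actual concentrations.

```python
def fic_index(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    """FIC index for a two-agent checkerboard combination."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret_fic(fic):
    """Common interpretation thresholds for the FIC index."""
    if fic <= 0.5:
        return "synergy"
    if fic <= 4.0:
        return "no interaction"
    return "antagonism"

# Hypothetical MICs (ug/mL): nanoparticle 16 -> 4 in combination,
# fluconazole 8 -> 1 in combination
fic = fic_index(16, 4, 8, 1)  # 0.25 + 0.125 = 0.375
verdict = interpret_fic(fic)  # below 0.5, i.e. synergy
```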

Keywords: C. albicans, ex vivo dentine model, inhibition of biofilm formation, root canal infection, yeast to hyphae conversion inhibition

Procedia PDF Downloads 201
723 Assessment of the Efficacy of Routine Medical Tests in Screening Medical Radiation Staff in Shiraz University of Medical Sciences Educational Centers

Authors: Z. Razi, S. M. J. Mortazavi, N. Shokrpour, Z. Shayan, F. Amiri

Abstract:

Long-term exposure to low doses of ionizing radiation occurs in radiation health care workplaces. Although doses in the health professions are generally very low, there are still matters of concern. The radiation safety program promotes occupational radiation safety through accurate and reliable monitoring of radiation workers in order to manage radiation protection effectively. To achieve this goal, periodic health examinations have become mandatory. As a result, working populations with a common occupational radiation history are screened on the basis of hematological alterations. This paper calls into question the effectiveness of blood component analysis as a screening program, which is mandatory for medical radiation workers in some countries. This study details the distribution and trends of changes in blood components, including white blood cells (WBCs), red blood cells (RBCs) and platelets, as well as the cumulative doses received from occupational radiation exposure. This study was conducted among 199 participants and 100 control subjects at the medical imaging departments of the central hospital of Shiraz University of Medical Sciences during the years 2006-2010. Descriptive and analytical statistics, with P < 0.05 considered statistically significant, were used for data analysis. The results of this study show no significant difference between the radiation workers and controls regarding WBC and platelet counts over the 4 years. We also found no statistically significant difference between the two groups with respect to RBCs. RBC counts were analyzed separately by gender, because of the lower reference range for normal RBC levels in women compared to men, and no statistically significant difference was observed in either group.
Moreover, a separate evaluation of WBC count against the personnel's working experience and annual exposure dose showed no linear correlation among the three variables. Since the hematological findings were within the range of control levels, it can be concluded that the radiation dose (which did not exceed 7.58 mSv in this study) was too small to produce any quantifiable change in medical radiation workers' blood counts. Thus, the use of a more accurate screening method, based on the working profile of the radiation workers and their accumulated dose, is suggested. In addition, the complexity of radiation-induced effects and the influence of various factors on blood count alteration should be taken into account.
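A two-group comparison of the kind described above can be sketched with Welch's t statistic. The WBC counts below are hypothetical and deliberately constructed so that the group means coincide, mirroring the "no significant difference" finding; they are not data from the study.

```python
import statistics as st

def welch_t(sample1, sample2):
    """Welch's t statistic for two independent samples (unequal variances)."""
    m1, m2 = st.mean(sample1), st.mean(sample2)
    v1, v2 = st.variance(sample1), st.variance(sample2)  # sample variances
    n1, n2 = len(sample1), len(sample2)
    return (m1 - m2) / ((v1 / n1 + v2 / n2) ** 0.5)

# Hypothetical WBC counts (10^3 cells/uL)
workers = [6.1, 5.8, 7.0, 6.4, 6.7, 5.9]
controls = [6.0, 6.3, 6.8, 6.2, 6.5, 6.1]
t_stat = welch_t(workers, controls)  # near zero: group means indistinguishable
```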

Keywords: blood cell count, mandatory testing, occupational exposure, radiation

Procedia PDF Downloads 453
722 Tracing the Developmental Repertoire of the Progressive: Evidence from L2 Construction Learning

Authors: Tianqi Wu, Min Wang

Abstract:

Research investigating language acquisition from a constructionist perspective has demonstrated that language is learned as constructions at various linguistic levels, a process related to factors of frequency, semantic prototypicality, and form-meaning contingency. However, previous research on construction learning tended to focus on clause-level constructions such as verb argument constructions, and few attempts were made to study morpheme-level constructions such as the progressive, which is regarded as a source of acquisition problems for English learners from diverse L1 backgrounds, especially those whose L1, such as German or Chinese, does not have an equivalent construction. To trace the developmental trajectory of Chinese EFL learners' use of the progressive with respect to verb frequency, verb-progressive contingency, and verbal prototypicality and generality, a learner corpus consisting of three sub-corpora representing three different English proficiency levels was extracted from the Chinese Learners of English Corpora (CLEC). As the reference point, a native speakers' corpus extracted from the Louvain Corpus of Native English Essays was also established. All the texts were annotated with the C7 tagset by part-of-speech tagging software. After annotation, all valid progressive hits were retrieved with AntConc 3.4.3, followed by a manual check.
Frequency-related data showed that from the lowest to the highest proficiency level, (1) the type-token ratio increased steadily from 23.5% to 35.6%, approaching the 36.4% observed in the native speakers' corpus, indicating a wider use of verbs in the progressive; (2) the normalized entropy value rose from 0.776 to 0.876, approaching the 0.886 of the native speakers' corpus, revealing that upper-intermediate learners exhibited a more even distribution and more productive use of verbs in the progressive; (3) activity verbs (i.e., verbs with prototypical progressive meanings like running and singing) dropped from 59% to 34%, but non-prototypical verbs such as state verbs (e.g., being and living) and achievement verbs (e.g., dying and finishing) were increasingly used in the progressive. Apart from raw frequency analyses, collostructional analyses were conducted to quantify verb-progressive contingency and to determine which verbs were distinctively associated with the progressive construction. Results were in line with the raw frequency findings: contingency between the progressive and non-prototypical verbs, represented by light verbs (e.g., going, doing, making, and coming), increased as English proficiency increased. Altogether, these findings suggested that beginning Chinese EFL learners were less productive in using the progressive construction: they were constrained by a small set of verbs with concrete and typical progressive meanings (e.g., the activity verbs). As English proficiency increased, their use of the progressive spread to marginal members such as the light verbs.
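The type-token ratio and normalized entropy used above can be computed as follows. The token list is a hypothetical handful of progressive hits for illustration, not data from CLEC.

```python
import math
from collections import Counter

def type_token_ratio(tokens):
    """Distinct verb types divided by total progressive tokens."""
    return len(set(tokens)) / len(tokens)

def normalized_entropy(tokens):
    """Shannon entropy of the verb distribution, scaled to [0, 1]."""
    counts = Counter(tokens)
    n = len(tokens)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts)) if len(counts) > 1 else 0.0

# Hypothetical progressive hits retrieved from a learner sub-corpus
hits = ["run", "run", "run", "sing", "sing", "go", "make", "live"]
ttr = type_token_ratio(hits)       # 5 types / 8 tokens = 0.625
h_norm = normalized_entropy(hits)  # closer to 1 = more even distribution
```

Values near 1 for the normalized entropy indicate that the progressive tokens are spread evenly over many verb types, the pattern reported for the upper-intermediate learners.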

Keywords: construction learning, corpus-based, progressives, prototype

Procedia PDF Downloads 122
721 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research addresses which transit network structure is most suitable to serve specific demand requirements in an ongoing process of urban dispersion. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips, an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transfers are essential to complete most trips. To answer which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the two alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. The dispersion degree is defined in a simple way by assuming that only a central area attracts all trips: if this area is small, the mobility pattern is highly concentrated; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability for each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers.
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city and the transport technology. In the second step, we translate the analytical results to a real case study by relating the dispersion parameters of the model to direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we can identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology yields the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
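As a sketch of the concentration measure, the Gini coefficient over trip attractions per zone can be computed as below; the zone values are hypothetical illustration data, not measurements from the case study.

```python
def gini(values):
    """Gini coefficient of a non-negative distribution.

    0 means perfectly even; values approaching 1 mean highly concentrated.
    Uses the rank-weighted formula on the sorted values.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1) / n

# Hypothetical trip attractions per zone
concentrated = [1, 1, 1, 1, 96]   # one zone attracts almost all trips
dispersed = [20, 20, 20, 20, 20]  # trips spread evenly over zones
```

A city with a high Gini coefficient of trip attractions falls in the "concentrated mobility" regime where the analytical model favors a radial structure.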

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 226
720 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head, so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to achieve a sufficient effect on the boundary layer while keeping the elements robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been realized in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model.
Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by larger roughness elements, especially when these are applied at a height close to the measuring plane. The roughness elements also cause strong fluctuations in the form factor of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
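The integral boundary layer quantities named above can be evaluated numerically from a measured profile. The wall-normal velocity profile below is hypothetical, not PIV data from the TSG runs.

```python
def trapz(y, f):
    """Trapezoidal integration of f(y) over the sample points y."""
    return sum(0.5 * (f[i] + f[i + 1]) * (y[i + 1] - y[i])
               for i in range(len(y) - 1))

def displacement_thickness(y, u, u_inf):
    """delta* = integral of (1 - u/U_inf) dy."""
    return trapz(y, [1.0 - ui / u_inf for ui in u])

def momentum_thickness(y, u, u_inf):
    """theta = integral of (u/U_inf) * (1 - u/U_inf) dy."""
    return trapz(y, [(ui / u_inf) * (1.0 - ui / u_inf) for ui in u])

# Hypothetical profile: wall-normal position y (mm), streamwise velocity u (m/s)
y = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
u = [0.0, 20.0, 32.0, 38.0, 40.0, 40.0]
u_inf = 40.0
delta_star = displacement_thickness(y, u, u_inf)
theta = momentum_thickness(y, u, u_inf)
shape_factor = delta_star / theta  # the form factor H
```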

Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements

Procedia PDF Downloads 301
719 Prevalence and Associated Risk Factors of Age-Related Macular Degeneration in the Retina Clinic at a Tertiary Center in Makkah Province, Saudi Arabia: A Retrospective Record Review

Authors: Rahaf Mandura, Fatmah Abusharkh, Layan Kurdi, Rahaf Shigdar, Khadijah Alattas

Abstract:

Introduction: Age-related macular degeneration (AMD) in older individuals is a serious health issue that severely impacts the quality of life of millions globally. In 2020, AMD was the fourth leading cause of blindness worldwide, and its global prevalence is estimated at around 8.7%. AMD is a progressive disease involving the macular region of the retina, and it has a complex pathophysiology. Dysfunction of the retinal pigment epithelium (RPE) is a crucial step in the pathway leading to irreversible degeneration of photoreceptors, with yellowish, lipid-rich, protein-containing drusen deposits accumulating between Bruch's membrane and the RPE. Furthermore, lipofuscinogenesis, drusogenesis, inflammation, and neovascularization are the four main processes responsible for the formation of the two types of AMD: the wet (exudative, neovascular) and dry (non-exudative, geographic atrophy) types. We retrospectively evaluated the prevalence of AMD among patients visiting the retina clinic at King Abdulaziz University Hospital (Jeddah, Makkah Province, Saudi Arabia) to identify the risk factors commonly associated with AMD. Methods: The records of 3,067 individuals from 2017 to 2021 were reviewed. Of these, 1,935 satisfied the inclusion criteria and were included in this study. We excluded all patients below 18 years of age and those who did not undergo fundus imaging or did not attend their booked appointments, follow-ups, treatments, and referrals. Results: The prevalence of AMD among the patients was 4%. The age of patients with AMD was significantly greater than that of those without AMD (72.4 ± 9.8 years vs. 57.2 ± 15.5 years; p < 0.001). Participants with a family history of AMD tended to have the disease more than those without such a history (85.7% vs. 45%; p = 0.043). Ex- and current smokers were more likely to have AMD than non-smokers (34% and 18.6% vs. 7.2%; p < 0.001).
Patients with hypertension were at a higher risk of developing AMD than those without hypertension (5.5% vs. 2.8%; p = 0.002), and patients without type 1 diabetes were at a higher risk than those with type 1 diabetes (4.2% vs. 0.8%; p = 0.040). In contrast, sex, nationality, type 2 diabetes, and an abnormal lipid profile were not significantly associated with AMD. Regarding the clinical characteristics of the AMD cases, most (70.4%) were of the dry type and affected both eyes (77.2%). The disease duration was ≥5 years in 43.1% of the patients. The chronic diseases most frequently associated with AMD were type 2 diabetes (69.1%), hypertension (61.7%), and dyslipidemia (18.5%). Conclusion: In summary, our single tertiary center study found a 4% prevalence of AMD among retina clinic patients in Jeddah, Saudi Arabia, linked to a wide range of risk factors. Some of these are modifiable and can be addressed to help reduce AMD occurrence. Furthermore, this study has shown the importance of screening and follow-up of family members of patients with AMD to promote early detection and intervention. We recommend conducting further research on AMD in Saudi Arabia. Concerning the study design, a community-based cross-sectional study would be more suitable for assessing the disease's prevalence. Finally, a larger sample size is required for more accurate estimation.
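A comparison such as hypertension vs. AMD status reduces to a 2x2 contingency table. The counts below are hypothetical, chosen only to roughly echo the 5.5% vs. 2.8% proportions reported above; they are not the study's data.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def odds_ratio(a, b, c, d):
    """Odds ratio of the exposure (row 1) for the outcome (column 1)."""
    return (a * d) / (b * c)

# Hypothetical counts: rows = hypertensive / non-hypertensive,
# columns = AMD / no AMD
chi2 = chi2_2x2(44, 756, 33, 1102)   # exceeds 3.84, the 5% critical value (1 df)
orr = odds_ratio(44, 756, 33, 1102)  # > 1: exposure associated with AMD
```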

Keywords: age-related macular degeneration, prevalence, risk factor, dry AMD

Procedia PDF Downloads 30
718 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a separate process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B.
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
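The a1-b1 / a2-b2 embedding example above can be sketched as a trivial pairwise placement routine; the service and machine names are illustrative only, not part of the paper's formal method.

```python
def embed_pairs(a_instances, b_instances):
    """Place each pair (a_i, b_i) of communicating instances on its own machine.

    Each pair then talks over localhost, removing the need for a load
    balancer between services A and B. Assumes both services have the
    same number of instances.
    """
    placement = {}
    for idx, (a, b) in enumerate(zip(a_instances, b_instances), start=1):
        placement[f"m{idx}"] = [a, b]
    return placement

placement = embed_pairs(["a1", "a2"], ["b1", "b2"])
# {"m1": ["a1", "b1"], "m2": ["a2", "b2"]}
```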

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 197
717 Long-Term Variabilities and Tendencies in the Zonally Averaged TIMED-SABER Ozone and Temperature in the Middle Atmosphere over 10°N-15°N

Authors: Oindrila Nath, S. Sridharan

Abstract:

Long-term (2002-2012) temperature and ozone measurements by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) satellite, zonally averaged over 10°N-15°N, are used to study their long-term changes and their responses to the solar cycle, the quasi-biennial oscillation (QBO) and the El Niño Southern Oscillation (ENSO). The region is selected because the continuous SABER data sets provide more accurate long-term trends and variabilities than was possible earlier with lidar measurements over Gadanki (13.5°N, 79.2°E), which are limited to cloud-free nights. Regression analysis of temperature shows a cooling trend of 0.5 K/decade in the stratosphere and of 3 K/decade in the mesosphere. Ozone shows a statistically significant decreasing trend of 1.3 ppmv per decade in the mesosphere, although there is a small positive trend in the stratosphere at 25 km; other than this, no significant ozone trend is observed in the stratosphere. A negative ozone-QBO response (0.02 ppmv/QBO), a positive ozone-solar cycle response (0.91 ppmv/100 SFU) and a negative ozone-ENSO response (0.51 ppmv/SOI) are found mainly in the mesosphere, whereas a positive ozone response to ENSO (0.23 ppmv/SOI) is pronounced in the stratosphere (20-30 km). The temperature response to the solar cycle is more positive (3.74 K/100 SFU) in the upper mesosphere; its response to ENSO is negative around 80 km and positive around 90-100 km, and its response to the QBO is insignificant at most heights. The composite monthly mean of ozone volume mixing ratio shows maximum values of around 10 ppmv during the pre-monsoon and post-monsoon seasons in the middle stratosphere (25-30 km) and in the upper mesosphere (85-95 km).
The composite monthly mean of temperature shows a semi-annual variation, with large values (~250-260 K) in equinox months and lower values in solstice months in the upper stratosphere and lower mesosphere (40-55 km), whereas the SAO becomes weaker above 55 km. The semi-annual variation appears again at 80-90 km, with large values in spring equinox and winter months. In the upper mesosphere (90-100 km), lower temperatures (~170-190 K) prevail in all months except September, when the temperature is slightly higher. The height profiles of the amplitudes of the semi-annual and annual oscillations in ozone show maximum values of 6 ppmv and 2.5 ppmv, respectively, in the upper mesosphere (80-100 km), whereas the SAO and AO in temperature show maximum values of 5.8 K and 4.6 K in the lower and middle mesosphere around 60-85 km. The phase profiles of both the SAO and the AO show downward progressions. These results are being compared with long-term lidar temperature measurements over Gadanki (13.5°N, 79.2°E), and the outcome of the comparison will be presented at the meeting.
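The composite monthly means referred to above are simple averages over all years for each calendar month. The temperatures below are hypothetical illustration values, not SABER retrievals.

```python
from collections import defaultdict

def composite_monthly_mean(records):
    """Average all (year, month, value) records that share a calendar month."""
    by_month = defaultdict(list)
    for year, month, value in records:
        by_month[month].append(value)
    return {m: sum(vs) / len(vs) for m, vs in sorted(by_month.items())}

# Hypothetical monthly temperatures (K) at one height over two years
records = [
    (2002, 1, 252.0), (2002, 4, 258.0), (2002, 7, 250.0),
    (2003, 1, 254.0), (2003, 4, 260.0), (2003, 7, 248.0),
]
composite = composite_monthly_mean(records)
# {1: 253.0, 4: 259.0, 7: 249.0}
```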

Keywords: trends, QBO, solar cycle, ENSO, ozone, temperature

Procedia PDF Downloads 403
716 Testing the Impact of the Nature of Services Offered on Travel Sites and Links on Traffic Generated: A Longitudinal Survey

Authors: Rania S. Hussein

Abstract:

Background: This study aims to determine the evolution of service provision by Egyptian travel sites and how these services change in terms of their level of sophistication over the period of the study, which is ten years. To the author's best knowledge, this is the first longitudinal study that focuses on such an extended time frame. Additionally, the study attempts to determine the popularity of these websites through the number of links to these sites. Links may be viewed as the equivalent of a referral or word of mouth, but in an online context. Both popularity and the nature of the services provided by these websites are used to determine the traffic on these sites. In examining the nature of services provided, the website itself is viewed as an overall service offering that is composed of different travel products and services. Method: This study uses content analysis in the form of a small-scale survey of 30 Egyptian travel agents' websites to examine whether Egyptian travel websites are static or dynamic in terms of the services that they provide and whether they provide simple or sophisticated travel services. To determine the level of sophistication of these travel sites, the nature and composition of the products and services offered by these sites were first examined, using a framework adapted from Kotler's (1997) 'five levels of a product'. The target group for this study consists of companies that handle inbound tourism. Four rounds of data collection were conducted over a period of 10 years: two rounds in 2004 and two rounds in 2014. Data from the travel agents' sites were collected over a two-week period in each of the four rounds. Besides collecting data on website features, data were also collected on the popularity of these websites through a software program called Alexa, which showed the traffic rank and number of links of each site.
Regression analysis was used to test the effect of links and services on websites, as independent variables, on traffic, the dependent variable of this study. Findings: Results indicate that as companies moved from having simple websites with basic travel information to being more interactive, the number of visitors, reflected in traffic, and the popularity of those sites increased, as shown by the number of links. Results also show that travel companies use the web much more for promotion than for distribution, since most travel agents use it basically for information provision. The results of this content analysis study tap an unexplored area and provide useful insights for marketers on how to generate more traffic to their websites: by developing distinctive content on these sites and by improving the visibility of their sites, thus enhancing their popularity, or links to their sites.

Keywords: levels of a product, popularity, travel, website evolution

Procedia PDF Downloads 315
715 Structural and Biochemical Characterization of Red and Green Emitting Luciferase Enzymes

Authors: Wael M. Rabeh, Cesar Carrasco-Lopez, Juliana C. Ferreira, Pance Naumov

Abstract:

Bioluminescence, the emission of light from a biological process, is found in various living organisms including bacteria, fireflies, beetles, fungi and different marine organisms. Luciferase is an enzyme that catalyzes a two-step oxidation of luciferin in the presence of Mg2+ and ATP to produce oxyluciferin, releasing energy in the form of light. The luciferase assay is used in biological research and clinical applications for in vivo imaging, cell proliferation, and protein folding and secretion analysis. The luciferase enzyme consists of two domains: a large N-terminal domain (residues 1-436) connected to a small C-terminal domain (residues 440-544) by a flexible loop that functions as a hinge for opening and closing the active site. The two domains are separated by a large cleft housing the active site, which closes after binding the substrates, luciferin and ATP. Even though all insect luciferases catalyze the same chemical reaction and share 50% to 90% sequence homology and high structural similarity, they emit light of different colors, from green at 560 nm to red at 640 nm. Currently, the majority of structural and biochemical studies have been conducted on green-emitting firefly luciferases. To address the color emission mechanism, we expressed and purified two luciferase enzymes, with blue-shifted green and red emission, from the indigenous Brazilian species Amydetes fanestratus and Phrixothrix, respectively. The two enzymes naturally emit light of different colors, and they are an excellent system for studying the color-emission mechanism of luciferases, as the currently proposed mechanisms are based on mutagenesis studies. Using a vapor-diffusion method and a high-throughput approach, we crystallized and solved the crystal structures of both enzymes, at 1.7 Å and 3.1 Å resolution respectively, using X-ray crystallography.
The free enzyme adopted two open conformations in the crystallographic unit cell that are different from the previously characterized firefly luciferase. The blue-shifted green luciferase crystallized as a monomer, similar to other luciferases reported in the literature, while the red luciferase crystallized as an octamer and was also purified as an octamer in solution. The octamer conformation is the first of its kind for any insect luciferase and might be related to the red emission. Structurally designed mutations confirmed the importance of the transition between the open and closed conformations in the fine-tuning of the color, and the characterization of other interesting mutants is underway.

Keywords: bioluminescence, enzymology, structural biology, x-ray crystallography

Procedia PDF Downloads 318
714 Separate Collection System of Recyclables and Biowaste Treatment and Utilization in Metropolitan Area Finland

Authors: Petri Kouvo, Aino Kainulainen, Kimmo Koivunen

Abstract:

The separate collection system for recyclable wastes in the Helsinki region was ranked second best among European capitals. The collection system includes paper, cardboard, glass, metals and biowaste; residual waste is collected and used in energy production. The collection system, excluding paper, is managed by the Helsinki Region Environmental Services HSY, a public organization owned by four municipalities (Helsinki, Espoo, Kauniainen and Vantaa). Paper collection is handled by the producer responsibility scheme. The efficiency of the collection system in the Helsinki region relies on good coverage of door-to-door collection. All properties with 10 or more dwelling units are required to source-separate biowaste and cardboard, which covers about 75% of the population of the area. The obligation is extended to glass and metal in properties with 20 or more dwelling units. Other success factors include public awareness campaigns and a fee system that encourages recycling. As a result of waste management regulations for source separation of recyclables and biowaste, a recycling rate of nearly 50 percent for household waste has been reached. For households and small and medium-sized enterprises, a fleet of five sorting stations is available, and more than 50 percent of the waste received at the sorting stations is utilized as material. The separate collection of plastic packaging in Finland began in 2016 within the producer responsibility scheme. HSY started supplementing the national bring-point system with door-to-door collection, with pilot operations beginning in spring 2016. The results of the plastic packaging pilot project have been encouraging: by the end of 2016, over 3,500 apartment buildings had joined the pilot, and more than 1,800 tons of plastic packaging had been collected separately.
In the summer of 2015, a novel partial-flow digestion process combining digestion and tunnel composting was adopted for the management of source-separated household and commercial biowaste. The product gas from the digestion process is converted into heat and electricity in a piston engine and an organic Rankine cycle process with very high overall efficiency. This paper describes the collection system and discusses key success factors, main obstacles and lessons learned, as well as the partial-flow process for biowaste management.

Keywords: biowaste, HSY, MSW, plastic packages, recycling, separate collection

Procedia PDF Downloads 212
713 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model

Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis

Abstract:

Magnetic navigation of drugs inside human vessels is an important concept, since the drug is delivered directly to the desired area. Consequently, the quantity of drug required to reach therapeutic levels is reduced, while the drug concentration at targeted sites is increased. Magnetic navigation of drug agents can be achieved with magnetic nanoparticles onto whose surface anti-tumor agents are loaded. The magnetic field required to navigate the particles inside human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors influencing the efficiency of magnetic nanoparticles for magnetically driven biomedical applications are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. The propulsion model considers seven major forces. The first two are magnetic: the force from the MRI's static main magnetic field and the force from the special propulsion gradient coils; the static field is responsible for the aggregation of nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. In addition, the contact forces among the aggregated nanoparticles and with the wall, the Stokes drag force on each particle (only spherical particles are used in this study), gravity, buoyancy, the van der Waals force and Brownian motion are taken into account. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of the particles' motion.
To find the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field; at the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The model can thus simulate the motion of particles as they are navigated by the magnetic field produced by the MRI device and, under the influence of fluid flow, investigate the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. The platform can navigate the particles along the desired trajectory with an efficiency between 80 and 90%; on the other hand, a small number of particles stick to the walls and remain there for the rest of the simulation.
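The simulate-and-evaluate loop described above can be sketched as a simplified evolution strategy. In the sketch below a toy quadratic objective stands in for the OpenFOAM run, and the gradient-field target values are invented for illustration; the study itself uses full CMA-ES, which also adapts a covariance matrix, coupled to the flow solver.

```python
import random

def trajectory_error(gradient):
    # Stand-in for one OpenFOAM run: a toy quadratic objective measuring
    # how far the simulated particles end up from the desired trajectory
    # for a candidate gradient-field setting (target values are invented).
    target = [0.5, -1.2, 0.8]
    return sum((g - t) ** 2 for g, t in zip(gradient, target))

def evolution_strategy(dim=3, pop=12, elite=4, sigma=1.0, iters=60, seed=1):
    # (mu, lambda)-style loop: sample candidate gradient fields around the
    # current mean, run the "simulation", keep the best candidates,
    # recombine, and shrink the search radius. Full CMA-ES would also
    # adapt a covariance matrix; a fixed decay schedule is used here.
    rng = random.Random(seed)
    mean = [0.0] * dim
    for _ in range(iters):
        cands = [[m + sigma * rng.gauss(0.0, 1.0) for m in mean]
                 for _ in range(pop)]
        cands.sort(key=trajectory_error)
        best = cands[:elite]
        mean = [sum(c[i] for c in best) / elite for i in range(dim)]
        sigma *= 0.9
    return mean, trajectory_error(mean)

mean, err = evolution_strategy()
```

Each iteration plays the role of one batch of solver runs: candidates are scored by the distance-to-trajectory objective, and the search distribution contracts around the best gradient-field settings.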

Keywords: artery, drug, nanoparticles, navigation

Procedia PDF Downloads 101
712 Meeting the Health Needs of Adolescents and Young Adults: Developing and Evaluating an Electronic Questionnaire and Health Report Form, for the Health Assessment at Youth Health Clinics – A Mixed Methods Project

Authors: P. V. Lostelius, M. Mattebo, E. Thors Adolfsson, A. Söderlund, Å. Revenäs

Abstract:

Adolescents are vulnerable in healthcare settings. Early detection of poor health in young people is important to support a good quality of life and adult social functioning. Youth Health Clinics (YHCs) in Sweden provide healthcare for young people 13-25 years old. Using an overall mixed-methods approach, the project's main objective was to develop and evaluate an electronic health system, including a health questionnaire, a case report form and an evaluation questionnaire, to assess young people's health risks at an early stage and to increase health and quality of life. In total, 72 young people 16-23 years old, eleven healthcare professionals and eight researchers participated in the three project studies. Interviews with fifteen young people indicated that an electronic health questionnaire should include questions about physical, mental and sexual health and about social support, and specifically about self-harm and suicide risk. The young people said that the questionnaire should be appealing, based on young people's needs and user-friendly, and that it was important to feel safe, both physically and electronically, when responding to the questions. They also found that it had the potential to support the face-to-face meeting between young people and healthcare professionals. The electronic health report system was developed by the researchers through a structured development of the electronic health questionnaire, the construction of a case report form to present the results of the health questions, and an electronic evaluation questionnaire; an information technology company finalized the development by digitalizing the system. Four young people, three healthcare professionals and seven researchers evaluated the usability using interviews and a usability questionnaire. The electronic health questionnaire was found usable for YHCs but needed some clarifications.
Essentially, the system succeeded in capturing the overall health of young people; it should be able to keep the interest of young people and has the potential to contribute to health assessment planning and to young people's self-reflection and sharing of vulnerable feelings with healthcare professionals. In advance of effect studies, a feasibility study was performed by collecting electronic questionnaire data from 54 young people and interview data from eight healthcare professionals to assess the feasibility of the electronic evaluation questionnaire, the case report form and the planned recruitment method. When merging the results, the research group found that the evaluation questionnaire and the health report were feasible for future research. However, the COVID-19 pandemic, commitment challenges and drop-outs affected the recruitment of young people, and some healthcare professionals felt insecure about using computers and electronic devices and worried that their workload would increase. This project contributes knowledge about the development and use of electronic health tools for young people. Before implementation, clinical routines for using the health report system need to be established.

Keywords: adolescent health, developmental studies, electronic health questionnaire, mixed methods research

Procedia PDF Downloads 93
711 Raman Spectroscopic Detection of the Diminishing Toxic Effect of Renal Waste Creatinine by Its in vitro Reaction with Drugs N-Acetylcysteine and Taurine

Authors: Debraj Gangopadhyay, Moumita Das, Ranjan K. Singh, Poonam Tandon

Abstract:

Creatinine is a toxic chemical waste generated by muscle metabolism. Abnormally high levels of creatinine in the body fluids indicate possible malfunction or failure of the kidneys, a condition termed creatinine-induced nephrotoxicity. N-acetylcysteine is an antioxidant drug capable of preventing creatinine-induced nephrotoxicity and is helpful in treating renal failure in its early stages. Taurine is another antioxidant drug that serves a similar purpose. The kidneys have a natural capacity whereby, whenever reactive oxygen species radicals increase in the human body, the kidneys form an antioxidant shell so that these radicals cannot harm kidney function. Taurine plays a vital role in strengthening that shell so that the glomerular filtration rate can remain at its normal level; thus, taurine protects the kidneys against several diseases. However, taurine also has some negative effects on the body, as its chloramine derivative is a weak oxidant. N-acetylcysteine is capable of inhibiting the residual oxidative property of taurine chloramine; therefore, N-acetylcysteine is given to a patient along with taurine, and this combination suppresses the negative effect of taurine. Since both N-acetylcysteine and taurine are affordable, safe and widely available medicines, knowledge of the mechanism of their combined effect on creatinine, the favored route of administration and the proper dose may be highly useful in treating renal patients. Raman spectroscopy is a precise technique for observing minor structural changes that take place when two or more molecules interact; the possibility of complex formation between a drug molecule and an analyte molecule in solution can be explored by analyzing changes in the Raman spectra. The formation of a stable complex of creatinine with N-acetylcysteine in vitro in aqueous solution has been observed with the help of Raman spectroscopy.
From the Raman spectra of mixtures of aqueous solutions of creatinine and N-acetylcysteine in different molar ratios, it is observed that the most stable complex is formed at a 1:1 ratio of creatinine and N-acetylcysteine. Upon drying, the complex obtained is gel-like in appearance and reddish yellow in color; it is hygroscopic and has much better water solubility than creatinine. This highlights that N-acetylcysteine plays an effective role in reducing the toxic effect of creatinine by forming this water-soluble complex, which can be removed through urine. Since the drug taurine is also known to be useful in reducing the nephrotoxicity caused by creatinine, aqueous solutions of taurine, creatinine and N-acetylcysteine were mixed in different molar ratios and investigated by Raman spectroscopy. It is understood that taurine itself does not form a complex with creatinine, as no additional changes are observed in the Raman spectra of creatinine when it is mixed with taurine. However, when creatinine, N-acetylcysteine and taurine are mixed in aqueous solution in a 1:1:3 molar ratio, several changes in the Raman spectra of creatinine suggest a diminishing toxic effect of creatinine in the presence of the antioxidant drugs N-acetylcysteine and taurine.

Keywords: creatinine, creatinine induced nephrotoxicity, N-acetylcysteine, taurine

Procedia PDF Downloads 146
710 Spatial Ecology of an Endangered Amphibian, Litoria raniformis, within Modified Tasmanian Landscapes

Authors: Timothy Garvey, Don Driscoll

Abstract:

Within Tasmania, the growling grass frog (Litoria raniformis) has experienced a rapid contraction in distribution, a decline primarily attributed to habitat loss through landscape modification and improved land drainage. Reductions in seasonal water sources have placed increasing importance on permanent water bodies for reproduction and foraging. Tasmanian agricultural and commercial forestry landscapes often feature small artificial ponds used for watering livestock and fighting wildfires, and improved knowledge of how L. raniformis exploits these anthropogenic ponds is required for better conservation management. We used telemetric tracking to evaluate the spatial ecology of L. raniformis (n = 20) within agricultural and managed forestry sites, with tracking conducted periodically over the breeding season (November/December, January/February, March/April). We investigated (1) potential differences in habitat utilization between agricultural and plantation sites and (2) the post-breeding dispersal of individual frogs. Frogs remained in close proximity to ponds throughout November/December, with individuals occupying vegetatively depauperate water bodies beginning to disperse by January/February. Dispersing individuals traversed exposed plantation understory and agricultural pasture to enter patches of native scrubland, and by March/April all individuals captured at minimally vegetated ponds had retreated to adjacent scrub corridors. Animals found in ponds with dense riparian vegetation were not recorded dispersing, and no difference in behavior was recorded between the sexes. Rising temperatures coincided with increased movement by individuals towards native scrub refugia. The patterns of movement reported here emphasize the significant contribution of man-made water bodies to the conservation of L. raniformis within modified landscapes.
The use of natural scrubland as a cyclical retreat between breeding seasons also highlights the importance of the continued preservation of remnant vegetation corridors. Loss of artificial dams, or of the buffering scrubland in heavily altered landscapes, could lead to the breakdown of the greater L. raniformis meta-population, further threatening its regional persistence.

Keywords: habitat loss, modified landscapes, spatial ecology, telemetry

Procedia PDF Downloads 108
709 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties

Authors: E. Salem

Abstract:

Combustion has long been used as a means of energy extraction. In recent years, however, air pollution has further increased through pollutants such as nitrogen oxides and acids, and there is a need to reduce carbon and nitrogen oxides through lean burning, modified combustors and fuel dilution. A numerical investigation has been carried out into the effectiveness, in terms of computational time and accuracy, of several reduced mechanisms for the combustion of hydrocarbon/air mixtures, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons, changing the equivalence ratios and adding small amounts of hydrogen to the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions and comparing them with experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube 40 cm long and 2.5 cm in diameter. The model mesh included an appropriately fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and varying the equivalence ratio from lean to rich in the fuel blends, and the effects on flame temperature, shape and velocity and on the concentrations of radicals and emissions were observed.
It was determined that the reduced mechanisms provided results within an acceptable range. Variation of the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions, with the highest temperatures obtained in lean conditions (equivalence ratios of 0.5-0.9). Adding hydrogen to the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow and a varying equivalence ratio with hydrogen addition. The production of NO is reduced because combustion takes place in a leaner state, which helps address environmental problems.

Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames

Procedia PDF Downloads 113
708 Characteristics of Rock Glacier Deposits in the Southern Carpathians, Romania

Authors: Petru Urdea

Abstract:

As a distinct part of the mountain system, the rock glacier system is a particular periglacial debris system. Being an open system, it is interconnected with other subsystems, such as the glacial, cliff, rocky slope and talus slope subsystems, which are sources of sediments. One characteristic is that over long periods it acts as a storage unit for debris and ice, and temporarily for snow and water. In the Southern Carpathians, 306 rock glaciers have been identified; the vast majority, 74%, are talus rock glaciers, and 26% are debris rock glaciers. Areas of granites and granodiorites host 49% of all the rock glaciers, representing 61% of the area occupied by Southern Carpathian rock glaciers. This lithological dependence also leaves its mark on the character of the deposits, which bear the imprint of the particular way the rocks respond to physical weathering processes, all under a periglacial regime. While in the granite and granodiorite domains the blocks are large, of metric order and even up to 10 m3, in the metamorphic domain only gneisses can yield similar sizes; amphibolites, amphibolitic schists, micaschists, sericite-chlorite schists and phyllites break into much smaller blocks, of decimetric order, mostly in the form of slabs. In rock glaciers made up of large blocks with an open-work structure, the density and volume of voids between the blocks is greater, while smaller debris generates more compact structures with fewer voids. All of this influences the thermal regime, associated with a particular type of air circulation through the seasons and with the conditions for permafrost formation. The rock glaciers are fed by rock falls, rock avalanches, debris flows and snow avalanches, so their structure is heterogeneous, which is also reflected in their detailed topography.
This heterogeneity is also influenced by the spatial arrangement of the rock bodies in the supply area and, an element that cannot be omitted, by the behavior of the rocks during periglacial weathering. The production of small gelifracts leads to the filling of voids and the appearance of more compact structures, with effects on the creep process. In general, surface deposits are coarser and those at depth are finer, their characteristics being detectable by geophysical methods. Electrical resistivity tomography (ERT) and ground-penetrating radar (GPR) investigations carried out in the Făgăraş, Retezat and Parâng Mountains, each with a different lithological character, allowed the identification of such differentiations, including the presence of permafrost bodies.

Keywords: rock glaciers deposits, structure, lithology, permafrost, Southern Carpathians, Romania

Procedia PDF Downloads 15
707 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observational data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, exhibit strong fluctuations at all time scales and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively recent technique, part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals; it decomposes a signal, in an auto-adaptive way, into a sum of oscillating components named IMFs (Intrinsic Mode Functions), and thereby acts as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, its main limitation is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap. To overcome this problem, J. Gilles proposed an alternative entitled the "Empirical Wavelet Transform" (EWT), which builds a bank of filters from a segmentation of the original signal's Fourier spectrum. The method is based on the idea used in the construction of both the Littlewood-Paley and Meyer wavelets: the heart of the method lies in segmenting the Fourier spectrum based on local maxima detection in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that of EMD and therefore makes it possible to overcome the mode-mixing problem.
On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not associate the detected frequencies with a specific mode of variability, as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on coupling the EMD and EWT techniques by using the spectral density content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained by EMD, EWT and EAWD on time series of total ozone columns recorded at Réunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
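The spectrum-segmentation step at the heart of EWT can be illustrated with a minimal sketch: a naive DFT of a two-tone test signal, detection of the spectral local maxima, and boundaries placed at the midpoints between retained maxima. The test signal, bin counts and midpoint rule below are simplifications chosen for illustration, not the full method of Gilles or the EAWD coupling.

```python
import math

def magnitude_spectrum(x):
    # Naive DFT magnitude (first half only); fine for short demo signals.
    n = len(x)
    return [abs(sum(x[t] * complex(math.cos(2 * math.pi * k * t / n),
                                   -math.sin(2 * math.pi * k * t / n))
                    for t in range(n)))
            for k in range(n // 2)]

def ewt_boundaries(spec, n_segments):
    # Keep the n_segments largest local maxima of the spectrum, then place
    # each segment boundary at the midpoint between consecutive retained
    # maxima -- the simplest variant of the EWT segmentation rule.
    maxima = [k for k in range(1, len(spec) - 1)
              if spec[k] > spec[k - 1] and spec[k] >= spec[k + 1]]
    maxima.sort(key=lambda k: spec[k], reverse=True)
    kept = sorted(maxima[:n_segments])
    return [(a + b) / 2 for a, b in zip(kept, kept[1:])]

# Two-tone test signal with energy at bins 5 and 20 of a 128-sample record.
n = 128
x = [math.sin(2 * math.pi * 5 * t / n)
     + 0.5 * math.sin(2 * math.pi * 20 * t / n) for t in range(n)]
bounds = ewt_boundaries(magnitude_spectrum(x), 2)
```

Each boundary defines one band of the filter bank; in EAWD, the segmentation would instead be guided by the spectral content of the IMFs obtained from EMD.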

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 129
706 Molecular Characterization, Host Plant Resistance and Epidemiology of Bean Common Mosaic Virus Infecting Cowpea (Vigna unguiculata L. Walp)

Authors: N. Manjunatha, K. T. Rangswamy, N. Nagaraju, H. A. Prameela, P. Rudraswamy, M. Krishnareddy

Abstract:

The identification of viruses in cowpea, especially potyviruses, is difficult: even though there are several studies on viruses causing diseases in cowpea, they are hard to distinguish on the basis of symptoms and serological detection. Taking the differentiation of potyviruses as a constraint, the present study was initiated for the molecular characterization, host plant resistance and epidemiology of BCMV infecting cowpea. The etiological agent causing cowpea mosaic was identified as Bean common mosaic virus (BCMV) on the basis of RT-PCR and electron microscopy: a PCR product of approximately 750 bp corresponding to the coat protein (CP) region of the virus, and long flexuous filamentous particles measuring about 952 nm, typical of the genus Potyvirus, were observed under the electron microscope. The genome of the characterized virus isolate had 10054 nucleotides, excluding the 3' terminal poly(A) tail. Comparison of the polyprotein of the virus with those of other potyviruses showed a similar genome organization, with 9 cleavage sites resulting in 10 functional proteins. Pairwise sequence comparison of individual genes showed P1 to be the most divergent and the CP gene the least divergent at the nucleotide and amino acid levels. A phylogenetic tree constructed from multiple sequence alignments of the polyprotein nucleotide and amino acid sequences of cowpea BCMV and other potyviruses showed that the virus is closely related to BCMV-HB, whereas the soybean variant from China (KJ807806) and the NL1 isolate (AY112735) showed 93.8% (5'UTR) and 94.9% (3'UTR) homology, respectively, with other BCMV isolates. The virus was transmitted to different leguminous plant species and produced systemic symptoms under greenhouse conditions. Out of 100 cowpea genotypes screened, three genotypes, viz. IC 8966, V 5 and IC 202806, showed an immune reaction under both field and greenhouse conditions.
Single marker analysis (SMA) revealed that, out of 4 SSR markers linked to BCMV resistance, the M135 marker explained 28.2% of the phenotypic variation (R2); the polymorphic information content (PIC) values of these markers ranged from 0.23 to 0.37. Correlation and regression analyses showed that rainfall and minimum temperature had a significant negative impact on, and a strong relationship with, the aphid population, whereas a weak correlation was observed with disease incidence. Path coefficient analysis revealed that most of the weather parameters contributed indirectly to the aphid population and disease incidence, except minimum temperature. This study helps identify specific gaps in knowledge for researchers who may wish to further analyse the complex interactions between vector, virus and host in relation to the environment. The resistant genotypes identified could be effectively used in a resistance breeding programme.
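The phenotypic-variation statistic reported for the M135 marker is the R2 of a simple regression of the trait on a single marker score, which can be sketched in a few lines. The 0/1 genotype scores and disease ratings below are invented for illustration and are not data from the study.

```python
def r_squared(marker, trait):
    # Fraction of phenotypic variation explained by one marker: the squared
    # Pearson correlation between marker score and trait value, equal to
    # the R^2 of a simple linear regression of trait on marker.
    n = len(marker)
    mx = sum(marker) / n
    my = sum(trait) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(marker, trait))
    sxx = sum((x - mx) ** 2 for x in marker)
    syy = sum((y - my) ** 2 for y in trait)
    return sxy * sxy / (sxx * syy)

# Hypothetical 0/1 marker genotype scores and disease scores for 8 plants.
marker = [0, 0, 0, 1, 1, 1, 0, 1]
trait = [2.1, 1.8, 2.4, 4.0, 3.6, 4.2, 2.0, 3.9]
r2 = r_squared(marker, trait)
```

A marker reported with R2 = 0.282, as above, would account for 28.2% of the trait variance under this model; the toy data here separate the genotype classes much more cleanly, so the sketch yields a higher value.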

Keywords: cowpea, epidemiology, genotypes, virus

Procedia PDF Downloads 229
705 Preparation of β-Polyvinylidene Fluoride Film for Self-Charging Lithium-Ion Battery

Authors: Nursultan Turdakyn, Alisher Medeubayev, Didar Meiramov, Zhibek Bekezhankyzy, Desmond Adair, Gulnur Kalimuldina

Abstract:

In recent years, the development of sustainable energy sources has received extensive research interest due to the ever-growing demand for energy. As an alternative energy source to power small electronic devices, ambient energy harvesting from vibration or human body motion is considered a potential candidate. Despite the enormous progress in battery research over about three decades in terms of safety, life cycle and energy density, batteries have not reached the level needed to conveniently power wearable electronic devices such as smartwatches, bands, hearing aids, etc. For this reason, the development of self-charging power units with excellent flexibility and integrated energy harvesting and storage is crucial. Self-powering is a key idea that makes it possible for a system to operate sustainably, and it is gaining acceptance in many fields, such as sensor networks, the Internet of Things (IoT) and implantable in-vivo medical devices. To address this energy harvesting issue, self-powering nanogenerators (NGs) have been proposed and have proved highly effective. Usually, sustainable power is delivered through energy harvesting and storage devices connected to a power management circuit; for energy storage, the Li-ion battery (LIB) is one of the most effective technologies. Through the movement of Li ions driven by an externally applied voltage, electrochemical reactions at the anode and cathode store electrical energy as chemical energy. In this paper, we present a process that converts mechanical energy into chemical energy directly, combining the NG and the LIB into an all-in-one power system. Electrospinning was used as the initial step in the development of such a system, with a β-PVDF separator. The obtained film showed promising voltage output at different stress frequencies.
X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FT-IR) analyses showed a high percentage of the β phase in the PVDF polymer material. Moreover, it was found that the addition of 1 wt.% BTO (barium titanate) results in higher-quality fibers: when comparing the pure 20 wt.% PVDF solution with the BTO-added one, the latter was more viscous, and the sample was therefore electrospun uniformly, without any beads. Lastly, to test the sensor application of the film, a dedicated testing device was developed, with which the force of a finger tap can be applied at different frequencies so that electrical signal generation is validated.

Keywords: electrospinning, nanogenerators, piezoelectric PVDF, self-charging li-ion batteries

Procedia PDF Downloads 157