Search results for: small peptides
533 Connecting the Dots: Bridging Academia and National Community Partnerships When Delivering Healthy Relationships Programming
Authors: Nicole Vlasman, Karamjeet Dhillon
Abstract:
Over the past four years, the Healthy Relationships Program has been delivered in community organizations and schools across Canada. More than 240 groups have been facilitated in collaboration with 33 organizations, and as a result, 2157 youth have been engaged in the programming. The purpose and scope of the Healthy Relationships Program are to offer sustainable, evidence-based skills through small group implementation to prevent violence and promote positive, healthy relationships in youth. The program development has included extensive networking at regional and national levels. The Healthy Relationships Program is currently being implemented, adapted, and researched within the Resilience and Inclusion through Strengthening and Enhancing Relationships (RISE-R) project. Alongside the project’s research objectives, the RISE-R team has worked to virtually share the ongoing findings of the project through a slow ontology approach. Slow ontology is a practice integrated into project systems and structures whereby slowing the pace and volume of outputs offers creative opportunities. Creative production reveals different layers of success and complements the project's building blocks for sustainability. As a result of integrating a slow ontology approach, the RISE-R team has developed a Geographic Information System (GIS) that documents local landscapes through a Story Map feature and, more specifically, video installations. Video installations capture the cartography of space and place within the context of singular diverse community spaces (case studies). By documenting spaces via human connections, the project captures narratives, which further enhance the voices and faces of the community within the larger project scope. This GIS project aims to create a visual and interactive flow of information that complements the project's mixed-method research approach.
In conclusion, creative project development in the form of a geographic information system can provide learning and engagement opportunities at many levels (i.e., within community organizations and educational spaces or with the general public). In each of these disconnected spaces, fragmented stories are connected through a visual display of project outputs. A slow ontology practice within the context of the RISE-R project documents activities on the fringes and within internal structures, primarily through documenting project successes as further contributions to the Centre for School Mental Health framework (philosophy, recruitment techniques, allocation of resources and time, and a shared commitment to evidence-based products).
Keywords: community programming, geographic information system, project development, project management, qualitative, slow ontology
Procedia PDF Downloads 156
532 An Integrated Framework for Wind-Wave Study in Lakes
Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung
Abstract:
The wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases for coastal structures, such as marinas. This analysis aims to quantify: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) requirements set by regulatory agencies (e.g., WSA section 11 application). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need for an efficient and reliable wave analysis method suitable for smaller-scale marina projects was identified. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving a level of simplicity that fits smaller-scale projects. The present paper aims to describe the TT approach and highlight the key advantages of using this integrated framework in lake marina projects. The core of this methodology is built by integrating wind, water level, bathymetry, and structure geometry data. To respond to the needs of specific projects, several add-on modules have been added to the core of the TT approach. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake by modelling the entire lake (capturing the real lake geometry) instead of using a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, among others); iv) accounting for wave interaction with the lakebed (e.g., bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence from the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully utilized this approach for many years in the Okanagan area.
Keywords: wave modelling, wind-wave, extreme value analysis, marina
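The extreme value analysis named in the keywords is typically applied to a long-term wind record before any wave modelling is run. As an illustration only (the statistical method actually used in the TT approach is not described in the abstract, and the wind speeds below are invented), a minimal Gumbel fit by the method of moments might look like:

```python
import math

def gumbel_fit(annual_maxima):
    """Fit a Gumbel distribution by the method of moments.

    Returns (mu, beta): location and scale parameters.
    """
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    # sample standard deviation (ddof = 1)
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    std = math.sqrt(var)
    beta = std * math.sqrt(6) / math.pi        # scale parameter
    mu = mean - 0.5772156649 * beta            # location (Euler-Mascheroni constant)
    return mu, beta

def return_level(mu, beta, return_period_years):
    """Wind speed exceeded on average once per return period (Gumbel quantile)."""
    p = 1.0 - 1.0 / return_period_years
    return mu - beta * math.log(-math.log(p))

# Hypothetical annual-maximum wind speeds (m/s) for a lake site
speeds = [14.2, 16.8, 13.5, 18.1, 15.0, 17.3, 14.9, 16.0]
mu, beta = gumbel_fit(speeds)
u100 = return_level(mu, beta, 100)  # 100-year design wind speed
```

The design wind speed obtained this way would then drive the lake-wide wave model rather than a simplified fetch formula.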
Procedia PDF Downloads 84
531 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography
Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld
Abstract:
With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns related to the current dose measurement protocols and instrumentation in computed tomography (CT) have arisen. The current methodology of dose evaluation, which is based on the measurement of the integral of a single-slice dose profile using a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done for any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions is known as the MOSkin, which was developed by the Centre for Medical Radiation Physics at the University of Wollongong, and measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter for X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9 and RQT 10.
Finally, the MOSkin was used for the accumulated dose evaluation of scans using a Philips Brilliance 6 CT unit, with comparisons made against the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (16 cm diameter) and exposed in axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results have shown that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivity of a single MOSkin in mV/cGy was 9.208, 7.691 and 6.723 for the RQT 8, RQT 9 and RQT 10 beam qualities, respectively. The energy dependence varied by up to a factor of ±1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy, respectively, which were statistically equivalent at the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures.
Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry
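The accumulated-dose approach described above amounts to numerically integrating the point-dose profile measured by the translated detector and normalizing by the nominal beam width, in the spirit of the CTDI definition. A minimal sketch of that arithmetic (the profile values and the normalization choice below are illustrative assumptions, not the paper's data):

```python
def profile_integral(z_mm, dose_cGy):
    """Trapezoidal integral of a point-dose profile D(z) over table position z (cGy*mm)."""
    total = 0.0
    for i in range(1, len(z_mm)):
        total += 0.5 * (dose_cGy[i] + dose_cGy[i - 1]) * (z_mm[i] - z_mm[i - 1])
    return total

def accumulated_dose(z_mm, dose_cGy, collimation_mm):
    """CTDI-style accumulated dose: profile integral divided by the nominal collimation."""
    return profile_integral(z_mm, dose_cGy) / collimation_mm

# Illustrative profile from a small detector stepped through the beam plane
z = [-20.0, -10.0, -5.0, 0.0, 5.0, 10.0, 20.0]   # mm
d = [0.1, 0.8, 2.5, 3.0, 2.5, 0.8, 0.1]          # cGy
dose = accumulated_dose(z, d, collimation_mm=9.0)
```

Unlike a fixed 100 mm chamber, the integration limits here can be made as wide as the measured scatter tails require.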
Procedia PDF Downloads 311
530 Passive Greenhouse Systems in Poland
Authors: Magdalena Grudzińska
Abstract:
Passive systems allow solar radiation to be converted into thermal energy thanks to appropriate building construction. Greenhouse systems are particularly worth attention, due to the low costs of their realization and strong architectural appeal. The paper discusses the energy effects of using passive greenhouse systems, such as glazed balconies, in an example residential building. The research was carried out for five localities in Poland, belonging to climatic zones that differ in terms of external air temperature and insolation: Koszalin, Poznań, Lublin, Białystok and Zakopane. The analysed apartment had a floor area of approximately 74 m². Three thermal zones were distinguished in the flat: the balcony, the room adjacent to it, and the remaining space, for which various internal conditions were defined. Calculations of the energy demand were made using a dynamic simulation program based on the control volume method. The climatic data were represented by Typical Meteorological Years, prepared on the basis of source data collected from 1971 to 2000. In each locality, the introduction of a passive greenhouse system led to a lower demand for heating in the apartment and a shortening of the heating season. The smallest effectiveness of passive solar energy systems was noted in Białystok. Demand for heating was reduced there by 14.5%, and the heating season remained the longest, due to low external air temperatures and small sums of solar radiation intensity. In Zakopane, energy savings came to 21% and the heating season was reduced to 107 days, thanks to the greatest insolation during winter. The introduction of greenhouse systems caused an increase in cooling demand in the warmer part of the year, but total energy demand declined in each of the discussed places. However, potential energy savings are smaller if the building's annual life cycle is taken into consideration, and range from 5.6% to 14%.
Koszalin and Zakopane are the localities in which the greenhouse system allows the best energy results to be achieved. It should be emphasized that the favourable conditions for introducing greenhouse systems arise from different climatic conditions. In the seaside area (Koszalin) they result from high temperatures in the heating season and the smallest insolation in the summer period, while in the mountainous area (Zakopane) they result from high insolation in the winter and low temperatures in the summer. In the regions of middle and middle-eastern Poland, active systems (such as solar energy collectors or photovoltaic panels) could be more beneficial, due to high insolation during summer. It is assessed that passive systems do not eliminate the need for traditional heating in Poland. They can, however, substantially contribute to lower use of non-renewable fuels and the shortening of the heating season. The calculations showed diversification in the effectiveness of greenhouse systems resulting from climatic conditions, and allowed the identification of areas which are the most suitable for the passive use of solar radiation.
Keywords: solar energy, passive greenhouse systems, glazed balconies, climatic conditions
Procedia PDF Downloads 368
529 Symphony of Healing: Exploring Music and Art Therapy’s Impact on Chemotherapy Patients with Cancer
Authors: Sunidhi Sood, Drashti Narendrakumar Shah, Aakarsh Sharma, Nirali Harsh Panchal, Maria Karizhenskaia
Abstract:
Cancer is a global health concern, causing a significant number of deaths, with chemotherapy being a standard treatment method. However, chemotherapy often induces side effects that profoundly impact the physical and emotional well-being of patients, lowering their overall quality of life (QoL). This research aims to investigate the potential of music and art therapy as holistic adjunctive therapies for cancer patients undergoing chemotherapy, offering non-pharmacological support. This is achieved through a comprehensive review of existing literature focusing on the following themes: stress and anxiety alleviation, emotional expression and coping skill development, transformative changes, and pain management with mood upliftment. A systematic search was conducted using Medline, Google Scholar, and the St. Lawrence College Library, considering original, peer-reviewed research papers published from 2014 to 2023. The review solely incorporated studies focusing on the impact of music and art therapy on the health and overall well-being of cancer patients undergoing chemotherapy in North America. The findings from 16 studies involving pediatric oncology patients, females affected by breast cancer, and general oncology patients show that music and art therapies significantly reduce anxiety (standardized mean difference: -1.10) and improve perceived stress (median change: -4.0) and overall quality of life in cancer patients undergoing chemotherapy. Furthermore, music therapy has demonstrated the potential to decrease anxiety, depression, and pain during infusion treatments (average changes in resilience scale: 3.4 and 4.83 for instrumental and vocal music therapy, respectively). This data calls for consideration of the integration of music and art therapy into supportive care programs for cancer patients undergoing chemotherapy.
Moreover, it provides guidance to healthcare professionals and policymakers, facilitating the development of patient-centered strategies for cancer care in Canada. Further research is needed in collaboration with qualified therapists to examine its applicability and to explore and evaluate patients' perceptions and expectations in order to optimize the therapeutic benefits and overall patient experience. In conclusion, integrating music and art therapy in cancer care promises to substantially enhance the well-being and psychosocial state of patients undergoing chemotherapy. However, due to the small sample sizes in existing studies, further research is needed to bridge the knowledge gap and ensure a comprehensive, patient-centered approach, ultimately enhancing the quality of life (QoL) for individuals facing the challenges of cancer treatment.
Keywords: anxiety, cancer, chemotherapy, depression, music and art therapy, pain management, quality of life
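The standardized mean difference reported above (-1.10 for anxiety) is a pooled-standard-deviation effect size. A minimal sketch of how such a value is computed (the sample scores below are invented for illustration, not data from the reviewed studies):

```python
import math

def standardized_mean_difference(treatment, control):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented anxiety scores: lower after music therapy than in the control group
therapy = [3, 4, 2, 3, 4]
control = [6, 5, 7, 6, 5]
d = standardized_mean_difference(therapy, control)  # negative: therapy group scored lower
```

A negative value, as in the abstract, indicates the intervention group scored lower on the anxiety measure than the comparison group.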
Procedia PDF Downloads 76
528 Understanding Face-to-Face Household Gardens’ Profitability and Local Economic Opportunity Pathways
Authors: Annika Freudenberger, Sin Sokhong
Abstract:
In just a few years, the Face-to-Face Victory Gardens Project (F2F) in Cambodia has developed into a high-impact project that has provided immediate and tangible benefits to local families. This has been accomplished with a relatively hands-off approach that relies on households' own motivation and personal investments of time and resources, which is both unique and impressive in the landscape of NGO and government initiatives in the area. Households have been growing food both for their own consumption and to sell or exchange. Not all targeted beneficiaries are equally motivated or maximizing their involvement, but there is a clear subset of households, particularly those who serve as facilitators, whose circumstances have been transformed as a result of F2F. A number of household factors and contextual economic factors affect families' income generation opportunities. All the households we spoke with became involved with F2F with the goal of selling some proportion of their produce (i.e., not growing exclusively for their own consumption). For some, this income is marginal and supplemental to their core household income; for others, it is substantial and transformative. Some engage directly with customers/buyers in their immediate community, while others sell in larger nearby markets, and others link up with intermediary vendors. All struggle, to a certain extent, to compete in a local economy flooded with cheap produce imported from large-scale growers in neighboring provinces, Thailand, and Vietnam, although households who grow and sell herbs and greens popular in Khmer cuisine have found a stronger local market. Some are content with the scale of their garden, the income they make, and the current level of effort required to maintain it; others would like to expand but face land constraints and water management challenges.
Households making a substantial income from selling their products have achieved success in different ways, making it difficult to pinpoint a clear “model” for replication. Within our small sample of interviewees, it appears that the families with a clear passion for their gardens and high motivation to work hard to bring their products to market have succeeded in doing so. Khmer greens and herbs have been the most successful; they are not high-value crops, but they are fairly easy to grow, and there is constant demand. These crops are also not imported as much, so prices are more stable than those of crops such as long beans. Although we talked to a limited number of individuals, it also appears that successful families either restricted their crops to those that grow well in drought or flood conditions (depending on which affects them most), or already benefit from water management infrastructure, such as water tanks, which helps them diversify their crops and build their resilience.
Keywords: food security, Victory Gardens, nutrition, Cambodia
Procedia PDF Downloads 59
527 Relaxor Ferroelectric Lead-Free Na₀.₅₂K₀.₄₄Li₀.₀₄Nb₀.₈₄Ta₀.₁₀Sb₀.₀₆O₃ Ceramic: Giant Electromechanical Response with Intrinsic Polarization and Resistive Leakage Analyses
Authors: Abid Hussain, Binay Kumar
Abstract:
Environment-friendly lead-free Na₀.₅₂K₀.₄₄Li₀.₀₄Nb₀.₈₄Ta₀.₁₀Sb₀.₀₆O₃ (NKLNTS) ceramic was synthesized by the solid-state reaction method in search of a potential candidate to replace lead-based ceramics such as PbZrO₃-PbTiO₃ (PZT) and Pb(Mg₁/₃Nb₂/₃)O₃-PbTiO₃ (PMN-PT) for various applications. The ceramic was calcined at 850 °C and sintered at 1090 °C. The powder X-Ray Diffraction (XRD) pattern revealed the formation of a pure perovskite phase with tetragonal symmetry and space group P4mm. The surface morphology of the ceramic was studied using the Field Emission Scanning Electron Microscopy (FESEM) technique. Well-defined grains with a homogeneous microstructure were observed, with an average grain size of ~ 0.6 µm. A very large piezoelectric charge coefficient (d₃₃ ~ 754 pm/V) was obtained for the synthesized ceramic, indicating its potential for use in transducers and actuators. In dielectric measurements, a high ferroelectric-to-paraelectric phase transition temperature (Tm ~ 305 °C), a high maximum dielectric permittivity of ~ 2110 (at 1 kHz) and a very small dielectric loss (< 0.6) were obtained, which suggests the utility of NKLNTS ceramic in high-temperature ferroelectric devices. The degree of diffuseness (γ) was found to be 1.61, confirming relaxor ferroelectric behavior in the NKLNTS ceramic. A P-E hysteresis loop was traced, and the spontaneous polarization was found to be ~ 11 μC/cm² at room temperature. The pyroelectric coefficient was very high (p ~ 1870 μC m⁻² °C⁻¹), indicating applicability in pyroelectric detectors, including fire and burglar alarms, infrared imaging, etc. The NKLNTS ceramic showed fatigue-free behavior over 10⁷ switching cycles.
A remanent hysteresis task was performed to determine the true-remanent (or intrinsic) polarization of the NKLNTS ceramic by eliminating non-switchable components; it showed that a major portion (83.10%) of the remanent polarization (Pr) is switchable, which makes NKLNTS ceramic a suitable material for memory switching device applications. A Time-Dependent Compensated (TDC) hysteresis task was carried out, which revealed the resistive-leakage-free nature of the ceramic. The performance of NKLNTS ceramic was found to be superior to many lead-based piezoceramics, which it can therefore effectively replace in piezoelectric, pyroelectric and long-duration ferroelectric applications.
Keywords: dielectric properties, ferroelectric properties, lead-free ceramic, piezoelectric property, solid state reaction, true-remanent polarization
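The degree of diffuseness γ quoted above is conventionally extracted from the modified Curie-Weiss law, a standard relation not spelled out in the abstract:

```latex
% Modified Curie-Weiss law used to quantify the diffuseness of the
% ferroelectric-paraelectric transition:
%   gamma = 1 -> normal ferroelectric (classical Curie-Weiss behavior)
%   gamma = 2 -> ideal relaxor ferroelectric
\[
  \frac{1}{\varepsilon} - \frac{1}{\varepsilon_m}
    = \frac{(T - T_m)^{\gamma}}{C}, \qquad 1 \le \gamma \le 2,
\]
```

where ε_m is the maximum permittivity at temperature T_m and C is a Curie-like constant; γ is obtained as the slope of a log-log fit above T_m. A fitted γ ≈ 1.61, as reported here, places NKLNTS between the two limits, consistent with relaxor behavior.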
Procedia PDF Downloads 136
526 Multi-Institutional Report on Toxicities of Concurrent Nivolumab and Radiation Therapy
Authors: Neha P. Amin, Maliha Zainib, Sean Parker, Malcolm Mattes
Abstract:
Purpose/Objectives: Combination immunotherapy (IT) and radiation therapy (RT) is an actively growing field of clinical investigation, due to promising findings of synergistic effects from immune-mediated mechanisms observed in preclinical studies and clinical data from case reports of abscopal effects. While there are many ongoing trials of combined IT-RT, there are still limited data on toxicity and outcome optimization regarding RT dose, fractionation, and sequencing of RT with IT. Nivolumab (NIVO), an anti-PD-1 monoclonal antibody, has been rapidly adopted in the clinic over the past 2 years, resulting in more patients being considered for concurrent RT-NIVO. Knowledge about the toxicity profile of combined RT-NIVO is important for both the patient and the physician when making educated treatment decisions. The acute toxicity profile of concurrent RT-NIVO was analyzed in this study. Materials/Methods: A retrospective review of all consecutive patients who received NIVO from 1/2015 to 5/2017 at 4 separate centers within two separate institutions was performed. Those patients who completed a course of RT from 1 day prior to the initial NIVO infusion through 1 month after the last NIVO infusion were considered to have received concurrent therapy and were included in the subsequent analysis. Descriptive statistics are reported for patient/tumor/treatment characteristics and observed acute toxicities within 3 months of RT completion. Results: Among 261 patients who received NIVO, 46 (17.6%) received concurrent RT to 67 different sites. The median follow-up was 3.3 (0.1-19.8) months, and 11/46 (24%) were still alive at last analysis. The most common histology, RT prescription, and treatment site were non-small cell lung cancer (23/46, 50%), 30 Gy in 10 fractions (16/67, 24%), and central thorax/abdomen (26/67, 39%), respectively. 79% (53/67) of irradiated sites were treated with a 3D-conformal technique and palliative dose-fractionation.
Grade 3, 4, and 5 toxicities were experienced by 11, 1, and 2 patients, respectively. However, all grade 4 and 5 toxicities were outside of the irradiated area and attributed to the NIVO alone, and only 4/11 (36%) of the grade 3 toxicities were attributed to the RT-NIVO. The irradiated sites in these cases included the brain [2/10 (20%)] and central thorax/abdomen [2/19 (10.5%)], including one unexpected grade 3 pancreatitis following stereotactic body RT to the left adrenal gland. Conclusions: Concurrent RT-NIVO is generally well tolerated, though with potentially increased rates of severe toxicity when irradiating the lung, abdomen, or brain. Pending more definitive data, we recommend counseling patients on the potentially increased rates of side effects from combined immunotherapy and radiotherapy to these locations. Future prospective trials assessing fractionation and sequencing of RT with IT will help inform combined therapy recommendations.
Keywords: combined immunotherapy and radiation, immunotherapy, Nivolumab, toxicity of concurrent immunotherapy and radiation
Procedia PDF Downloads 393
525 Training Manual of Organic Agriculture Farming for the Farmers: A Case Study from Kunjpura and Surrounding Villages
Authors: Rishi Pal Singh
Abstract:
In the Indian scenario, organic agriculture is growing through the conscious efforts of inspired people who are able to create the most promising relationship between the earth and humankind. Nowadays, the major challenges are its entry into the policy-making framework, its entry into the global market, and weak sensitization among farmers. However, during the last two decades, the contamination of environment and food linked with poor agricultural practices has turned farmers' attention toward organic farming. In view of the above, a small-scale project has been set up to train 20 farmers from Kunjpura and the surrounding villages in organic farming. This project has been running for the last three crop cycles (starting from October 2016), and it has been found that organic farming can meet both market demands and the overall development of rural areas. Farmers in this project work on principles that avoid demanding unreasonable quantities of water, mining the soil, or destroying microbes and other organisms. As per Organic Monitor estimates, global sales have reached into the billions. In this initiative, wheat and rice were considered first for farming, and it was observed that crop production has grown almost 10-15% per year since the last crop cycle. This is linked not only with profit or loss but also emphasizes health, ecology, fairness and care for soil enrichment. Several techniques were used, such as the use of biological fertilizers instead of chemicals, multiple cropping, temperature management, rainwater harvesting, development of farmers' own seed, vermicompost and the integration of animals. In the first year, to increase the fertility of the land, legumes (moong, cowpea and red gram) were grown in strips for 60, 90 and 120 days.
Simultaneously, a mixture of compost and vermicompost in a 2:1 proportion was applied at the rate of 2.0 tons per acre, enriched with 5 kg of Azotobacter and 5 kg of Rhizobium biofertilizer. To supply the required phosphorus, 250 kg of rock phosphate was used. After one month, jivamrut can be applied with the irrigation water or during rainy days. In the next season, the compost-vermicompost mixture was used at 2.5 ton/ha for all types of crops. After the completion of this treatment, the soil is ready for high-value ordinary/horticultural crops. The amounts of the above-stated biofertilizers, compost-vermicompost and rock phosphate may be increased where higher fertilizer rates are required. The significance of the project is that the farmers now believe in cultural alternatives (use of their own disease-free seed, organic pest management), maintenance of biodiversity, crop rotation practices and the health benefits of organic farming. This type of organic farming project should be installed at the gram/block/district administration level.
Keywords: organic farming, Kunjpura, compost, bio-fertilizers
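The 2:1 compost-to-vermicompost mixture applied at 2.0 tons per acre works out, per acre, to roughly 1.33 tons of compost and 0.67 tons of vermicompost. A small helper for this arithmetic (the function name and interface are illustrative, not part of any training manual):

```python
def mixture_amounts(total_tons, ratio_compost=2, ratio_vermi=1):
    """Split a total application rate into compost and vermicompost by ratio."""
    parts = ratio_compost + ratio_vermi
    return (total_tons * ratio_compost / parts,
            total_tons * ratio_vermi / parts)

# First-year rate: 2.0 tons per acre of a 2:1 mixture
compost, vermi = mixture_amounts(2.0)
```

The same split applies to the next-season rate of 2.5 ton/ha; only the total changes.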
Procedia PDF Downloads 197
524 Rural Entrepreneurship as a Response to Climate Change and Resource Conservation
Authors: Omar Romero-Hernandez, Federico Castillo, Armando Sanchez, Sergio Romero, Andrea Romero, Michael Mitchell
Abstract:
Environmental policies for resource conservation in rural areas include subsidies on services and social programs to cover living expenses. The government's expectation is that rural communities who benefit from social programs, such as payment for ecosystem services, are given an incentive to conserve natural resources and preserve natural sinks for greenhouse gases. At the same time, global climate change has affected the lives of people worldwide. The capability to adapt to global warming depends on the available resources and the standard of living, putting rural communities at a disadvantage. This paper explores whether rural entrepreneurship can represent a solution to resource conservation and global warming adaptation in rural communities. The research focuses on a sample of two coffee communities in Oaxaca, Mexico. Researchers used geospatial information contained in aerial photographs of the geographical areas of interest. Households were identified in the photos via their roofs and georeferenced via coordinates. From the household population, a random selection of roofs was performed, and each selected household received a visit. A total of 112 surveys were completed, including questions on socio-demographics, perception of climate change, and adaptation activities. The population includes two groups of study: entrepreneurs and non-entrepreneurs. Data were sorted, filtered, and validated. The analysis includes descriptive statistics for exploratory purposes and a multi-regression analysis. Outcomes from the surveys indicate that coffee farmers who demonstrate entrepreneurship skills and hire employees are more eager to adapt to climate change despite the extremely adverse socioeconomic conditions of the region. We show that farmers with entrepreneurial tendencies are more creative in using innovative farm practices such as the planting of shade trees, the use of live fencing instead of wires, and watershed protection techniques, among others.
This result counters the notion that small farmers are at the mercy of climate change and have no possibility of adapting to a changing climate. The study also points to roadblocks that farmers face when coping with climate change. Among those roadblocks are a lack of extension services, access to credit, and reliable internet, all of which reduce access to vital information needed in today’s constantly changing world. Results indicate that, under some circumstances, funding and supporting entrepreneurship programs may provide more benefit than traditional social programs.
Keywords: entrepreneurship, global warming, rural communities, climate change adaptation
Procedia PDF Downloads 241
523 Analysis of Interparticle Interactions in High Waxy-Heavy Clay Fine Sands for Sand Control Optimization
Authors: Gerald Gwamba
Abstract:
Formation and oil well sand production is one of the greatest and oldest concerns for the oil and gas industry. The production of sand particles may vary from very small and limited amounts to highly elevated levels, which have the potential to plug the pore spaces near the perforation points or to block production through surface facilities. Therefore, a timely and reliable investigation of the conditions leading to the onset of sanding, and quantification of sanding while producing, is imperative. The challenges of sand production are even more elevated when producing from waxy and heavy wells with clay fine sands (WHFC). Existing research argues that waxy and heavy hydrocarbons exhibit widely differing characteristics, with waxy crudes more paraffinic and heavy crude oils more asphaltenic. Moreover, the combined effect of WHFC conditions presents more complexity in production than the individual effects do, as it amounts to a consolidation of opposing forces. Research on a combined high-WHFC system could therefore give a better representation of this cumulative effect, which in essence is more comparable to field conditions; a one-sided view based on individual effects on sanding has been argued to be, to some extent, misrepresentative of actual field conditions, since all factors act in combination. Recognizing the limited customized research on sand production with the combined WHFC effect, our research applies the Design of Experiments (DOE) methodology, based on the latest literature, to analyze the relationship between various interparticle factors in relation to selected sand control methods. Our research aims to develop a better understanding of how the combined effect of interparticle factors, including strength, cementation, particle size and production rate, among others, could assist in the design of an optimal sand control system for WHFC well conditions.
In this regard, we seek to answer the following research question: how does the combined effect of interparticle factors affect the optimization of sand control systems for WHFC wells? Results from experimental data collection will inform a better-justified sand control design for WHFC. In doing so, we hope to contribute to earlier contrasting arguments that sand production could potentially enable self-enhancement of well permeability through the establishment of new flow channels created by the loosening and detachment of sand grains. We hope that our research will contribute to future sand control designs capable of adapting to flexible production adjustments in controlled sand management. This paper presents results that are part of ongoing research towards the authors' PhD project on the optimization of sand control systems for WHFC wells.
Keywords: waxy-heavy oils, clay-fine sands, sand control optimization, interparticle factors, design of experiments
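A DOE study of the kind described above typically starts by enumerating a two-level full factorial design over the candidate factors. The sketch below is a minimal illustration: the factor names and low/high levels are assumptions for demonstration, not the levels used in the study.

```python
from itertools import product

# Hypothetical interparticle factors with assumed low/high levels;
# the actual factors and ranges investigated in the study may differ.
factors = {
    "cementation": ("weak", "strong"),
    "particle_size_um": (75, 250),
    "production_rate_bpd": (500, 2000),
    "strength_mpa": (5, 20),
}

def full_factorial(factors):
    """Enumerate every low/high combination (a 2^k full factorial design)."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

runs = full_factorial(factors)
print(len(runs))  # 2^4 = 16 experimental runs
```

Each entry in `runs` is one experimental condition; main effects and interactions between factors can then be estimated from the measured sanding response at each run.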
Procedia PDF Downloads 133
522 Self-Supervised Learning for Hate-Speech Identification
Authors: Shrabani Ghosh
Abstract:
Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious, so automatic methods based on machine learning are the only practical alternative. Previous works have performed sentiment analysis over social media in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised setting has also been explored in NLP, where the source domain and the target domain differ. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers such as BERT and RoBERTa are first further pre-trained on a masked language modeling (MLM) task in an unsupervised manner and then fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremism in varying degrees. In the domain adaptation process, Twitter data is used as the source domain, and Gab data is used as the target domain. The performance of domain adaptation also depends on cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity. As expected, in-domain distances are small, while between-domain distances are large. Previous findings show that a pretrained masked language model (MLM) fine-tuned on a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, and its out-of-domain performance on Gab data drops to 56.53%. Recently, self-supervised learning has attracted considerable attention, as it is more applicable when labeled data are scarce.
A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. Self-supervised attention learning approaches show better performance because they exploit extracted context words during training. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework first classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier, as well as with optimized outcomes obtained from different optimization techniques.
Keywords: attention learning, language model, offensive language detection, self-supervised learning
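Two of the domain-similarity measures named above, cosine distance and MMD, can be sketched in a few lines. The following is a minimal, illustrative implementation of cosine distance and a biased squared-MMD estimate with an RBF kernel; the kernel bandwidth and the toy "embeddings" are assumptions for demonstration, not values or data from the study.

```python
import math
import random

def cosine_distance(u, v):
    """1 minus the cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def mmd2(X, Y, gamma=0.05):
    """Biased estimate of squared Maximum Mean Discrepancy (RBF kernel)."""
    def k(x, y):
        return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
    kxx = sum(k(a, b) for a in X for b in X) / len(X) ** 2
    kyy = sum(k(a, b) for a in Y for b in Y) / len(Y) ** 2
    kxy = sum(k(a, b) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2.0 * kxy

# Toy embeddings: two samples from one domain, one from a shifted domain.
random.seed(0)
src_a = [[random.gauss(0, 1) for _ in range(8)] for _ in range(30)]
src_b = [[random.gauss(0, 1) for _ in range(8)] for _ in range(30)]
tgt = [[random.gauss(3, 1) for _ in range(8)] for _ in range(30)]
# In-domain MMD should be small, cross-domain MMD large, as the abstract notes.
```

On the toy data, `mmd2(src_a, src_b)` comes out far smaller than `mmd2(src_a, tgt)`, mirroring the expectation that in-domain distances are small and between-domain distances are large.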
Procedia PDF Downloads 107
521 Fluorescence-Based Biosensor for Dopamine Detection Using Quantum Dots
Authors: Sylwia Krawiec, Joanna Cabaj, Karol Malecha
Abstract:
Nowadays, progress in the field of analytical methods is of great interest for reliable biological research and medical diagnostics. Classical techniques of chemical analysis, despite many advantages, permit neither immediate results nor automation of measurements. Chemical sensors have displaced the conventional analytical methods: sensors combine precision, sensitivity, fast response, and the possibility of continuous monitoring. A biosensor is a chemical sensor that, in addition to a converter, also possesses a biologically active material, which is the basis for the detection of specific chemicals in the sample. Each biosensor device mainly consists of two elements: a sensitive element, where receptor-analyte recognition occurs, and a transducer element, which receives the signal and converts it into a measurable one. Based on these two elements, biosensors can be divided into two categories: by the recognition element (e.g., immunosensors) and by the transducer (e.g., optical sensors). The operation of an optical sensor is based on measuring quantitative changes in the parameters characterizing light radiation. The most often analyzed parameters include amplitude (intensity), frequency, and polarization. In a direct method, changes in the optical properties of a compound that reacts with the biological material coated on the sensor are analyzed; in an indirect method, indicators are used that change their optical properties due to the transformation of the species under test. The most commonly used dyes in this method are small molecules with an aromatic ring, such as rhodamine; fluorescent proteins, for example green fluorescent protein (GFP); or nanoparticles such as quantum dots (QDs). Quantum dots have, in comparison with organic dyes, much better photoluminescent properties, better bioavailability, and chemical inertness. They are semiconductor nanocrystals 2-10 nm in size.
This very limited number of atoms and the 'nano' size give QDs their highly fluorescent properties. Rapid and sensitive detection of dopamine is extremely important in modern medicine. Dopamine is a very important neurotransmitter, which occurs mainly in the brain and central nervous system of mammals. Dopamine is responsible for the transmission of information through the nervous system and plays an important role in processes of learning and memory. Detection of dopamine is significant for diseases associated with the central nervous system, such as Parkinson's disease or schizophrenia. The developed optical biosensor for the detection of dopamine uses graphene quantum dots (GQDs). In such a sensor, dopamine molecules coat the GQD surface; as a result, quenching of fluorescence occurs due to Förster Resonance Energy Transfer (FRET). Changes in fluorescence correspond to specific concentrations of the neurotransmitter in the tested sample, so it is possible to accurately determine the concentration of dopamine in the sample.
Keywords: biosensor, dopamine, fluorescence, quantum dots
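The final step, mapping a measured fluorescence change onto a concentration, is commonly done with a Stern-Volmer-type calibration. The sketch below assumes the simple Stern-Volmer relation F0/F = 1 + Ksv·[Q] and an illustrative quenching constant; the study's actual calibration curve and constants are not given in the abstract.

```python
# Stern-Volmer calibration sketch: F0 / F = 1 + Ksv * [Q]
# KSV below is an illustrative quenching constant, not one reported by the study.
KSV = 0.12  # 1/uM (assumed)

def fluorescence(f0, concentration_um, ksv=KSV):
    """Quenched fluorescence intensity for a given quencher concentration."""
    return f0 / (1.0 + ksv * concentration_um)

def concentration(f0, f, ksv=KSV):
    """Invert the Stern-Volmer relation to estimate concentration (uM)."""
    return (f0 / f - 1.0) / ksv
```

For example, an unquenched intensity F0 of 1000 quenched to F = 250 corresponds, under these assumed constants, to a dopamine concentration of 25 uM; the two functions are exact inverses of each other.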
Procedia PDF Downloads 365
520 Stochastic Nuisance Flood Risk for Coastal Areas
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts' experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for the impact of torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas of the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm, combined with high tide and sea level rise, temporarily exceeds a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain falling at a rate greater than 0.3 inches per hour, or three inches in a single day. Data from the Florida Climate Center for 1970 to 2020 show 371 days with more than three inches of rain over 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method uses the Failure Mode and Effects Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day, 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates.
Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service's National Resources Inventory (NRI), and locally by the South Florida Water Management District (SFWMD), track development and land use/land cover changes over time. The intent is to include temporal trends in population density growth and their impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion raises local municipalities' awareness of their flood-risk assessment and gives insight into flood management actions and watershed development.
Keywords: flood risk, nuisance flooding, urban flooding, FMEA
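In FMEA terms, the nuisance-flood risk described above can be expressed as the product of a consequence score and a probability score. The sketch below is a minimal illustration: the historical torrential-rain frequency comes from the 371 rain days in 612 months quoted in the abstract, but the 1-10 scoring scales and example scores are assumptions, not the study's values.

```python
# Historical frequency of torrential-rain days (figures from the abstract):
# 371 days with > 3 inches of rain over 612 months (1970-2020).
events, months = 371, 612
monthly_rate = events / months  # ~0.61 torrential-rain days per month

def nuisance_flood_risk(conf_score, ponf_score):
    """FMEA-style risk: Consequence of Nuisance Flooding (CoNF) score
    times Probability of Nuisance Flooding (PoNF) score.
    Scores on an assumed 1-10 scale, as in common FMEA practice."""
    return conf_score * ponf_score

# Hypothetical scores for a single-family home:
risk = nuisance_flood_risk(conf_score=4, ponf_score=7)  # risk score of 28
```

A full FMEA typically also includes a detectability factor; the study's formulation of risk as a function of CoNF and PoNF is kept here in its simplest two-factor form.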
Procedia PDF Downloads 100
519 Comparative Vector Susceptibility for Dengue Virus and Their Co-Infection in A. aegypti and A. albopictus
Authors: Monika Soni, Chandra Bhattacharya, Siraj Ahmed Ahmed, Prafulla Dutta
Abstract:
Dengue is now a globally important arboviral disease. Extensive vector surveillance has already established A. aegypti as the primary vector, but A. albopictus is now accelerating the situation through gradual adaptation to human surroundings. Global destabilization and a gradual climatic shift, with rising temperatures, have significantly expanded the geographic range of these species. These versatile vectors also host the Chikungunya, Zika, and yellow fever viruses. The biggest challenge now faced by endemic countries is the upsurge in reported co-infections with multiple serotypes and virus co-circulation. To foster vector control interventions and mitigate disease burden, there is a pressing need for knowledge on vector susceptibility and viral tolerance in response to multiple infections. To address our understanding of transmission dynamics and reproductive fitness, both vectors were exposed to single and dual combinations of all four dengue serotypes by artificial feeding and followed up to the third generation. Artificial feeding revealed a significant difference in feeding rate between the two species, A. albopictus being a poor artificial feeder (35-50%) compared to A. aegypti (95-97%). Robust sequential screening of viral antigen in mosquitoes was carried out by Dengue NS1 ELISA, RT-PCR, and quantitative PCR. To observe viral dissemination in different mosquito tissues, an indirect immunofluorescence assay was performed. Results showed that both vectors were initially infected with all dengue serotypes (1-4) and their co-infection combinations (D1 and D2, D1 and D3, D1 and D4, D2 and D4). In the case of DENV-2, there was a significant difference in the peak titer observed at the 16th day post infection. When exposed to dual infections, A. aegypti supported all combinations of virus, whereas A. albopictus maintained only single infections in successive days. There was a significant negative effect on the fecundity and fertility of both vectors compared to the control (P(ANOVA) < 0.001).
In the case of dengue-2-infected mosquitoes, fecundity in the parent generation was significantly higher (P(Bonferroni) < 0.001) for A. albopictus compared to A. aegypti, but there was a complete loss of fecundity from the second to the third generation for A. albopictus. It was observed that A. aegypti frequently becomes infected with multiple serotypes even at low viral titres, compared to A. albopictus. Possible reasons for this could be the presence of Wolbachia infection in A. albopictus, the mosquito innate immune response, small RNA interference, etc. Based on these observations, it can be anticipated that transovarial transmission may not be an important phenomenon for clinical disease outcome, given the absence of viral positivity by the third generation. Also, Dengue NS1 ELISA can be used for preliminary viral detection in mosquitoes, as more than 90% of the samples were found positive compared to RT-PCR and viral load estimation.
Keywords: co-infection, dengue, reproductive fitness, viral quantification
Procedia PDF Downloads 203
518 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Second, the exploration is extended to filter anisotropy to address its impact on SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated.
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions of vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM worsen, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. These findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
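The core idea behind deconvolution modeling can be illustrated in one dimension. The sketch below applies a simple three-point periodic filter to a smooth signal and recovers the unfiltered field by van Cittert iteration, a classic approximate-inverse scheme; this is a generic illustration of approximate deconvolution, not the specific DDM formulation or the filters used in the study.

```python
import math

def apply_filter(u, w=(0.25, 0.5, 0.25)):
    """Apply a simple periodic three-point filter (discrete Gaussian-like)."""
    n = len(u)
    return [w[0] * u[(i - 1) % n] + w[1] * u[i] + w[2] * u[(i + 1) % n]
            for i in range(n)]

def van_cittert(f, iterations=30):
    """Approximate deconvolution: u_{k+1} = u_k + (f - G u_k), with u_0 = f."""
    u = list(f)
    for _ in range(iterations):
        filtered = apply_filter(u)
        u = [ui + (fi - gi) for ui, fi, gi in zip(u, f, filtered)]
    return u

# Smooth periodic test signal: filtering damps it, deconvolution restores it.
n = 16
signal = [math.sin(2 * math.pi * i / n) for i in range(n)]
blurred = apply_filter(signal)
recovered = van_cittert(blurred)
```

For this single low-wavenumber mode the filter's transfer function is close to one, so the iteration converges rapidly; modes near the grid cutoff, where the transfer function vanishes, cannot be recovered, which is precisely why the filter-to-grid ratio matters for SFS reconstruction accuracy.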
Procedia PDF Downloads 76
517 Aerosol Characterization in a Coastal Urban Area in Rimini, Italy
Authors: Dimitri Bacco, Arianna Trentini, Fabiana Scotto, Flavio Rovere, Daniele Foscoli, Cinzia Para, Paolo Veronesi, Silvia Sandrini, Claudia Zigola, Michela Comandini, Marilena Montalti, Marco Zamagni, Vanes Poluzzi
Abstract:
The Po Valley, in the north of Italy, is one of the most polluted areas in Europe. The air quality of the area is linked not only to anthropogenic activities but also to its geographical characteristics and stagnant weather conditions, with frequent inversions, especially in the cold season. Even the coastal areas present high values of particulate matter (PM10 and PM2.5), because the area enclosed between the Adriatic Sea and the Apennines does not favor the dispersion of air pollutants. The aim of the present work was to identify the main sources of particulate matter in Rimini, a tourist city in northern Italy. Two sampling campaigns were carried out in 2018, one in winter (60 days) and one in summer (30 days), at 4 sites: an urban background, a city hotspot, a suburban background, and a rural background. The samples are characterized by the concentrations of the ionic components of the particulate and of the main anhydrosugars, in particular levoglucosan, a marker of biomass burning, because one of the most important anthropogenic sources in the area, both in winter and, surprisingly, even in summer, is biomass burning. Furthermore, three sampling points were chosen in order to maximize the contribution of a specific biomass source: a point in a residential area (domestic cooking and domestic heating), a point in the agricultural area (weed fires), and a point in the tourist area (restaurant cooking). At these sites, the analyses were enriched with the quantification of the carbonaceous component (organic and elemental carbon) and with measurements of the particle number concentration and aerosol size distribution (6-600 nm). The results showed a very significant impact of biomass combustion due to domestic heating in the winter period, along with many intense peaks attributable to episodic wood fires.
In the summer season, an appreciable signal linked to biomass combustion was also measured, although much less intense than in winter, attributable to domestic cooking activities. A further interesting result was the total absence of a sea salt contribution in the finer particulate fraction (PM2.5), while in PM10 the contribution becomes appreciable only under particular wind conditions (strong wind from the north or north-east). Finally, it is interesting to note that in a small town like Rimini, in summer, the traffic source appears to be even more relevant than that measured in a much larger city (Bologna), due to tourism.
Keywords: aerosol, biomass burning, seacoast, urban area
Procedia PDF Downloads 129
516 A Holistic View of Microbial Community Dynamics during a Toxic Harmful Algal Bloom
Authors: Shi-Bo Feng, Sheng-Jie Zhang, Jin Zhou
Abstract:
The relationship between microbial diversity and algal blooms has received considerable attention for decades. Microbes undoubtedly affect annual bloom events, impact the physiology of both partners, and shape ecosystem diversity. However, knowledge about the interactions and network correlations among a broader spectrum of microbes that drive the dynamics of a complete bloom cycle is limited. In this study, pyrosequencing and network approaches were used to simultaneously assess the association patterns among bacteria, archaea, and microeukaryotes in surface water and sediments in response to a natural dinoflagellate (Alexandrium sp.) bloom. In surface water, among the bacterial community, Gamma-Proteobacteria and Bacteroidetes dominated in the initial bloom stage, while Alpha-Proteobacteria, Cyanobacteria, and Actinobacteria became the most abundant taxa during the post-bloom stage. The archaeal biosphere clustered predominantly with methanogenic members in the early pre-bloom period, while the majority of species identified in the late-bloom stage were ammonia-oxidizing archaea and Halobacteriales. Among the eukaryotes, the dinoflagellate (Alexandrium sp.) dominated in the onset stage, whereas multiple species (such as microzooplankton, diatoms, green algae, and rotifers) coexisted in the bloom-collapse stage. In the sediments, microbial biomass and species richness were much higher than in the water body. Only Flavobacteriales and Rhodobacterales showed a slight response to the bloom stages. Unlike the bacteria, the archaeal and eukaryotic structures in the sediment showed only small fluctuations. Network analyses of the inter-specific associations show that bacteria (Alteromonadaceae, Oceanospirillaceae, Cryomorphaceae, and Piscirickettsiaceae) and some zooplankton (Mediophyceae, Mamiellophyceae, Dictyochophyceae, and Trebouxiophyceae) have a stronger impact on the structuring of phytoplankton communities than the archaea.
The changes in populations were also significantly shaped by water temperature and substrate availability (N and P resources). The results suggest that clades are specialized at different time periods, that the pre-bloom succession was mainly bottom-up controlled, and that the late-bloom period was controlled by top-down patterns. Additionally, the phytoplankton and prokaryotic communities correlated well with each other, indicating that interactions among microorganisms are critical in controlling plankton dynamics and fates. Our results supply a wider view (on temporal and spatial scales) for understanding microbial ecological responses and their network associations during an algal bloom. They give a potential multidisciplinary explanation for algal-microbe interaction and take us beyond the traditional view of the patterns of algal bloom initiation, development, decline, and biogeochemistry.
Keywords: microbial community, harmful algal bloom, ecological process, network
Procedia PDF Downloads 116
515 Peak Constituent Fluxes from Small Arctic Rivers Generated by Late Summer Episodic Precipitation Events
Authors: Shawn G. Gallaher, Lilli E. Hirth
Abstract:
As permafrost thaws with the continued warming of the Alaskan North Slope, a progressively thicker active thaw layer is releasing previously sequestered nutrients, metals, and particulate matter, exposing them to fluvial transport. In this study, we estimate material fluxes on the North Slope of Alaska during the 2019-2022 melt seasons. The watershed of the Alaskan North Slope can be divided into three regions: mountains, tundra, and coastal plain. Precipitation and discharge data were collected from repeat visits to 14 sample sites for biogeochemical surface water samples, 7 point-discharge measurement sites, 3 project-deployed meteorology stations, and 2 U.S. Geological Survey (USGS) continuous discharge observation sites. The timing, intensity, and spatial distribution of precipitation determine the material flux composition in the Sagavanirktok and surrounding bodies of water, with geogenic constituents (e.g., dissolved inorganic carbon (DIC)) expected from mountain flushing events and biogenic constituents (e.g., dissolved organic carbon (DOC)) expected from transitional tundra precipitation events. Project goals include connecting late summer precipitation events to peak discharge to determine the responses of the watershed to localized atmospheric forcing. Field measurements showed widespread precipitation in August 2019, generating an increase in total suspended solids, dissolved organic carbon, and iron fluxes from the tundra, and shifting the main-stem mountain river biogeochemistry toward tundra source characteristics typically observed only during the spring floods. Intuitively, a large-scale precipitation event (defined in this study as exceeding 12.5 mm of precipitation on a single observation day) would dilute a body of water; however, in this study, concentrations increased with higher discharge responses on several occasions.
These large-scale precipitation events continue to produce peak constituent fluxes as the thaw layer increases in depth and late summer precipitation increases, evidenced by 6 large-scale events in July 2022 alone. This increase in late summer events is in sharp contrast to the 3 or fewer large July events in each of the last 10 years. Changes in precipitation intensity, timing, and location have introduced late summer peak constituent flux events previously confined to the spring freshet.
Keywords: Alaska North Slope, arctic rivers, material flux, precipitation
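A constituent flux of the kind estimated above is, at its simplest, the product of concentration and river discharge. The sketch below shows that calculation with illustrative numbers; the concentration and discharge values are assumptions for demonstration, not measurements from the study.

```python
def constituent_flux(concentration_mg_per_l, discharge_m3_per_s):
    """Instantaneous constituent flux in g/s.
    1 mg/L equals 1 g/m^3, so flux (g/s) = C (g/m^3) * Q (m^3/s)."""
    return concentration_mg_per_l * discharge_m3_per_s

# Illustrative values (assumed, not measured): DOC at 2 mg/L, discharge 50 m^3/s
flux = constituent_flux(2.0, 50.0)  # 100 g of DOC per second
```

This product is why concentrations rising with discharge, as observed in the study, produce disproportionately large peak fluxes: both factors grow at once during a storm response.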
Procedia PDF Downloads 76
514 Examining the Overuse of Cystoscopy in the Evaluation of Lower Urinary Tract Symptoms in Men with Benign Prostatic Hyperplasia: A Prospective Study
Authors: Ilija Kelepurovski, Stefan Lazorovski, Pece Petkovski, Marian Anakievski, Svetlana Petkovska
Abstract:
Introduction: Benign prostatic hyperplasia (BPH) is a common condition that affects men over the age of 50 and is characterized by an enlarged prostate gland that can cause lower urinary tract symptoms (LUTS). Uroflowmetry and cystoscopy are two commonly used diagnostic tests to evaluate LUTS and diagnose BPH. While both tests can be useful, there is a risk of overusing cystoscopy and underusing uroflowmetry in the evaluation of LUTS. The aim of this study was to compare the use of uroflowmetry and cystoscopy in a prospective cohort of 100 patients with suspected BPH or other urinary tract conditions and to assess the diagnostic yield of each test. Materials and Methods: This was a prospective study of 100 male patients over the age of 50 with suspected BPH or other urinary tract conditions who underwent uroflowmetry and cystoscopy for the evaluation of LUTS at a single tertiary care center. Inclusion criteria were male sex, age over 50, and suspected BPH or other urinary tract conditions; exclusion criteria were previous urethral or bladder surgery, active urinary tract infection, and significant comorbidities. The primary outcome of the study was the frequency of cystoscopy in the evaluation of LUTS, and the secondary outcome was the diagnostic yield of each test. Results: Of the 100 patients included in the study, 86 (86%) were diagnosed with BPH and 14 (14%) had other urinary tract conditions. The mean age of the study population was 67 years. Uroflowmetry was performed on all 100 patients, while cystoscopy was performed on 70 (70%). The diagnostic yield of uroflowmetry was high, with a clear diagnosis made in 92 (92%) of the patients. The diagnostic yield of cystoscopy was also high, with a clear diagnosis made in 63 (90%) of the patients who underwent the procedure. There was no statistically significant difference between the diagnostic yields of uroflowmetry and cystoscopy (p = 0.20).
Discussion: Our study found that uroflowmetry is an effective and well-tolerated diagnostic tool for evaluating LUTS and diagnosing BPH, with a high diagnostic yield and a low risk of complications. Cystoscopy is also a useful diagnostic tool, but it is more invasive and carries a small risk of complications such as bleeding or urinary tract infection. Both tests had a high diagnostic yield, suggesting that either can provide useful information in the evaluation of LUTS. However, the fact that 70% of the study population underwent cystoscopy raises concerns about the potential overuse of this test. This is especially relevant given the focus on patient-centered care and the need to minimize unnecessary or invasive procedures. Our findings underscore the importance of considering the clinical context and using evidence-based guidelines. Conclusion: In this prospective study of 100 patients with suspected BPH or other urinary tract conditions, we found that uroflowmetry and cystoscopy were both valuable diagnostic tools for the evaluation of LUTS. However, the potential overuse of cystoscopy in this population warrants further investigation and highlights the need for careful consideration of the optimal use of diagnostic tests in the evaluation of LUTS and the diagnosis of BPH. Further research is needed to better understand the relative roles of uroflowmetry and cystoscopy in the diagnostic workup of patients with LUTS and to develop evidence-based guidelines for their appropriate use.
Keywords: uroflowmetry, cystoscopy, LUTS, BPH
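A comparison of two diagnostic yields like the one reported (92/100 vs. 63/70) is commonly made with a two-proportion z-test. The sketch below shows that calculation; the abstract does not state which test produced its p-value, so this is an illustration of the method rather than a reproduction of the study's statistics.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Diagnostic yields from the abstract: 92/100 (uroflowmetry) vs 63/70 (cystoscopy)
z, p = two_proportion_z_test(92, 100, 63, 70)
```

With these counts the difference is far from significance at the 5% level, consistent with the abstract's conclusion that the two yields do not differ statistically.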
Procedia PDF Downloads 77
513 Structural Analysis and Evolution of 18th Century Ottoman Imperial Mosques (1750-1799) in Comparison with the Classical Period Examples
Authors: U. Demir
Abstract:
The 18th century, the period of 'change' in the Ottoman Empire, affected architecture as well: the Classical period was left behind, and architecture became differentiated in its formal language. This change is especially noticeable in monumental buildings and thus manifested itself in the mosques. But is it possible to speak of a structural counterpart to the 'change' that occurred in decoration? The aim of this study is to investigate the changes, and the relations with the classical tradition, of the 18th century mosques through their plan schemes and structural systems. This study focuses on the monumental mosques constructed during the reigns of the three sultans who ruled in the second half of the century (Mustafa III, 1757-1774; Abdülhamid I, 1774-1789; and Selim III, 1789-1807). In order of construction, these are the Ayazma, Laleli, Zeyneb Sultan, Fatih, Beylerbeyi, Şebsefa Kadın, Eyüb Sultan, Mihrişah Valide Sultan, and Üsküdar-Selimiye mosques. As a plan scheme, four mosques have a square or nearly square scheme, while the others have a rectangular scheme showing longitudinal development along the mihrab axis. This scheme is widespread throughout the period. In addition to the longitudinal development plan, which is the general characteristic of 18th century mosques, the use of classical plan schemes continued in the same direction. Spatialization of the mihrab area was applied in five mosques, while in the others the mihrab was formed as a niche in the wall surface; this, too, is widespread in the second half of the century. In the classical period, the lodges could be located at the back of the mosque interior, without interfering with the main worship area. In this period, the lodges were withdrawn from the main worship area and separated from the main interior by their own structural and covering systems.
The plans seem to have been formed by the addition of lodge sections to the northern part of the Classical period mosque scheme. The 18th century mosques are constructions in which the change of architectural language and style can be observed easily. This change, and the break from the classical period, manifest themselves quickly in the structural elements, wall surface decorations, pencil-work designs, small-scale decorative elements, and motifs. The speed and intensity of change in the decoration, however, are not matched in the structural context. The mosque construction rules of the traditional and classical era still continued in this century. While some mosques have a plan inherited from their classical predecessors, others were constructed by the same classical-period rules. Nonetheless, the location and transformation of the lodges, which affect the interior design, are noteworthy. They provide a significant transition on the way to the new language of mosque design that would be experienced in the next century. Within the scope of this conference, attention is drawn to the structural evolution of 18th century Ottoman architecture through the royal mosques.
Keywords: mosque structure, Ottoman architecture, structural evolution, 18th century architecture
Procedia PDF Downloads 201
512 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses
Authors: André Jesus, Yanjie Zhu, Irwanda Laory
Abstract:
Structural health monitoring (SHM) is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. In particular, the widespread application of numerical models (model-based SHM) is accompanied by a widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (surrogate models). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes are estimated in a four-stage process (the modular Bayesian approach). This methodology has been successfully applied in fields such as geoscience, biomedicine and particle physics, but never in the SHM context. The approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, the formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters.
A comparison of its performance with responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability and systematic uncertainty were all recovered. For this example the algorithm's performance was stable and considerably quicker than that of Bayesian methods which account for the full extent of the uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
Keywords: Bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process
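The core idea of treating systematic uncertainty as a GP-modelled discrepancy can be sketched in a few lines. The following is a minimal illustration, not the authors' four-stage modular Bayesian algorithm: it scores each candidate parameter value by how plausible the residual (observation minus model output) is under a squared-exponential GP prior for the discrepancy plus observation noise, and picks the best value on a grid. The names (`calibrate`, the toy simulator) and the kernel settings are illustrative assumptions.

```python
import numpy as np

def sq_exp_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between 1-D point sets a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length ** 2)

def calibrate(x_obs, y_obs, simulator, theta_grid, noise=1e-2):
    """Grid-search stand-in for the parameter-estimation stage:
    each candidate theta is scored by the GP quadratic form
    r^T (K + noise*I)^{-1} r of the residual r = y - eta(x, theta),
    i.e. how plausible the residual is as 'smooth discrepancy + noise'.
    (The log-determinant term of the GP likelihood is constant here
    because the kernel is held fixed across candidates, so it is omitted.)"""
    K = sq_exp_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    scores = [float(r @ np.linalg.solve(K, r))
              for r in (y_obs - simulator(x_obs, t) for t in theta_grid)]
    return theta_grid[int(np.argmin(scores))]
```

On synthetic data generated as y = 2x plus a small smooth discrepancy, the grid search recovers theta = 2 even though the model alone cannot reproduce the observations exactly, which is the point of carrying the discrepancy term.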
Procedia PDF Downloads 327
511 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites
Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer
Abstract:
Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant which operated in Sydney for nearly a century. The contaminants comprised coal tar residues that were discharged from the coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently discharged into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; immobilizing contaminants in the STPs using solidification and stabilization was therefore identified as the primary source-control remediation option to mitigate against continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on distinguishing "mobile" from "immobile" contaminants at remediation sites. Forensic source evaluations are also increasingly used for understanding the origins of PAH contaminants in soils or sediments. Flux- and forensic-source-informed remediation decision-making uses this information to develop remediation end-point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included a review of previous flux studies, calculation of current mass flux estimates, and a forensic assessment using PAH fingerprint techniques, during remediation of one of Canada's most polluted sites at the STPs. Historically, the STPs were thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring, only 17-97 kg/year of PAHs were discharged from the STPs, which was also corroborated by an independent PAH flux study during the first year of remediation that estimated 119 kg/year.
The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also three to eight times lower than the PAHs discharged from the STPs a decade prior to remediation, when, at the same time, government studies demonstrated an ongoing reduction in PAH concentrations in harbour sediments. The flux results were also corroborated by forensic source evaluations using PAH fingerprint techniques, which found a common source of PAHs for urban soils and for marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current transshipment facilities) are likely the principal sources of PAHs in these media, rather than migration of PAH-laden sediments from the STPs during a large-scale remediation project.
Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation
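Mass flux figures like the 17-97 kg/year above are typically derived from measured concentrations and discharge volumes. A minimal sketch of that arithmetic follows; the input numbers in the usage note are illustrative, not the site's measured values:

```python
SECONDS_PER_YEAR = 3.156e7  # ~365.25 days

def annual_mass_flux_kg(conc_mg_per_l, flow_m3_per_s):
    """Annual contaminant mass flux (kg/year) from a mean concentration
    and a mean volumetric discharge. 1 mg/L equals 1 g/m^3, so
    flux [g/yr] = conc [g/m^3] * flow [m^3/s] * seconds per year,
    then divide by 1000 to convert grams to kilograms."""
    return conc_mg_per_l * flow_m3_per_s * SECONDS_PER_YEAR / 1000.0
```

For example, a hypothetical mean PAH concentration of 0.01 mg/L in a 0.3 m³/s discharge corresponds to roughly 95 kg/year, the order of magnitude reported during remediation monitoring.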
Procedia PDF Downloads 239
510 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
Current CFD tools undoubtedly have many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions of the fluid dynamic equations, and instances of these solutions can be computed directly from the equations; the approach commonly implemented for this is known as 'direct numerical simulation' (DNS). This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, known as the Kolmogorov scale. Note that the Kolmogorov scale must be resolved throughout the domain of interest, and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks, and at this time in its development DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described.
Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem at low supersonic conditions, and the reflected oblique shock-vortex interaction problem. The IDS solutions obtained in each of these cases will be explored further in an effort to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effect of the Mach number on the intensity of vortex-shockwave interactions.
Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
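The viscous Burgers equation mentioned above, u_t + u·u_x = ν·u_xx, is the standard scalar testbed for unsteady nonlinear schemes. The following is a plain explicit finite-difference sketch (first-order upwind convection, central diffusion, periodic boundaries) for context only; it is not the IDS itself:

```python
import numpy as np

def burgers_step(u, dx, dt, nu):
    """One explicit time step of the viscous Burgers equation
    u_t + u u_x = nu * u_xx on a periodic grid: first-order upwind
    differencing for the convective term (switching on the sign of u),
    central differences for diffusion. Stable roughly for
    dt <= min(dx / max|u|, dx^2 / (2 nu))."""
    um, up = np.roll(u, 1), np.roll(u, -1)
    conv = np.where(u > 0.0, u * (u - um) / dx, u * (up - u) / dx)
    diff = nu * (up - 2.0 * u + um) / dx ** 2
    return u + dt * (diff - conv)
```

Starting from u = sin(x) on [0, 2π), repeated stepping shows the characteristic front steepening followed by viscous decay of the wave.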
Procedia PDF Downloads 139
509 A Brazilian Study Applied to the Regulatory Environmental Issues of Nanomaterials
Authors: Luciana S. Almeida
Abstract:
Nanotechnology has revolutionized the world of science and technology, bringing great expectations due to its great potential for application in the most varied industrial sectors. However, the same characteristics that make nanoparticles interesting from the point of view of technological application may be undesirable when they are released into the environment. The small size of nanoparticles facilitates their diffusion and transport in the atmosphere, water, and soil, and facilitates their entry and accumulation in living cells. The main objective of this study is to evaluate the environmental regulatory process for nanomaterials in the Brazilian scenario. Three specific objectives were outlined. The first is to carry out a global scientometric study on a research platform, with the purpose of identifying the main lines of study of nanomaterials in the environmental area. The second is to verify, by means of a bibliographic review, how environmental agencies in other countries have been working on this issue. And the third is to carry out an assessment of the Brazilian Nanotechnology Draft Law 6741/2013 with the state environmental agencies, with the aim of identifying the agencies' knowledge of the subject and the resources available in the country for the implementation of the Policy. A questionnaire will be used as the tool for this evaluation, to identify the operational elements and build indicators through the Environment of Evaluation Application, a computational application developed for the creation of questionnaires. At the end, the need to propose changes to the Draft Law of the National Nanotechnology Policy will be assessed. Initial studies related to the first specific objective have already identified that Brazil stands out in the production of scientific publications in the area of nanotechnology, although only a minority are studies focused on environmental impact.
Regarding the general panorama of other countries, some findings have also been raised. The United States has included the nanoforms of substances in an existing EPA (Environmental Protection Agency) program, the TSCA (Toxic Substances Control Act). The European Union issued a draft document amending Regulation 1907/2006 of the European Parliament and Council to cover the nanoforms of substances. Both programs are based on the study and identification of environmental risks associated with nanomaterials, taking the product life cycle into consideration. In relation to Brazil, and regarding the third specific objective, it is notable that the country does not have any regulations applicable to nanostructures, although a Draft Law is in progress. In this document, it is possible to identify some requirements related to the environment, such as environmental inspection and licensing, industrial waste management, notification of accidents, and the application of sanctions. However, it is not known whether these requirements are sufficient for the prevention of environmental impacts, nor whether the national environmental agencies will know how to apply them correctly. This study intends to serve as a basis for future actions regarding environmental management applied to the use of nanotechnology in Brazil.
Keywords: environment, management, nanotechnology, policy
Procedia PDF Downloads 123
508 Strategies for Public Space Utilization
Authors: Ben Levenger
Abstract:
Social life revolves around a central meeting place or gathering space: it is where the community integrates, where people learn social skills, and ultimately where they become part of the community. Following this premise, public spaces are among the most important spaces a downtown offers, providing locations for people to be seen and heard and, most importantly, to integrate seamlessly into the downtown as part of the community. To facilitate this, these local spaces must be envisioned and designed to meet the changing needs of a downtown, offering a space and purpose for everyone. This paper will dive deep into analyzing, designing, and implementing public space design for small plazas and gathering spaces. These spaces often require a detailed level of study, followed by broad-stroke design implementation that allows for adaptability. This paper will highlight how to assess needs, define the types of spaces needed, outline a program for spaces, detail elements of design to meet the needs, assess the new space, and plan for change. This study will provide participants with the necessary framework for conducting a grass-roots-level assessment of public space and programming, including short-term and long-term improvements. Participants will also receive assessment tools, sheets, and visual representation diagrams. Urbanism for the sake of urbanism is an exercise in aesthetic beauty; an economic improvement or benefit must be attained to solidify the purpose of these efforts and justify the infrastructure and construction costs. To ground this work in quantitative impacts, we will take a deep dive into case studies highlighting economic impacts.
These case studies will highlight the financial impact on an area by measuring the following metrics: rental rates (per sq meter), tax revenue generation (sales and property), foot traffic generation, increased property valuations, currency expenditure by tenure, clustered development improvements, and the cost/valuation benefits of increased density in housing. The economic impact results will be grouped by community size, measured in three tiers: under 10,000 in population, 10,001 to 75,000 in population, and over 75,000 in population. Through this classification breakdown, participants can gauge the impact in communities similar to those they work in or are responsible for. Finally, a detailed analysis of specific urbanism enhancements, such as plazas, on-street dining, and pedestrian malls, will be discussed. Metrics that document the economic impact of each enhancement will be presented, aiding in the prioritization of improvements for each community. All materials, documents, and information will be available to participants via Google Drive; they are welcome to download the data and use it for their own purposes.
Keywords: downtown, economic development, planning, strategic
Procedia PDF Downloads 85
507 Accelerating Malaysian Technology Startups: Case Study of Malaysian Technology Development Corporation as the Innovator
Authors: Norhalim Yunus, Mohamad Husaini Dahalan, Nor Halina Ghazali
Abstract:
Building technology start-ups from ground zero into world-class companies in form and substance presents a rare opportunity for government-affiliated institutions in Malaysia. The challenge of building such start-ups becomes tougher when their core businesses involve the commercialization of unproven technologies for the mass market. These simple truths, while difficult to execute, go a long way toward getting a business off the ground and flying high. Malaysian Technology Development Corporation (MTDC), a company founded to facilitate the commercial exploitation of R&D findings from research institutions and universities, and eventually to help translate these findings into applications in the marketplace, is an excellent case in point. The purpose of this paper is to examine MTDC as an institution as it explores the concept of 'it takes a village to raise a child' in an effort to create and nurture start-ups into established, world-class Malaysian technology companies. With MTDC at the centre of Malaysia's innovative start-ups, the analysis seeks to answer two specific questions: how has the concept been applied in MTDC, and what can we learn from this successful case? A key aim is to elucidate how MTDC's journey as a private limited company can help leverage reforms and achieve transformation, a process that might be suitable for other small, open, developing countries. This paper employs a single case study, designed to acquire an in-depth understanding of how MTDC has developed and grown technology start-ups into world-class technology companies. The case study methodology is employed because the focus is on a contemporary phenomenon within a real business context, and because it can explain the causal links in real-life situations that a single survey or experiment is unable to unearth.
The findings show that MTDC embodies the concept of 'it takes a village to raise a child' in its totality, as MTDC itself assumes the role of the innovator to 'raise' start-up companies to world-class stature. As the innovator, MTDC creates shared value and leadership, introduces innovative programmes ahead of the curve, mobilises talent for optimum results, and aggregates knowledge for personnel advancement. The success of the company's efforts is attributed largely to leadership, vision, adaptability, commitment to innovation, partnership and networking, and entrepreneurial drive. The findings of this paper are, however, limited by the single case study of MTDC. Future research is required to study more cases of success and/or failure in which the concept of 'it takes a village to raise a child' has been explored and applied.
Keywords: start-ups, technology transfer, commercialization, technology incubator
Procedia PDF Downloads 151
506 Monitoring and Evaluation of Web-Services Quality and Medium-Term Impact on E-Government Agencies' Efficiency
Authors: A. F. Huseynov, N. T. Mardanov, J. Y. Nakhchivanski
Abstract:
This practical research is aimed at improving the management quality and efficiency of public administration agencies providing e-services. The monitoring system developed will provide a continuous review of the websites' compliance with the selected indicators, their evaluation based on those indicators, and a ranking of services according to the quality criteria. The responsible departments in the government agencies were surveyed; the questionnaire covers issues of management and feedback, the e-services provided, and the application of information systems. By analyzing the main affecting factors and barriers, recommendations will be given that lead to the relevant decisions to strengthen the state agencies' competencies for the management and provision of their services. Component 1: E-services monitoring system. Three separate monitoring activities are proposed to be executed in parallel. (1) Continuous tracing of e-government sites using a built-in web-monitoring program; this program generates several quantitative values which are mostly related to the technical characteristics and performance of the websites. (2) Expert assessment of e-government sites in accordance with two general criteria. Criterion 1: technical quality of the site. Criterion 2: usability/accessibility (load, see, use). Each high-level criterion is in turn subdivided into several sub-criteria, such as: the fonts and the colour of the background (is it readable?), W3C coding standards, availability of robots.txt and a site map, the search engine, the feedback/contact mechanisms, and the security mechanisms. (3) An on-line survey of the users/citizens: a small group of questions embedded in the e-service websites. The questionnaires comprise information concerning navigation, the users' experience with the website (whether it was positive or negative), etc.
Automated monitoring of websites on its own cannot capture the whole evaluation process, and should therefore be seen as a complement to experts' manual web evaluations. All of the separate results were integrated to provide the complete evaluation picture. Component 2: Assessment of the agencies'/departments' efficiency in providing e-government services. The relevant indicators to evaluate the efficiency and effectiveness of e-services were identified; the survey was conducted in all the governmental organizations (ministries, committees and agencies) that provide electronic services for citizens or businesses; and the quantitative and qualitative measures cover the following areas of activity: e-governance, e-services, feedback from users, and the information systems at the agencies' disposal. Main results: 1. The software program and the set of indicators for website evaluation have been developed, and the results of pilot monitoring have been presented. 2. The (internal) efficiency of the e-government agencies has been evaluated based on the survey results, with practical recommendations related to human potential, the information systems used, and the e-services provided.
Keywords: e-government, web-sites monitoring, survey, internal efficiency
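Several of the technical-quality sub-criteria above (a non-empty title, a declared character set, a feedback/contact link, a search facility) can be screened automatically on fetched HTML. The sketch below is an illustrative stand-in for the built-in monitoring program described in the abstract, not its actual implementation; the check list and the equal weighting are assumptions:

```python
import re

# Each check is a regex applied to the raw HTML of one fetched page.
CHECKS = {
    "title": re.compile(r"<title>\s*\S.*?</title>", re.I | re.S),
    "charset": re.compile(r"<meta[^>]+charset", re.I),
    "contact_link": re.compile(r"href=[\"'][^\"']*contact", re.I),
    "search_form": re.compile(r"<form[^>]*search|type=[\"']search", re.I),
}

def score_page(html):
    """Run all checks on one page; return the per-check booleans and
    a 0-100 technical-quality score with equal weights per check."""
    results = {name: bool(rx.search(html)) for name, rx in CHECKS.items()}
    return results, 100.0 * sum(results.values()) / len(CHECKS)
```

In a full system these per-page scores would feed the ranking stage alongside the expert and user-survey criteria.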
Procedia PDF Downloads 305
505 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography
Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner
Abstract:
Nowadays, three-dimensional cone beam CT (CBCT) has become a widespread routine clinical imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT needed for intraoperative procedures, as well as for daily pretreatment patient alignment in radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the amount of radiation dose required for these interventional images. It is therefore desirable to find optimized source-detector trajectories with a reduced number of projections, which could in turn lead to dose reduction. In this study we investigate source-detector trajectories with optimal arbitrary orientations, chosen to maximize the performance of the reconstructed image at particular regions of interest. For this approach, we developed a box phantom containing several small polytetrafluoroethylene target spheres at regular distances throughout the phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D point spread function (PSF) as a measure of the performance of the reconstructed image: we measure the spatial variance, in terms of the full width at half maximum (FWHM), of the local PSF associated with each target. A lower FWHM value indicates better spatial resolution of the reconstruction at the target area. One important feature of interventional radiology is that the imaging targets are very well known, since prior knowledge of patient anatomy (e.g. a preoperative CT) is usually available for interventional imaging. Therefore, we use a CT scan of the box phantom as the prior knowledge and treat it as the digital phantom in our simulations to find the optimal trajectory for a specific target.
Based on the simulation phase, the optimal trajectory can then be applied on the device in a real situation. We consider a Philips Allura FD20 Xper C-arm geometry to perform the simulations and the real data acquisition. Our experimental results, based on both simulated and real data, show that the proposed optimization scheme has the capacity to find optimized trajectories with a minimal number of projections in order to localize the targets. The proposed optimized trajectories are able to localize the targets as well as a standard circular trajectory does, while using just one third of the number of projections. Conclusion: we demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets, and may minimize the radiation dose.
Keywords: CBCT, C-arm, reconstruction, trajectory optimization
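The FWHM figure of merit used above can be computed directly from a sampled PSF profile. A minimal sketch for the 1D, single-peaked case, with linear interpolation at the two half-maximum crossings (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a single-peaked 1D profile.
    `spacing` is the physical distance between samples; the two
    half-maximum crossings are located by linear interpolation
    between the last sample above half max and its neighbour."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    i, j = above[0], above[-1]
    left = i - (y[i] - half) / (y[i] - y[i - 1]) if i > 0 else float(i)
    right = j + (y[j] - half) / (y[j] - y[j + 1]) if j < len(y) - 1 else float(j)
    return (right - left) * spacing
```

For a Gaussian PSF of standard deviation sigma, the result should approach 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma, which makes a convenient sanity check.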
Procedia PDF Downloads 132
504 The Effects of the New Silk Road Initiatives and the Eurasian Union to the East-Central-Europe’s East Opening Policies
Authors: Tamas Dani
Abstract:
The author's research explores the geo-economic role and importance of some small and medium-sized states, and reviews their adaptation strategies in foreign trade and foreign affairs during the shift toward a multipolar world, against an international background. With these, the paper analyses the recent years and the future of the 'Eastern opening' foreign economic policies of East-Central Europe and, in parallel, the 'Western' foreign economic policies coming from Asia, such as the Chinese One Belt One Road new Silk Road plans (a large part of which is, so far, an infrastructure development plan serving international trade and investment aims). A current question is whether these ideas will reshape global trade, and how the new Silk Road initiatives and the Eurasian Union reflect the effects of globalization. It is worth analysing how Central and Eastern European countries opened towards Asia, why China is the focus of the opening policies of many countries, and why China can be seen as the 'winner' of the world economic crisis after 2008. The research is based on the following methodologies: national and international literature, policy documents and related planning documents, complemented by the processing of international databases and statistics, and live interviews with leaders of East-Central European companies and public administrations, diplomats, and international traders. The results are also illustrated by maps and graphs. As its major finding, the research will establish whether state decision-makers have enough margin for manoeuvre to strengthen foreign economic relations. The hypothesis of this work is that countries in East-Central Europe have a real chance to diversify their foreign trade relations and focus beyond their traditional partners. This essay focuses on the opportunities of East-Central European countries in diversifying foreign trade relations towards China and Russia in terms of 'Eastern openings'.
It examines the effects of the new Silk Road initiatives and the Eurasian Union on Hungary's economy, with a comparative outlook on other East-Central European countries, and explores common regional cooperation opportunities in this area. The essay concentrates on the changing trade relations between East-Central Europe and China, as well as Russia, and tries to analyse the effects of the new Silk Road initiatives and the Eurasian Union on these relations. The conclusion shows why cooperation is necessary for the East-Central European countries if they want non-asymmetric trade with Russia, China, or particular Chinese regions (Pearl River Delta, Hainan, …). The forms of cooperation available to the East-Central European nations include the Visegrad 4 Cooperation (V4), the Central and Eastern European Countries format (CEEC16), and the Three Seas Initiative (or BABS: Baltic, Adriatic, Black Seas Initiative).
Keywords: China, East-Central Europe, foreign trade relations, geoeconomics, geopolitics, Russia
Procedia PDF Downloads 183