966 A Development of English Pronunciation Using Principles of Phonetics for English Major Students at Loei Rajabhat University
Authors: Pongthep Bunrueng
Abstract:
This action research accentuates the outcome of a development in English pronunciation, using principles of phonetics, for English major students at Loei Rajabhat University. The research is split into 5 separate modules: 1) Organs of Speech and How to Produce Sounds, 2) Monophthongs, 3) Diphthongs, 4) Consonant Sounds, and 5) Suprasegmental Features. Each module followed a four-step action research process: 1) Planning, 2) Acting, 3) Observing, and 4) Reflecting. The research targeted 2nd year students majoring in English Education at Loei Rajabhat University during the 2011 academic year. A mixed methodology employing both quantitative and qualitative research was used, which put theory into action, moving from segmental features up to suprasegmental features. Multiple tools were employed, including the following documents: pre-test and post-test papers, evaluation and assessment papers, group work assessment forms, a presentation grading form, an observation of participants form and a participant self-reflection form. In all 5 modules, post-test results for the target group were higher than pre-test results, with statistical significance at the 0.01 level. All target groups attained results ranging from low to moderate and from moderate to high performance. The participants who attained low to moderate results had to repeat the process in a second round. During the first development stage, participants attended classes with group participation, in which they addressed planning through mutual co-operation and sharing of responsibility. Analytic induction of the strong points of this operation illustrated that learner cognition, comprehension, application, and group practices were all present, whereas weak results could be attributed to biological differences, differences in life and learning, or individual differences in responsiveness and self-discipline.
Participants who required re-treatment in Spiral 2 received the same treatment again. After the second treatment, participants attained higher scores on the tests for all 5 modules than in the pre-test. Their assessment and development stages also showed improved results. They showed greater confidence in participating in activities, produced higher quality work, and correctly followed instructions for each activity. Analytic induction of the strong and weak points of this operation remained the same as for Spiral 1, though problems which existed prior to the second treatment showed improvement.
Keywords: action research, English pronunciation, phonetics, segmental features, suprasegmental features
Procedia PDF Downloads 302
965 Influence of Controlled Retting on the Quality of the Hemp Fibres Harvested at the Seed Maturity by Using a Designed Lab-Scale Pilot Unit
Authors: Brahim Mazian, Anne Bergeret, Jean-Charles Benezet, Sandrine Bayle, Luc Malhautier
Abstract:
Hemp fibres are increasingly used as reinforcements in polymer matrix composites due to their competitive performance (low density, mechanical properties and biodegradability) compared to conventional fibres such as glass fibres. However, the huge variation in their biochemical, physical and mechanical properties limits the use of these natural fibres in structural applications where high consistency and homogeneity are required. In the hemp industry, a traditional process termed field retting is commonly used to facilitate the extraction and separation of stem fibres. This retting treatment consists of spreading out the stems on the ground for a duration ranging from a few days to several weeks. Microorganisms (fungi and bacteria) grow on the stem surface and produce enzymes that degrade pectinolytic substances in the middle lamellae surrounding the fibres. This operation depends on the weather conditions and is currently carried out very empirically in the fields, resulting in large variability in hemp fibre quality (mechanical properties, color, morphology, chemical composition…). Nonetheless, if controlled, retting might favour good properties of hemp fibres and hence of hemp fibre reinforced composites. Therefore, the present study aims to investigate the influence of controlled retting within a designed environmental chamber (lab-scale pilot unit) on the quality of hemp fibres harvested at the seed maturity growth stage. Various assessments were applied directly to the fibres: color observations, morphological (optical microscope), surface (ESEM) and biochemical (gravimetry) analyses, spectrocolorimetric measurements (pectin content), thermogravimetric analysis (TGA) and tensile testing. The results reveal that controlled retting leads to a rapid change of color from yellow to dark grey due to the development of microbial communities (fungi and bacteria) at the stem surface.
An increase in the thermal stability of the fibres, due to the removal of non-cellulosic components during retting, is also observed. Bast fibres separated into elementary fibres, with an evolution of chemical composition (degradation of pectins) and a rapid decrease in tensile properties (380 MPa to 170 MPa after 3 weeks) due to the accelerated retting process. The influence of controlled retting on the properties of the biocomposite material (PP / hemp fibres) is under investigation.
Keywords: controlled retting, hemp fibre, mechanical properties, thermal stability
Procedia PDF Downloads 158
964 Aerosol Radiative Forcing Over Indian Subcontinent for 2000-2021 Using Satellite Observations
Authors: Shreya Srivastava, Sushovan Ghosh, Sagnik Dey
Abstract:
Aerosols directly affect Earth’s radiation budget by scattering and absorbing incoming solar radiation and outgoing terrestrial radiation. While the uncertainty in aerosol radiative forcing (ARF) has decreased over the years, it is still higher than that of greenhouse gas forcing, particularly in the South Asian region, due to high heterogeneity in aerosol chemical properties. Understanding the spatio-temporal heterogeneity of aerosol composition is critical to improving climate prediction. Studies using satellite data, in-situ and aircraft measurements, and models have investigated the spatio-temporal variability of aerosol characteristics. In this study, we have taken aerosol data from the Multi-angle Imaging SpectroRadiometer (MISR) level-2 version 23 aerosol products retrieved at 4.4 km and radiation data from the Clouds and the Earth’s Radiant Energy System (CERES, spatial resolution = 1° x 1°) for 21 years (2000-2021) over the Indian subcontinent. The MISR aerosol product includes size- and shape-segregated aerosol optical depth (AOD), Angstrom exponent (AE), and single scattering albedo (SSA). Additionally, 74 aerosol mixtures are included in the version 23 data, which are used for aerosol speciation. We have seasonally mapped aerosol optical and microphysical properties from MISR for India at quarter-degree resolution. Results show strong spatio-temporal variability, with consistently higher AOD over the Indo-Gangetic Plain (IGP). The contribution of small-size particles is higher throughout the year, especially during the winter months. SSA is found to be overestimated where absorbing particles are present. The climatological map of short-wave (SW) ARF at the top of the atmosphere (TOA) shows strong cooling except in a few places (values ranging from +2.5 to -22.5 W/m2). Cooling due to aerosols is higher in the absence of clouds. Higher negative values of ARF are found over the IGP region, given the high aerosol concentration over the region.
Surface ARF values are negative everywhere in our study domain, with higher magnitudes under clear conditions. The results correlate strongly with AOD from MISR and ARF from CERES.
Keywords: aerosol radiative forcing (ARF), aerosol composition, single scattering albedo (SSA), CERES
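As a toy illustration of the quantity being mapped here, the TOA direct forcing is the difference between the net downward flux with and without aerosols; negative values mean aerosol cooling. The flux values below are invented for illustration and are not CERES retrievals:

```python
import numpy as np

# Hypothetical TOA net downward shortwave fluxes (W/m2) for three grid
# cells; values are invented, not CERES data.
net_no_aerosol = np.array([240.0, 251.0, 238.5])
net_with_aerosol = np.array([232.5, 248.5, 216.0])

# Direct aerosol radiative forcing: change in net downward flux caused
# by aerosols. Negative values indicate cooling at the TOA.
arf_toa = net_with_aerosol - net_no_aerosol
print(arf_toa)  # all negative here: -7.5, -2.5, -22.5 W/m2
```

The clear-sky versus all-sky contrast reported in the abstract comes from evaluating the same difference with cloud-screened fluxes.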
Procedia PDF Downloads 57
963 Quantitative Polymerase Chain Reaction Analysis of Phytoplankton Composition and Abundance to Assess Eutrophication: A Multi-Year Study in Twelve Large Rivers across the United States
Authors: Chiqian Zhang, Kyle D. McIntosh, Nathan Sienkiewicz, Ian Struewing, Erin A. Stelzer, Jennifer L. Graham, Jingrang Lu
Abstract:
Phytoplankton plays an essential role in freshwater aquatic ecosystems and is the primary group synthesizing organic carbon and providing food sources or energy to ecosystems. Therefore, the identification and quantification of phytoplankton are important for estimating and assessing ecosystem productivity (carbon fixation), water quality, and eutrophication. Microscopy is the current gold standard for identifying and quantifying phytoplankton composition and abundance. However, microscopic analysis of phytoplankton is time-consuming, has a low sample throughput, and requires deep knowledge of and rich experience in microbial morphology to implement. To improve this situation, quantitative polymerase chain reaction (qPCR) was considered for phytoplankton identification and quantification. Using qPCR to assess phytoplankton composition and abundance, however, has not been comprehensively evaluated. This study focused on: 1) conducting a comprehensive performance comparison of qPCR and microscopy techniques in identifying and quantifying phytoplankton and 2) examining the use of qPCR as a tool for assessing eutrophication. Twelve large rivers located throughout the United States were evaluated using data collected from 2017 to 2019 to understand the relationship between qPCR-based phytoplankton abundance and eutrophication. This study revealed that temporal variation of phytoplankton abundance in the twelve rivers was limited within years (from late spring to late fall) and among different years (2017, 2018, and 2019). Midcontinent rivers had moderately greater phytoplankton abundance than eastern and western rivers, presumably because midcontinent rivers were more eutrophic. The study also showed that qPCR- and microscope-determined phytoplankton abundance had a significant positive linear correlation (adjusted R² = 0.772, p-value < 0.001).
In addition, phytoplankton abundance assessed via qPCR showed promise as an indicator of the eutrophication status of those rivers, with oligotrophic rivers having low phytoplankton abundance and eutrophic rivers having (relatively) high phytoplankton abundance. This study demonstrated that qPCR could serve as an alternative tool to traditional microscopy for phytoplankton quantification and eutrophication assessment in freshwater rivers.
Keywords: phytoplankton, eutrophication, river, qPCR, microscopy, spatiotemporal variation
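For readers unfamiliar with how qPCR yields abundance estimates, the sketch below inverts a standard curve to turn a quantification cycle (Cq) into a gene-copy number. The slope and intercept are invented for illustration and are not from this study:

```python
# Hypothetical standard curve: Cq = SLOPE * log10(copies) + INTERCEPT.
# A slope of -3.32 corresponds to ~100% amplification efficiency.
SLOPE = -3.32
INTERCEPT = 38.0

def copies_from_cq(cq: float) -> float:
    """Invert the standard curve to estimate gene-copy number from a Cq value."""
    return 10 ** ((cq - INTERCEPT) / SLOPE)

def amplification_efficiency(slope: float) -> float:
    """Efficiency implied by a standard-curve slope (1.0 = perfect doubling per cycle)."""
    return 10 ** (-1.0 / slope) - 1.0

# Each decrease of ~3.32 cycles corresponds to a tenfold increase in template.
print(copies_from_cq(31.36))            # ~100 copies
print(amplification_efficiency(SLOPE))  # ~1.0 (i.e. ~100% efficiency)
```

Copy numbers obtained this way are what get correlated against microscope counts in a comparison like the one reported above.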
Procedia PDF Downloads 105
962 A Concept in Addressing the Singularity of the Emerging Universe
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times has been studied under what is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment; however, its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity which cannot be explained by modern physics, where the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion; yet highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means of studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed.
This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a “neutral state”, with an energy level referred to as “base energy”, capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture, the origin of the base energy should also be identified. This matter is the subject of the first study in the series, “A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing”, where it is discussed in detail. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy as one of the main building blocks of this universe.
Keywords: big bang, cosmic inflation, birth of universe, energy creation
Procedia PDF Downloads 93
961 Aerosol Direct Radiative Forcing Over the Indian Subcontinent: A Comparative Analysis from the Satellite Observation and Radiative Transfer Model
Authors: Shreya Srivastava, Sagnik Dey
Abstract:
Aerosol direct radiative forcing (ADRF) refers to the alteration of the Earth's energy balance by the scattering and absorption of solar radiation by aerosol particles. India experiences substantial ADRF due to high aerosol loading from various sources. The radiative impact of these aerosols depends on their physical characteristics (such as size, shape, and composition) and atmospheric distribution. Quantifying ADRF is crucial for understanding aerosols’ impact on the regional climate and the Earth's radiative budget. In this study, we have taken radiation data from the Clouds and the Earth’s Radiant Energy System (CERES, spatial resolution = 1° x 1°) for 22 years (2000-2021) over the Indian subcontinent. Except for a few locations, the short-wave ADRF exhibits aerosol cooling at the TOA (values ranging from +2.5 W/m2 to -22.5 W/m2). Cooling due to aerosols is more pronounced in the absence of clouds. Being an aerosol hotspot, the Indo-Gangetic Plain (IGP) shows higher negative ADRF. Aerosol forcing efficiency (AFE) shows a decreasing trend in winter (DJF) over the entire study region and an increasing trend over the IGP and western south India during the post-monsoon season (SON) in clear-sky conditions. Analysing atmospheric heating and AOD trends, we found that the change in atmospheric heating is governed not only by the aerosol loading but also by the aerosol composition and/or vertical profile. We used Multi-angle Imaging SpectroRadiometer (MISR) Level-2 Version 23 aerosol products to look into aerosol composition. MISR incorporates 74 aerosol mixtures in its retrieval algorithm based on size, shape, and absorbing properties. This aerosol mixture information was used for analysing long-term changes in aerosol composition and the dominant aerosol species corresponding to each aerosol forcing value.
Further, the ADRF derived from this method is compared with around 35 studies across India that used a plane-parallel radiative transfer model with inputs taken from OPAC (Optical Properties of Aerosols and Clouds), utilizing only limited aerosol parameter measurements. The result shows a large overestimation of TOA warming by the latter (i.e., the model-based method).
Keywords: aerosol radiative forcing (ARF), aerosol composition, MISR, CERES, SBDART
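Aerosol forcing efficiency, mentioned above, is forcing per unit AOD; one common way to estimate it is as the least-squares slope of ARF against AOD. The AOD/ARF pairs below are invented for illustration, not from the study:

```python
import numpy as np

# Hypothetical collocated AOD and shortwave ARF (W/m2) samples; invented values.
aod = np.array([0.2, 0.35, 0.5, 0.65, 0.8])
arf = np.array([-5.0, -9.0, -12.5, -16.0, -20.5])

# AFE = slope of the ARF-vs-AOD fit, in W/m2 per unit AOD.
# More negative slope = more efficient cooling per unit aerosol loading.
afe, intercept = np.polyfit(aod, arf, 1)
print(round(afe, 1))  # about -25 W/m2 per unit AOD for these samples
```

Tracking this slope by season is one way the DJF and SON trends described above can be quantified.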
Procedia PDF Downloads 60
960 The Role of Demographics and Service Quality in the Adoption and Diffusion of E-Government Services: A Study in India
Authors: Sayantan Khanra, Rojers P. Joseph
Abstract:
Background and Significance: This study analyzes the role of demographic and service quality variables in the adoption and diffusion of e-government services among users in India. The study examines users' perceptions of e-government services and investigates the key variables that are most salient to the Indian populace. Description of the Basic Methodologies: The methodology adopted in this study is hierarchical regression analysis, which helps explore the impact of the demographic variables and the quality dimensions on the willingness to use e-government services in two steps. First, the impact of demographic variables on the willingness to use e-government services is examined. In the second step, quality dimensions are added as inputs to the model to explain variance in excess of the prior contribution of the demographic variables. Present Status: Our study is in the data collection stage, in collaboration with a highly reliable, authentic and adequate source of user data. Assuming that the population of the study comprises all Internet users in India, a large sample of more than 10,000 random respondents is being approached. Data is being collected using an online survey questionnaire. A pilot survey has already been carried out to refine the questionnaire, with inputs from an expert in management information systems and a small group of users of e-government services in India. The first three questions in the survey pertain to the Internet usage pattern of a respondent and probe whether the person has used e-government services. If the respondent confirms that he/she has used e-government services, then an aggregate of 15 indicators is used to measure the quality dimensions under consideration and the willingness of the respondent to use e-government services, on a five-point Likert scale.
If the respondent reports that he/she has not used e-government services, then a few optional questions are asked to understand the reason(s) behind this. The last four questions in the survey collect data related to the demographic variables. An Indication of the Major Findings: Based on an extensive literature review, several propositions were developed and an initial research model is proposed. A major outcome expected at the completion of the study is a research model that helps explain the relationships among the demographic variables, the service quality dimensions, and the willingness to adopt e-government services, particularly in an emerging economy like India. Concluding Statement: Governments of emerging economies and other relevant agencies can use the findings of the study in designing, updating, and promoting e-government services to enhance public participation, which, in turn, would help improve efficiency, convenience, engagement, and transparency in implementing these services.
Keywords: adoption and diffusion of e-government services, demographic variables, hierarchical regression analysis, service quality dimensions
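The two-step procedure described above can be sketched with ordinary least squares: fit the demographics alone, then add the quality dimensions and compare R². All variable names, effect sizes, and data below are simulated assumptions for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated survey responses (invented predictors and effect sizes).
age = rng.normal(35.0, 10.0, n)
education = rng.integers(1, 6, n).astype(float)
reliability = rng.normal(3.5, 0.8, n)        # hypothetical quality dimension
responsiveness = rng.normal(3.2, 0.9, n)     # hypothetical quality dimension
willingness = (0.02 * age + 0.15 * education
               + 0.50 * reliability + 0.30 * responsiveness
               + rng.normal(0.0, 0.5, n))

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an OLS fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Step 1: demographic variables only.
r2_demo = r_squared(np.column_stack([age, education]), willingness)
# Step 2: demographics plus service-quality dimensions.
r2_full = r_squared(np.column_stack([age, education, reliability, responsiveness]),
                    willingness)

# Delta R^2: variance explained by quality beyond the demographics.
print(r2_demo, r2_full, r2_full - r2_demo)
```

In a hierarchical regression, a substantial ΔR² at step 2 is the evidence that the quality dimensions matter beyond demographics.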
Procedia PDF Downloads 272
959 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved
Authors: Michael N. O'Sullivan, Con Sheahan
Abstract:
Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is a huge gap between these tools' academic prevalence and their industry adoption. For the fuzzy front-end, in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of 3 years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows the team to play through an example of a new product development in order to understand the process and the tools, before using it for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools being used, as well as deeper discussions of each of the topics, allowing teams to adapt the process to their skills, preferences and product type.
Teams found the solution very useful and intuitive and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer-requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a loop which helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers’ needs and wants.
Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer
Procedia PDF Downloads 110
958 The Effect of Chlorine Dioxide and High Concentration of CO2 Gas Injection on the Quality and Shelf-Life for Exporting Strawberry 'Maehyang' in Modified Atmosphere Condition
Authors: Hyuk Sung Yoon, In-Lee Choi, Mohammad Zahirul Islam, Jun Pill Baek, Ho-Min Kang
Abstract:
Exports of the strawberry ‘Maehyang’, cultivated in South Korea, to Southeast Asia have been increasing. Quality degradation often occurs in strawberries during the short export period. Botrytis cinerea is known to cause major damage to export strawberries, and the disease develops during shipping and distribution. This study was conducted to determine the sterilizing effect of chlorine dioxide (ClO2) gas and high-concentration CO2 gas injection on ‘Maehyang’ strawberries packaged with oxygen transmission rate (OTR) films. The strawberries were harvested at the 80% color-change stage and packaged with OTR film or perforated film (control). The treatments used MAP with 20,000 cc·m-2·day·atm OTR film and gas injection into the packages. ClO2 and CO2 gases were injected into the OTR film packages as 6 mg/L ClO2, 15% CO2, and their combination. The treated strawberries were stored at 3℃ for 30 days. The fresh weight loss rate was less than 1% in all OTR film packages but more than 15% in the perforated film treatment, which showed severe deterioration of visual quality during storage. The carbon dioxide concentration within the packages reached a maximum of approximately 15% in all treatments except the control until the 21st day, within the tolerated range of maximum CO2 concentration for strawberry under recommended CA or MA conditions, but it increased to almost 50% by the 30th day. The oxygen concentration decreased to approximately 0% in all treatments except the control over 25 days. The ethylene concentration was steady until the 17th day, then quickly increased, and dropped again by the final storage day (30th day); the gas treatments did not differ significantly from one another. Firmness increased in the CO2 (15%) and ClO2 (6 mg/L) + CO2 (15%) treatments during storage. This may reflect the known effect of high CO2 concentrations in reducing decay and cell wall degradation.
Soluble solids decreased in all treatments during storage, likely because sugars were consumed by increased respiration. Titratable acidity was similar in all treatments. The incidence of fungi was 0% in the CO2 (15%) and ClO2 (6 mg/L) + CO2 (15%) treatments, but more than 20% in the perforated film treatment. Consequently, chlorine dioxide (ClO2) and high-concentration CO2 inhibited fungal growth. Because both the fresh weight loss rate and the incidence of fungi were lowest, ClO2 (6 mg/L) + CO2 (15%) proved the most efficient for sterilization. These results suggest that chlorine dioxide (ClO2) and high-concentration CO2 gas injection are effective decontamination techniques for improving the safety of strawberries.
Keywords: chlorine dioxide, high concentration of CO2, modified atmosphere condition, oxygen transmission rate films
Procedia PDF Downloads 341
957 Interplay of Physical Activity, Hypoglycemia, and Psychological Factors: A Longitudinal Analysis in Diabetic Youth
Authors: Georges Jabbour
Abstract:
Background and aims: This two-year follow-up study explores the long-term sustainability of physical activity (PA) levels in young people with type 1 diabetes, focusing on the relationship between PA, hypoglycemia, and behavioral scores. The literature highlights the importance of PA and its health benefits, as well as the barriers to engaging in PA. Studies have shown that individuals with high levels of vigorous physical activity (VPA) have higher fear of hypoglycemia (FOH) scores and more hypoglycemia episodes. Considering that hypoglycemia episodes are a major barrier to physical activity, and many studies have reported a negative association between PA and high FOH scores, it cannot be guaranteed that those experiencing hypoglycemia over a long period will remain active. Building on this, the present work assesses whether high PA levels can be maintained over time despite elevated hypoglycemia risk. The study tracks PA levels at one and two years, correlating them with hypoglycemia instances and FOH scores. Materials and methods: A self-administered questionnaire was completed by 61 youth with T1D, and their PA was assessed. Hypoglycemia episodes, fear of hypoglycemia scores and HbA1c levels were collected. All assessments were carried out at baseline (visit 0: V0), one year (V1) and two years later (V2). We explored the relationships between PA levels, hypoglycemia episodes, and FOH scores at each time point, using multiple linear regression to model the mean outcomes for each exposure of interest. Results: Findings indicate no changes in total moderate-to-vigorous PA (MVPA) and VPA levels across visits, and HbA1c (%) was negatively correlated with the total amount of VPA per day in minutes (β = -0.44, p=0.01; β = -0.37, p=0.04; and β = -0.66, p=0.01 for V0, V1, and V2, respectively).
Our linear regression model showed a significant negative correlation between VPA and FOH across the visits (β = -0.59, p=0.01; β = -0.44, p=0.01; and β = -0.34, p=0.03 for V0, V1, and V2, respectively), and HbA1c (%) was influenced by both the number of hypoglycemic episodes and the FOH score at V2 (β = 0.48, p=0.02 and β = 0.38, p=0.03, respectively). Conclusion: The sustainability of PA levels and HbA1c (%) in young individuals with type 1 diabetes is influenced by various factors, including fear of hypoglycemia. Understanding these complex interactions is essential for developing effective interventions to promote sustained PA levels in this population. Our results underline the necessity of a multi-strategy approach to promoting active lifestyles among diabetic youths, synergizing PA enhancement with vigilant glucose monitoring and effective FOH management.
Keywords: physical activity, hypoglycemia, fear of hypoglycemia, youth
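The β values reported above are standardized regression coefficients; for a single predictor, the standardized slope equals the Pearson correlation of the z-scored variables. The cohort below is simulated only to mirror the direction of the reported VPA-FOH relationship; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 61  # cohort size reported in the abstract

# Simulated data: higher fear-of-hypoglycemia (FOH) scores go with fewer
# daily minutes of vigorous PA (VPA); the effect sizes are invented.
foh = rng.normal(30.0, 8.0, n)
vpa_min = 60.0 - 1.2 * foh + rng.normal(0.0, 8.0, n)

def standardized_beta(x: np.ndarray, y: np.ndarray) -> float:
    """Slope of y on x after z-scoring both; equals Pearson r for one predictor."""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float((zx * zy).mean())

beta = standardized_beta(foh, vpa_min)
print(beta)  # negative, matching the sign of the reported VPA-FOH betas
```

With multiple exposures (e.g. hypoglycemia episodes as well as FOH), the same idea extends to z-scoring all variables before an OLS fit.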
Procedia PDF Downloads 34
956 Innovative Technologies of Management of Personnel Processes in the Public Civil Service
Authors: O. V. Jurieva, O. U. Jurieva, R. H. Yagudin, P. B. Chursin
Abstract:
Recent research on the problems of the public service has clearly formulated the idea of using innovative technologies to manage personnel processes. The authors attempt to analyze the changes in public service organizations and to understand how the situation under study is interpreted by government employees themselves. For this purpose, a sociological research strategy was carried out based on a questionnaire developed from M. Rokeach's value survey and on focus group research. For the research purposes, it was necessary to reach the micro level in order to bring the daily activities of an organization's employees, their life experience and their values into the focus of the analysis. Based on P. Bourdieu's methodology, the authors investigated officials' established patterns of consciousness and behavior (doxa) and also analyzed their tendencies to rethink (change) the settled content of their values (heterodoxy). The distinctive feature of this research is that public servants with different lengths of service took part in the procedure. The data obtained helped to answer the following question: what are the specifics of the doxa of public servants who have worked in the public civil service for more than 7-10 years, and what perception of the values of the civil service do junior experts with no more than 3 years of work experience have? Respondents were drawn from two groups: (1) public servants at the level of main positions in the public civil service of the Republic of Tatarstan; (2) public servants at the level of lower positions in the ministries and departments of the Republic of Tatarstan. To study the doxa, or existing values, of public servants, research was conducted using a questionnaire based on M. Rokeach's system. Two types of values are emphasised, terminal and instrumental, which we unite in the collective concept of doxa.
Doxa is the instrument for studying the established patterns of consciousness and behavior which can either resist changes in the organization or, on the contrary, support their implementation. In the following stage, an attempt was made to deepen our understanding of the essence and specifics of the officials' doxa by means of applied sociological research carried out using the focus group method. The information obtained during the research convinces us that, for a policy of change in public service organizations to succeed, it is necessary to develop special technologies for informing employees about the essence and inevitability of the planned innovations, to involve them in the process of change, to train and develop the younger generation of civil servants, and to take seriously the additional training and retraining of officials.
Keywords: innovative technologies, public service organizations, public servants
Procedia PDF Downloads 277
955 Artificial Habitat Mapping in the Adriatic Sea
Authors: Annalisa Gaetani, Anna Nora Tassetti, Gianna Fabi
Abstract:
Hydroacoustic technology is an efficient tool for studying the marine environment: the most recent advances in artificial habitat mapping involve acoustic systems to investigate fish abundance, distribution, and behavior in specific areas. Along with detailed high-coverage bathymetric mapping of the seabed, the high-frequency Multibeam Echosounder (MBES) offers the potential to detect the fine-scale distribution of fish aggregations, since it can image the seafloor and the water column at the same time. By surveying the distribution of fish schools around artificial structures, MBES makes it possible to evaluate how their presence modifies the natural biological habitat over time in terms of fish attraction and abundance. In recent years, artificial habitat mapping campaigns have been carried out by CNR-ISMAR in the Adriatic Sea: fish assemblages aggregating at offshore gas platforms and artificial reefs have been systematically monitored employing different kinds of methodologies. This work focuses on two case studies: a gas extraction platform standing in 80 m of water in the central Adriatic Sea, 30 miles off the coast of Ancona, and the concrete and steel artificial reef of Senigallia, deployed by CNR-ISMAR about 1.2 miles offshore at a depth of 11.2 m. By relating the MBES data (metric dimensions of fish assemblages, shape, depth, density, etc.) with the results of other methodologies, such as experimental fishing surveys and underwater video cameras, it has been possible to investigate the biological assemblages attracted by the artificial structures, hypothesizing which species populate the investigated area and how they are spatially arranged around the structures.
By processing MBES bathymetric and water column data, 3D virtual scenes of the artificial habitats have been created, providing an intuitive depiction of their state and allowing their change over time to be evaluated in terms of dimensional characteristics and the depth disposition of fish schools. These MBES surveys play a leading part in the multi-year programs carried out by CNR-ISMAR with the aim of assessing potential biological changes linked to human activities.
Keywords: artificial habitat mapping, fish assemblages, hydroacoustic technology, multibeam echosounder
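The water-column detection step described above can be sketched as a simple threshold test on a ping-versus-depth backscatter matrix, with seabed cells masked out. This is an illustrative sketch only, not the CNR-ISMAR processing chain; the array layout, threshold value, and seabed mask are assumptions.

```python
import numpy as np

def detect_fish_school(amplitude_db, bottom_bin, threshold_db=-50.0):
    """Flag water-column cells whose backscatter exceeds a threshold.

    amplitude_db : 2D array (pings x depth bins) of backscatter in dB
    bottom_bin   : 1D array giving the seabed bin for each ping; cells at
                   or below the seabed are excluded from the search.
    Returns a boolean mask of candidate fish-school cells.
    """
    n_pings, n_bins = amplitude_db.shape
    bins = np.arange(n_bins)[np.newaxis, :]          # depth-bin index grid
    water_column = bins < bottom_bin[:, np.newaxis]  # True above the seabed
    return (amplitude_db > threshold_db) & water_column

# Synthetic example: quiet water column (-80 dB) with a bright patch (-40 dB)
echo = np.full((5, 100), -80.0)
echo[1:4, 20:30] = -40.0          # simulated fish school
bottom = np.full(5, 90)           # seabed at bin 90 in every ping
mask = detect_fish_school(echo, bottom)
```

From such a mask, the metric dimensions, depth, and density of an aggregation can then be estimated, which is the kind of information the abstract relates to fishing surveys and video observations.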
Procedia PDF Downloads 261
954 The Role and Tasks of a Social Worker in the Care of a Terminally Ill Child with Regard to the Malopolska Hospice for Children
Authors: Ewelina Zdebska
Abstract:
A social worker is an integral part of the interdisciplinary team working with a terminally ill child and his family. Social support is an integral part of the medical care provided by a hospice. It is the basis and prerequisite of full treatment and good care of the child as a patient, whose illness often arrives at a period of life when his personal and legal issues are not yet settled, while the family, burdened with the problem, requires the care and support of specialists. The Hospice for Children in Krakow, a palliative care team operating in Krakow and the Malopolska province, provides specialized care for terminally ill children at their place of residence from the moment parents and doctors decide to end treatment in hospital. It enables parents to carry out medical care at home, provides them with social and legal assistance, and offers care, psychological support, and friendship to families throughout the child's illness and after his death, for as long as it is needed. The social worker in a hospice does not bear the burden of solving social problems, which is the responsibility of other authorities, but provides the support that is possible and necessary at a given moment. The most common form of assistance is providing information on the benefits to which the child and his family may be entitled during treatment and the fight for the child's life and health. The social worker assists in preparing and completing documents, such as requests to increase the degree of disability because of progressive disease, or applications for a care allowance because of the inability to live independently. He works to settle all such issues with the Department of Social Security as well as with the municipal and district disability affairs teams, seeking multi-faceted help and support for the child.
Contacts with the Centres for Social Welfare also often concern the organization of additional respite care for the sick child at home, especially when the other members of the family work or when the family cannot cope with the care and needs extra help. The Hospice for Children in Krakow is completing construction of Poland's first Respite Care Centre for chronically and terminally ill children; it will be an open house where children suffering from chronic and incurable diseases and their families can get professional help whenever they need it. The social worker thus takes up a very important role in caring for a terminally ill child: his presence gives the little patient and the family the opportunity to be together at this difficult time while assistance and support are organized.
Keywords: social worker, care, terminal care, hospice
Procedia PDF Downloads 253
953 Microfluidic Plasmonic Bio-Sensing of Exosomes by Using a Gold Nano-Island Platform
Authors: Srinivas Bathini, Duraichelvan Raju, Simona Badilescu, Muthukumaran Packirisamy
Abstract:
A bio-sensing method, based on the plasmonic property of gold nano-islands, has been developed for the detection of exosomes in a clinical setting. The position of the gold plasmon band in the UV-Visible spectrum depends on the size and shape of the gold nanoparticles as well as on the surrounding environment. When various chemical entities are adsorbed or bound, the gold plasmon band shifts toward longer wavelengths, and the shift is proportional to the concentration. Exosomes transport cargoes of molecules and genetic materials to proximal and distal cells. Presently, the standard method for their isolation and quantification from body fluids is ultracentrifugation, which is not practical in a clinical setting. Thus, a versatile and cutting-edge platform is required to selectively detect and isolate exosomes for further analysis at the clinical level. Instead of antibodies, the new sensing protocol makes use of a specially synthesized polypeptide (Vn96) to capture and quantify exosomes from different media by binding the heat shock proteins of exosomes. The protocol was established and optimized on a glass substrate in order to facilitate the next stage, namely the transfer of the protocol to a microfluidic environment. After each step of the protocol, the UV-Vis spectrum was recorded and the position of the gold Localized Surface Plasmon Resonance (LSPR) band was measured. The sensing process was modelled, taking into account the characteristics of the nano-island structure, prepared by thermal convection and annealing. The optimal molar ratios of the most important chemical entities involved in the detection of exosomes were calculated as well. Indeed, it was found that the results of the sensing process depend on two major steps: the molar ratio of streptavidin to biotin-PEG-Vn96 and, in the final step, the capture of exosomes by the biotin-PEG-Vn96 complex.
The microfluidic device designed for the sensing of exosomes consists of a glass substrate sealed by a PDMS layer that contains the channel and a collecting chamber. In the device, the solutions of linker, cross-linker, etc., are pumped over the gold nano-islands, and an Ocean Optics spectrometer is used to measure the position of the Au plasmon band at each step of the sensing. The experiments have shown that the shift of the Au LSPR band is proportional to the concentration of exosomes, and thereby exosomes can be accurately quantified. An important advantage of the method is the ability to discriminate between exosomes having different origins.
Keywords: exosomes, gold nano-islands, microfluidics, plasmonic biosensing
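The quantification principle described above, an LSPR band red-shift proportional to exosome concentration, amounts to a linear calibration that can be inverted for unknown samples. The sketch below is illustrative only: all concentrations and peak positions are hypothetical values, not measurements from this study.

```python
import numpy as np

# Hypothetical calibration data: LSPR band position (nm) measured after
# exposing the Vn96-functionalized nano-islands to known exosome
# concentrations (particles/mL). Values are invented for illustration.
conc = np.array([0.0, 1e8, 2e8, 4e8, 8e8])
peak_nm = np.array([540.0, 540.8, 541.6, 543.2, 546.4])

# The red-shift relative to the bare sensor is modeled as linear in
# concentration, as the abstract describes.
shift = peak_nm - peak_nm[0]
slope, intercept = np.polyfit(conc, shift, 1)

def quantify(measured_peak_nm):
    """Invert the calibration line to estimate exosome concentration."""
    return (measured_peak_nm - peak_nm[0] - intercept) / slope

estimate = quantify(542.4)  # a 2.4 nm red-shift
```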
Procedia PDF Downloads 176
952 Integrated Management System Applied in Dismantling and Waste Management of the Primary Cooling System from the VVR-S Nuclear Reactor Magurele, Bucharest
Authors: Radu Deju, Carmen Mustata
Abstract:
The VVR-S nuclear research reactor owned by the Horia Hulubei National Institute of Physics and Nuclear Engineering (IFIN-HH) was designed for research and radioisotope production and was permanently shut down in 2002, after 40 years of operation. All of the spent nuclear fuel of S-36 and EK-10 type was returned to the Russian Federation (the first batch in 2009 and the last in 2012), and the radioactive waste resulting from its reprocessing will remain permanently in the Russian Federation. The decommissioning strategy chosen is immediate dismantling. At this moment, radionuclides with half-lives shorter than 1 year make only a minor contribution to the contamination of the materials and equipment used in the reactor department. Decommissioning of the reactor started in 2010 and is planned to be finalized in 2020; it is the first nuclear research reactor in South-East Europe for which a decommissioning project has been started. The management system applied in the decommissioning of the VVR-S research reactor integrates all common elements of management: nuclear safety, occupational health and safety, environment, quality (compliance with the requirements for decommissioning activities), physical protection, and economic elements. This paper presents the application of the integrated management system in the decommissioning of systems, structures, equipment and components (SSEC) from the pumps room, including the management of the resulting radioactive waste. The primary cooling system of this type of reactor includes circulation pumps, heat exchangers, a degasser, filter ion exchangers, piping connections, the drainage system, and radioactive leaks. All decommissioning activities for the primary circuit were performed in stage 2 (2014), and they were carried out and recorded according to the applicable documents, within the requirements of the Regulatory Body licenses.
The presentation emphasizes how the integrated management system provisions are applied in the dismantling of the primary cooling system: the elaboration, approval, and application of the necessary documentation, and record keeping before, during, and after the dismantling activities. Radiation protection and economics are the key factors in the selection of the proper technology. Dedicated and advanced technologies were chosen to perform specific tasks. Safety aspects were taken into consideration. Resource constraints were also an important issue considered in defining the decommissioning strategy. Important aspects such as radiological monitoring of personnel and areas, decontamination, waste management, and final characterization of the released site are demonstrated and documented.
Keywords: decommissioning, integrated management system, nuclear reactor, waste management
Procedia PDF Downloads 292
951 Multi-Residue Analysis (GC-ECD) of Some Organochlorine Pesticides in Commercial Broiler Meat Marketed in Shivamogga City, Karnataka State, India
Authors: L. V. Lokesha, Jagadeesh S. Sanganal, Yogesh S. Gowda, Shekhar, N. B. Shridhar, N. Prakash, Prashantkumar Waghe, H. D. Narayanaswamy, Girish V. Kumar
Abstract:
Organochlorine (OC) insecticides are among the most important organotoxins and make up a large group of pesticides. The physicochemical properties of these toxins, especially their lipophilicity, facilitate their absorption and storage in meat, thus posing a public health threat to humans. The presence of these toxins in broiler meat can serve as a quantitative and qualitative index of their presence in animal bodies. Wastewater used for irrigation after crop spraying, animal feeds contaminated with pesticides, and polluted air are the potential sources of residues in animal products. Fifty broiler meat samples were collected from different retail outlets of Bengaluru city, Karnataka state, under ice-cold conditions and later stored at -20°C until analysis. All samples were screened and quantified for OC pesticides on a gas chromatograph with an electron capture detector (GC-ECD, Varian make), viz.: Alachlor, Aldrin, Alpha-BHC, Beta-BHC, Dieldrin, Delta-BHC, o,p-DDE, p,p-DDE, o,p-DDD, p,p-DDD, o,p-DDT, p,p-DDT, Endosulfan-I, Endosulfan-II, Endosulfan Sulphate and Lindane (all standards were procured from Merck). Extraction was undertaken by blending fifty grams (g) of the meat sample with 50 g of anhydrous sodium sulphate, 120 ml of n-hexane, and 120 ml of acetone for 15 min; the extract was washed with distilled water and dried over anhydrous sodium sulphate; partitioning was done with 25 ml of petroleum ether, 10 ml of acetonitrile, and 15 ml of n-hexane, shaken vigorously for two minutes; and sample clean-up was performed on a Florisil column. The reconstituted samples (in n-hexane, Merck) were injected into the GC-ECD. The present study reveals that, among the fifty chicken samples analyzed, 30% (15/50), 16% (8/50), 14% (7/50), 10% (5/50) and 8% (4/50) of samples were contaminated with DDTs, Delta-BHC, Dieldrin, Aldrin, and Alachlor, respectively.
DDT metabolites and Delta-BHC were the most frequently detected OC pesticides. The detected levels of the pesticides were below the MRLs (according to the Export Council of India notification for fresh poultry meat).
Keywords: accuracy, gas chromatography, meat, pesticide, petroleum ether
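As a small worked check of the detection frequencies above, the per-residue counts quoted in the abstract (positives out of 50 samples) convert directly to percentages. The counts come from the abstract; the function name is illustrative.

```python
# Detection counts reported in the abstract (positive samples out of 50).
detections = {
    "DDTs": 15,
    "Delta-BHC": 8,
    "Dieldrin": 7,
    "Aldrin": 5,
    "Alachlor": 4,
}
N_SAMPLES = 50

def detection_frequency(counts, n):
    """Percentage of samples in which each residue was detected."""
    return {name: 100.0 * k / n for name, k in counts.items()}

freq = detection_frequency(detections, N_SAMPLES)
```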
Procedia PDF Downloads 330
950 Determinants of Probability Weighting and Probability Neglect: An Experimental Study of the Role of Emotions, Risk Perception, and Personality in Flood Insurance Demand
Authors: Peter J. Robinson, W. J. Wouter Botzen
Abstract:
Individuals often over-weight low probabilities and under-weight moderate to high probabilities; however, very low probabilities are either significantly over-weighted or neglected altogether. Little is known about the factors affecting probability weighting in Prospect Theory related to emotions specific to risk (anticipatory and anticipated emotions), the threshold of concern, and personality traits like locus of control. This study provides these insights by examining factors that influence probability weighting in the context of flood insurance demand in an economic experiment. In particular, we focus on the determinants of flood probability neglect to provide recommendations for improved risk management. In addition, results obtained using real incentives and no performance-based payments are compared in the experiment with high experimental outcomes. Based on data collected from 1,041 Dutch homeowners, we find that flood probability neglect is related to anticipated regret, worry, and the threshold of concern. Moreover, locus of control and regret affect probabilistic pessimism. Nevertheless, we do not observe strong evidence that incentives influence flood probability neglect or probability weighting. The results show that low, moderate, and high flood probabilities are under-weighted, which is related to framing in the flooding context and the degree of realism respondents attach to high-probability property damages. We suggest several policies to overcome the psychological factors related to under-weighting flood probabilities and so improve flood preparations. These include policies that promote better risk communication to enhance insurance decisions for individuals with a high threshold of concern, and education and information provision to change the behaviour of internal-locus-of-control types as well as people who see insurance as an investment.
Multi-year flood insurance may also prevent the short-sighted behaviour of people who have a tendency to regret paying for insurance. Moreover, bundling low-probability/high-impact risks with more immediate risks may achieve an overall covered risk that is less likely to be judged as falling below thresholds of concern. These measures could aid the development of a flood insurance market in the Netherlands, for which we find there to be demand.
Keywords: flood insurance demand, prospect theory, risk perceptions, risk preferences
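The over-weighting of low probabilities and under-weighting of moderate-to-high probabilities discussed above is commonly modeled with the Tversky-Kahneman (1992) probability weighting function. The sketch below illustrates that shape; the parameter value is the often-cited estimate from their paper, not an estimate from this study, and probability neglect would correspond to treating w(p) as zero below a threshold of concern.

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function.

    w(p) = p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma)
    With gamma < 1, small probabilities are over-weighted and moderate
    to large probabilities are under-weighted.
    """
    num = p ** gamma
    den = (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
    return num / den

# A 1-in-100 flood probability is over-weighted relative to its true value,
w_low = tk_weight(0.01)
# while a moderate probability is under-weighted.
w_mid = tk_weight(0.5)
```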
Procedia PDF Downloads 278
949 Airborne CO₂ Lidar Measurements for Atmospheric Carbon and Transport: America (ACT-America) Project and Active Sensing of CO₂ Emissions over Nights, Days, and Seasons 2017-2018 Field Campaigns
Authors: Joel F. Campbell, Bing Lin, Michael Obland, Susan Kooi, Tai-Fang Fan, Byron Meadows, Edward Browell, Wayne Erxleben, Doug McGregor, Jeremy Dobler, Sandip Pal, Christopher O'Dell, Ken Davis
Abstract:
The Active Sensing of CO₂ Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) is a NASA Langley Research Center instrument funded by NASA's Science Mission Directorate that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO₂) mixing ratios in support of the NASA ASCENDS mission. The ACES instrument, an Intensity-Modulated Continuous-Wave (IM-CW) lidar, was designed for high-altitude aircraft operations and can be directly applied to space instrumentation to meet the ASCENDS mission requirements. The ACES design demonstrates advanced technologies critical for developing an airborne simulator and spaceborne instrument with reduced platform size, mass, and power consumption, and with improved performance. The Atmospheric Carbon and Transport - America (ACT-America) is an Earth Venture Suborbital-2 (EVS-2) mission sponsored by the Earth Science Division of NASA's Science Mission Directorate. A major objective is to enhance knowledge of the sources/sinks and transport of atmospheric CO₂ through the application of remote and in situ airborne measurements of CO₂ and other atmospheric properties on various spatial and temporal scales. ACT-America consists of five campaigns to measure regional carbon and evaluate transport under various meteorological conditions in three regions of the Continental United States. Regional CO₂ distributions of the lower atmosphere were observed from the C-130 aircraft by the Harris Corp. Multi-Frequency Fiber Laser Lidar (MFLL) and the ACES lidar. The airborne lidars provide unique data that complement the more traditional in situ sensors. This presentation shows the applications of CO₂ lidars in support of these science needs.
Keywords: CO₂ measurement, IM-CW, CW lidar, laser spectroscopy
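The IM-CW principle mentioned above can be sketched in miniature: the transmitted laser intensity carries a swept (chirped) modulation, the surface return is a delayed copy of that waveform, and cross-correlating the two recovers the round-trip delay, hence the path length used in column retrievals. All waveform parameters below are illustrative, not ACES or MFLL settings, and the return is idealized as noise-free.

```python
import numpy as np

# Illustrative IM-CW ranging sketch (not the ACES processing chain).
fs = 1.0e6                       # sample rate (Hz)
t = np.arange(0, 0.002, 1 / fs)  # 2 ms record
f0, f1 = 50e3, 150e3             # swept modulation band (Hz)
sweep_rate = (f1 - f0) / t[-1]
tx = np.cos(2 * np.pi * (f0 * t + 0.5 * sweep_rate * t ** 2))

delay_samples = 250              # true round-trip delay, in samples
rx = np.roll(tx, delay_samples)  # idealized, noise-free surface return

# Cross-correlate transmit and receive waveforms and locate the peak lag.
corr = np.correlate(rx, tx, mode="full")
lag = np.argmax(corr) - (len(tx) - 1)

c = 3.0e8                        # speed of light (m/s)
range_m = 0.5 * c * lag / fs     # one-way range from round-trip delay
```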
Procedia PDF Downloads 166
948 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models
Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri
Abstract:
Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on the airport's appeal, which includes the routes between the city and the airport as well as the facilities used to reach it. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can serve a destination like an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, age above 55, and other factors proved essential for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) markedly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations generating the most trips to the airport were identified. The district with the highest trip generation in Tehran was District 2, with 23 trips, while the most popular mode of transportation was the online taxi, with 12 trips from that location. Then, the variables significant in distinguishing between the travel modes used to access the airport were investigated for all systems. In this scenario, the most crucial factor is the time it takes to get to the airport, followed by the mode's user-friendliness as a component of passenger preference.
It has also been demonstrated that improving public transportation travel times reduces private transportation's market share, including that of taxis. Based on the responses of personal and semi-public vehicle users, passengers' willingness to reach the airport via public transportation was explored in order to enhance present services and develop new strategies for providing the most efficient modes of transportation. Using the binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change modes.
Keywords: multimodal transportation, demand modeling, travel behavior, statistical models
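The binary model referred to above is a binary logit, in which a utility function of trip attributes maps to a choice probability through the logistic transform. The sketch below uses invented coefficients and variable names; none of the values are estimates from the 356-questionnaire survey.

```python
import math

# Illustrative binary logit: probability that a traveler chooses public
# transit over a private car to reach the airport. Coefficients are
# assumptions for illustration, not estimated from the survey.
BETA = {
    "intercept": 0.2,
    "access_time_min": -0.05,   # longer access time discourages transit
    "extra_luggage": -0.8,      # more than one suitcase per person
    "business_trip": -0.4,      # business travelers favor car/taxi
}

def p_transit(access_time_min, extra_luggage, business_trip):
    """Binary logit choice probability: P(transit) = 1 / (1 + e^-V)."""
    v = (BETA["intercept"]
         + BETA["access_time_min"] * access_time_min
         + BETA["extra_luggage"] * extra_luggage
         + BETA["business_trip"] * business_trip)
    return 1.0 / (1.0 + math.exp(-v))

p_fast = p_transit(20, 0, 0)   # quick access, light luggage, non-business
p_slow = p_transit(60, 1, 1)   # slow access, extra luggage, business
```

In practice such coefficients would be estimated by maximum likelihood from the survey responses; the signs here simply mirror the qualitative findings in the abstract.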
Procedia PDF Downloads 179
947 Impact of Sensory Marketing on Consumer Consumption Behaviour in the Hotel Spa Industry
Authors: Li (Claudia) Chen
Abstract:
With the rapid development of the global economy, customer health consciousness has grown increasingly prevalent over the last decade. Consumers give more consideration to healthy lifestyles and wellness routines in their daily lives, and they are likewise inclined to invest disposable income in enhancing their health, wellness, beauty, and social identity. Nowadays, visiting spas has become a popular activity; millennials, in particular, are increasingly prone to visiting spas. Spas have become major venues for relaxation, rejuvenation, revitalization, and enjoyment, offering various types of services through hotel and resort spas, destination spas, mineral and thermal spring spas, medical spas, and so forth. The hotel and resort spa segment has become the most popular, accounting for the largest number of spas and the greatest revenue over the last five years, and has now surpassed day/salon spas as the industry revenue leader. In the hotel and resort spa industry, sensory experience plays a vital role in the customer journey, and it encompasses every aspect of the senses that can affect the overall experience. Consumers use the senses (sight, sound, touch, smell, and taste) to gather the information that contributes to the establishment of an experience, and all the senses interacting together form the foundation of sensory experiences. Sensory marketing, as a marketing strategy, engages consumers' senses and affects their behaviour, yet consumers are often unaware of the way the senses interact with their day-to-day experiences. Indeed, it is important to understand consumer sensory experience in terms of how it influences consumer consumption behaviour. The aim of this paper is to evaluate the sensory experiences of consumers and the ways that sensory experiences shape consumer behaviour in the hotel and resort spa industry.
This paper uses in-depth interviews, focus groups, and participant observation to collect data from different stakeholders. The findings reveal that multisensory experiences play a vital role in consumer spa experiences and are highly influential in consumer perception, cognition, and behaviour. Moreover, the findings demonstrate that sensory stimuli can have positive or negative effects on the consumer experience in the hotel spa industry. Ultimately, the findings also offer additional insight to managers on sensory marketing strategies to stimulate brand experiences that can establish customer loyalty.
Keywords: sensory marketing, senses, consumer behaviour, multi-sensory marketing, hotel and resorts spa industry, qualitative research
Procedia PDF Downloads 83
946 GIS Technology for Environmentally Polluted Sites with Innovative Process to Improve the Quality and Assess the Environmental Impact Assessment (EIA)
Authors: Hamad Almebayedh, Chuxia Lin, Yu wang
Abstract:
The environmental impact assessment (EIA) must be improved, assessed, and quality-checked for human and environmental health and safety. Soil contamination is expanding, and site and soil remediation activities are proceeding around the world; put simply, quality soil characterization leads to a quality EIA, one that illuminates the level and extent of contamination and reveals the unknowns needed to move forward with remediating, quantifying, containing, minimizing, and eliminating the environmental damage. Spatial interpolation methods play a significant role in decision making, planning remediation strategies, environmental management, and risk assessment, as they provide essential elements of site characterization, which need to be fed into the EIA. The innovative 3D soil mapping and soil characterization technology presented in this paper reveals unknown information on the extent of contaminated soil at specific sites and enhances soil characterization in general, which is reflected in improved information for developing the EIAs of those sites. The foremost aims of this paper are to present a novel 3D mapping technology to characterize and estimate, in a quality- and cost-effective way, the distribution of key soil characteristics in contaminated sites, and to develop an innovative process/procedure of "assessment measures" for EIA quality and assessment. The contaminated site and field investigation was conducted with the innovative 3D mapping technology to characterize the composition of petroleum-hydrocarbon-contaminated soils in a decommissioned oilfield waste pit in Kuwait. The results show the depth and extent of the contamination, which have been entered into a developed assessment process and procedure for the EIA quality review checklist to enhance the EIA and drive remediation and risk assessment strategies.
We conclude that, to minimize possible adverse environmental impacts at the investigated site in Kuwait, a soil-capping approach may be sufficient and may represent a cost-effective management option, as the environmental risk from the contaminated soils is considered relatively low. This paper adopts a multi-method approach involving a review of the existing literature in the research area, case studies, and computer simulation.
Keywords: quality EIA, spatial interpolation, soil characterization, contaminated site
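Among the spatial interpolation methods mentioned above, inverse distance weighting (IDW) is one of the simplest and makes a compact illustration of how point measurements become a continuous characterization surface; kriging would add a variogram model on top of the same idea. All coordinates and concentrations below are hypothetical.

```python
import numpy as np

def idw(sample_xy, sample_values, query_xy, power=2.0):
    """Inverse-distance-weighted estimate at a query location.

    sample_xy     : (n, 2) array of sampled locations
    sample_values : (n,) array of measured values at those locations
    query_xy      : (2,) location at which to interpolate
    """
    d = np.linalg.norm(sample_xy - query_xy, axis=1)
    if np.any(d == 0):                      # query falls on a sampled point
        return sample_values[np.argmin(d)]
    w = 1.0 / d ** power
    return float(np.sum(w * sample_values) / np.sum(w))

# Hypothetical TPH measurements (mg/kg) at four borehole locations (m).
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
tph = np.array([1200.0, 300.0, 800.0, 150.0])
estimate = idw(xy, tph, np.array([5.0, 5.0]))
```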
Procedia PDF Downloads 90
945 A Unified Approach for Digital Forensics Analysis
Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles
Abstract:
Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the digital footprints it leaves, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT), databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and the investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tooling. Increasingly, it also becomes essential to compare and correlate evidence across data sources, and to do so in an efficient and effective manner that enables an investigator to answer high-level questions of the data in a timely fashion without having to trawl through it and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to investigate cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used.
For example, in a child abduction case, an investigation team might have evidence from a range of sources, including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender Registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.
Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool
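The cross-source correlation that U-FAT aims to automate can be sketched as two steps: normalize events from heterogeneous sources into a common schema, then merge them into one timeline and query it by entity. The schema and field names below are assumptions for illustration, not the U-FAT design.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    """Minimal common schema for an item of evidence from any source."""
    timestamp: datetime
    source: str      # e.g. "mobile", "cctv", "isp"
    entity: str      # e.g. a phone number, plate, or account identifier
    detail: str

def unified_timeline(*sources):
    """Merge per-source event lists into one chronologically sorted list."""
    merged = [e for events in sources for e in events]
    return sorted(merged, key=lambda e: e.timestamp)

def events_for(timeline, entity):
    """Correlate: pull every event mentioning one entity across sources."""
    return [e for e in timeline if e.entity == entity]

utc = timezone.utc
phone = [Event(datetime(2023, 5, 1, 14, 5, tzinfo=utc),
               "mobile", "+15550001", "call placed")]
cctv = [Event(datetime(2023, 5, 1, 14, 2, tzinfo=utc),
              "cctv", "+15550001", "subject on camera 7")]
timeline = unified_timeline(phone, cctv)
```

A real implementation would of course also need per-source parsers and entity resolution (the same person appearing under different identifiers), which is where most of the difficulty lies.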
Procedia PDF Downloads 201
944 Differences in Patient Satisfaction Observed between Female Japanese Breast Cancer Patients Who Receive Breast-Conserving Surgery or Total Mastectomy
Authors: Keiko Yamauchi, Motoyuki Nakao, Yoko Ishihara
Abstract:
The increasing number of women with breast cancer in Japan has required hospitals to provide higher-quality medicine so that patients are satisfied with the treatment they receive. However, patient satisfaction following breast cancer treatment has not been sufficiently studied. Hence, we investigated the factors influencing patient satisfaction following breast cancer treatment among Japanese women who underwent either breast-conserving surgery (BCS) (n = 380) or total mastectomy (TM) (n = 247). In March 2016, we conducted a cross-sectional internet survey of Japanese women with breast cancer. We assessed the following factors: socioeconomic status, cancer-related information, the role played in medical decision-making, the degree of satisfaction with the treatments received, and regret arising from the medical decision-making process. We performed logistic regression analyses with the following dependent variables: extreme satisfaction with the treatments received, and regret regarding the medical decision-making process. For both types of surgery, the odds ratio (OR) of being extremely satisfied with the cancer treatment was significantly higher among patients who had no regrets compared with patients who had regrets. The OR also tended to be higher among patients who played the role they wanted in the medical decision-making process, compared with patients who did not. In the BCS group, the OR of being extremely satisfied with the treatment was higher if, at diagnosis, the patient's youngest child was older than 19 years, compared with patients with no children. The OR was also higher if patients considered the stage and characteristics of their cancer significant. The OR of being extremely satisfied with the treatments was lower among patients who were not employed on a full-time basis and among patients who considered second medical opinions and medical expenses to be significant.
These associations were not observed in the TM group. The OR of having regrets regarding the medical decision-making process was higher among patients who could not play the role they preferred in the decision-making process, and was also higher among patients employed on either a part-time or contractual basis. For both types of surgery, the OR was higher among patients who considered a second medical opinion to be significant. Regardless of surgical type, regret regarding the medical decision-making process decreased treatment satisfaction. Patients who received breast-conserving surgery were more likely to have regrets concerning the medical decision-making process if they could not play a role in the process as they preferred. In addition, factors associated with satisfaction with treatment in the BCS group but not the TM group included the second medical opinion, medical expenses, employment status, and the age of the youngest child at diagnosis.
Keywords: medical decision making, breast-conserving surgery, total mastectomy, Japanese
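The odds ratios reported above come from logistic regression, where each OR is the exponential of the corresponding fitted coefficient. The sketch below shows that conversion with invented coefficients; none of the values are the study's estimates.

```python
import math

# Hypothetical logistic regression coefficients for the outcome
# "extremely satisfied with treatment". Values invented for illustration.
coefs = {
    "no_regret": 1.1,        # no regret about the decision-making process
    "preferred_role": 0.4,   # played the wanted decision-making role
    "part_time_work": -0.5,  # employed part-time rather than full-time
}

# OR = exp(beta): OR > 1 raises the odds of the outcome, OR < 1 lowers them.
odds_ratios = {name: math.exp(b) for name, b in coefs.items()}
```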
Procedia PDF Downloads 151
943 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested for various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets such as MNIST and Fashion-MNIST have been used in recent research. In a binary classification comparison, MNIST yielded higher accuracy than the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been applied to colored images to determine how much better they are than classical models; only a few models, such as quantum variational circuits, accept colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to greyscale, and 28 × 28-pixel images (50,000 for training and 10,000 for testing) were used. 
The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. The results show that the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may increase the accuracy further. This study has demonstrated that quantum machine and deep learning models can be superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.
Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
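The rotation-encoding step described above can be sketched classically. A common choice for this kind of pipeline is angle encoding, where each greyscale pixel sets a single-qubit RY rotation angle and the measured Pauli-Z expectation value becomes the extracted feature; this is an assumption for illustration, not necessarily the authors' exact circuit:

```python
import numpy as np

def angle_encode_expectations(pixels):
    """Encode normalised greyscale pixels as RY rotation angles applied to |0>,
    then return the <Z> expectation value per qubit.
    RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>, so <Z> = cos(theta)."""
    thetas = np.pi * np.asarray(pixels, dtype=float)   # one rotation per pixel
    return np.cos(thetas)

# A 2x2 patch of a greyscale image, pixel values scaled to [0, 1]
patch = [0.0, 0.5, 1.0, 0.25]
features = angle_encode_expectations(patch)
print(features)   # dark pixels map near +1, bright pixels near -1
```

In the hybrid setup of the abstract, these expectation values would be computed on the quantum simulator and the resulting feature vector passed back to the classical side for classification.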
Procedia PDF Downloads 134
942 High Throughput LC-MS/MS Studies on Sperm Proteome of Malnad Gidda (Bos indicus) Cattle
Authors: Kerekoppa Puttaiah Bhatta Ramesha, Uday Kannegundla, Praseeda Mol, Lathika Gopalakrishnan, Jagish Kour Reen, Gourav Dey, Manish Kumar, Sakthivel Jeyakumar, Arumugam Kumaresan, Kiran Kumar M., Thottethodi Subrahmanya Keshava Prasad
Abstract:
Spermatozoa are highly specialized, transcriptionally and translationally inactive haploid male gametes. An understanding of the sperm proteome is indispensable to explore the mechanisms of sperm motility and fertility. Though there are a large number of human sperm proteomic studies, in-depth proteomic information on Bos indicus spermatozoa is not yet well established. Therefore, we profiled the sperm proteome of the indigenous cattle breed Malnad Gidda (Bos indicus) using high-resolution mass spectrometry. In the current study, two semen ejaculates from 3 breeding bulls were collected using the artificial vagina method. Spermatozoa were isolated using 45% Percoll purification. Protein was extracted using a lysis buffer containing 2% sodium dodecyl sulphate (SDS), and the protein concentration was estimated. Fifty micrograms of protein from each individual were pooled for further downstream processing. The pooled sample was fractionated using SDS-polyacrylamide gel electrophoresis, followed by in-gel digestion. The peptides were subjected to C18 StageTip clean-up and analyzed on an Orbitrap Fusion Tribrid mass spectrometer interfaced with a Proxeon Easy-nLC II system (Thermo Scientific, Bremen, Germany). We identified a total of 6773 peptides with 28426 peptide spectrum matches, which belonged to 1081 proteins. Gene Ontology analysis was carried out to determine the biological processes, molecular functions and cellular components associated with the sperm proteins. The biological processes chiefly represented in our data were the oxidation-reduction process (5%), spermatogenesis (2.5%) and spermatid development (1.4%). The highlighted molecular functions were ATP and GTP binding (14%), and the prominent cellular components most observed in our data were the nuclear membrane (1.5%), acrosomal vesicle (1.4%), and motile cilium (1.3%). Seventeen percent of the sperm proteins identified in this study were involved in metabolic pathways. 
To the best of our knowledge, this data represents the first total sperm proteome from indigenous cattle, Malnad Gidda. We believe that our preliminary findings could provide a strong base for the future understanding of bovine sperm proteomics.
Keywords: Bos indicus, Malnad Gidda, mass spectrometry, spermatozoa
Procedia PDF Downloads 198
941 A Conceptual Model of the Factors Affecting Saudi Citizens' Use of Social Media to Communicate with the Government
Authors: Reemiah Alotaibi, Muthu Ramachandran, Ah-Lian Kor, Amin Hosseinian-Far
Abstract:
In the past decade, developers of Web 2.0 technologies have shown increasing interest in the topic of e-government. There has been rapid growth in social media technology because of its significant role in supporting essential social needs. Its importance and power are derived from its capacity to support two-way communication. Governments are keen to engage with these websites, hoping to benefit from the new forms of communication and interaction offered by such technology. Greater participation by the public can be viewed as a chief indicator of effective government communication. Yet, the level of public participation in government 2.0 is not satisfactory. In general, it is still at an early stage in most developing countries, including Saudi Arabia. Although Saudi people are among the most active users of social media, the number of people who use social media to communicate with public institutions is not high. Furthermore, most governmental organisations are not using social media tools to communicate with the public; they use these platforms merely to disseminate information. Our study focuses on the factors affecting citizens’ adoption of social media in Saudi Arabia. Our research question is: what are the factors affecting Saudi citizens’ use of social media to communicate with the government? To answer this research question, the research aims to validate the UTAUT model for examining social media tools from the citizen perspective. An amendment will be proposed to fit the adoption of social media platforms as a communication channel in government, using a conceptual model that integrates constructs from the UTAUT model with external variables drawn from the literature review. The set of potential factors that affect citizens' decisions to adopt social media to communicate with their government has been identified as perceived encouragement, trust and cultural influence. 
The connections between the above-mentioned constructs form the basis for the research hypotheses, which will be examined in the light of a quantitative methodology. Data collection will be performed through a survey targeting a number of Saudi citizens who are social media users. The data collected from the survey will then be analysed using statistical methods. The outcomes of this research project are argued to have potential contributions to the fields of social media and e-government adoption, on both the theoretical and practical levels. It is believed that this research project is the first of its type to attempt to identify the factors that affect citizens’ adoption of social media to communicate with the government. The importance of identifying these factors stems from their potential use in enhancing the government’s implementation of social media and in making more accurate decisions and strategies based on an understanding of the most important factors affecting citizens’ decisions.
Keywords: social media, adoption, citizen, UTAUT model
Procedia PDF Downloads 421
940 Positive Disruption: Towards a Definition of Artist-in-Residence Impact on Organisational Creativity
Authors: Denise Bianco
Abstract:
Several studies on innovation and creativity in organisations emphasise the need to expand horizons and take on alternative and unexpected views to produce something new. This paper theorises the potential impact artists can have as creative catalysts when working embedded in non-artistic organisations. It begins from an understanding that, in today's ever-changing scenario, organisations are increasingly seeking to open up new creative thinking through deviant behaviours to produce innovation, and that art residencies need to be critically revised in this specific context in light of their disruptive potential. On the one hand, this paper builds upon recent contributions on workplace creativity and the related concepts of deviance and disruption. Research suggests that creativity is likely to be lower in work contexts where utter conformity is a cardinal value and higher in work contexts that show some tolerance for uncertainty and deviance. On the other hand, this paper draws attention to the Artist-in-Residence as a vehicle for epistemic friction between divergent and convergent thinking, which allows the creation of unparalleled ways of knowing in the dailiness of situated and contextualised social processes. In order to do so, this contribution brings together insights from the most relevant theories on organisational creativity and unconventional agile methods such as Art Thinking, and direct insights from ethnographic fieldwork in the context of embedded art residencies within work organisations, to propose a redefinition of the Artist-in-Residence and its potential impact on organisational creativity. The result is a re-definition of the embedded Artist-in-Residence in organisational settings from a more comprehensive, multi-disciplinary, and relational perspective that builds on three focal points. First, the notion that organisational creativity is a dynamic and synergistic process throughout which an idea is framed by recurrent activities subjected to multiple influences. 
Second, the definition of the embedded Artist-in-Residence as an assemblage of dynamic, productive relations and unexpected possibilities for new networks of relationality that encourage the recombination of knowledge. Third, and most importantly, the acknowledgment that embedded residencies are, at their very essence, bi-cultural knowledge contexts where creativity flourishes as the result of open-to-change processes that are highly relational, constantly negotiated, and contextualised in time and space.
Keywords: artist-in-residence, convergent and divergent thinking, creativity, creative friction, deviance and creativity
Procedia PDF Downloads 100
939 The Silent Tuberculosis: A Case Study to Highlight Awareness of a Global Health Disease and Difficulties in Diagnosis
Authors: Susan Scott, Dina Hanna, Bassel Zebian, Gary Ruiz, Sreena Das
Abstract:
Although the number of cases of TB in England has fallen over the last 4 years, it remains an important public health burden, with 1 in 20 cases dying annually. The vast majority of cases present in non-UK-born individuals with social risk factors. We present a case of non-pulmonary TB in a healthy child born in the UK to professional parents. A healthy 10-year-old boy developed acute back pain during school PE. Over the next 5 months, he was seen by various health and allied professionals with worsening back pain and kyphosis. He became increasingly unsteady and, for the 10 days prior to admission to our hospital, he developed fevers. He was admitted to his local hospital for tonsillitis, where he suffered two falls on account of his leg weakness. A spinal X-ray revealed a pathological fracture and gibbus formation. He was transferred to our unit for further management. On arrival, the patient had lower motor neurone signs in his left leg. He underwent spinal fixation, laminectomy and decompression. Microbiology samples taken intra-operatively confirmed Mycobacterium tuberculosis. He had a positive Mantoux test and T-SPOT, and treatment was commenced. There was no evidence of immune compromise. The patient was born in the UK, had a BCG scar, and his only travel history was a short holiday to the Philippines two years prior to presentation. The patient continues to have issues around neuropathic pain, mobility, pill burden and mild hepatic side effects of treatment. Discussion: There is a paucity of case reports on spinal TB in paediatrics, and diagnosis is often difficult due to the non-specific symptomatology. Although prognosis on treatment is good, a delayed diagnosis can have devastating consequences. This case highlights the continued need for a high index of suspicion in a world with changing patterns of migration and increased global travel. 
Surgical intervention is limited to the most serious cases to minimise further neurological damage and improve prognosis. There remains the need for a multi-disciplinary approach to deal with the challenges of treatment and rehabilitation.
Keywords: tuberculosis, non-pulmonary TB, public health burden, diagnostic challenge
Procedia PDF Downloads 197
938 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study
Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio Domenico Grieco, Emanuela Guerriero
Abstract:
Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries, a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work, we focused on pharmaceutical production processes requiring the culture of a microorganism population (i.e. bacteria, yeasts or antibiotic-producing strains). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property in the considered application context. In particular, a robust schedule will not collapse immediately when a culture of microorganisms has to be discarded due to microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (e.g. makespan, lateness) should change little, if at all. In this research work, we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints; in particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where a and b are input parameters of the COP model. An (a, b) super-solution is one in which, if up to a variables (i.e. the completion times of culture tasks) lose their values (i.e. the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e. the completion times of backup culture tasks) while changing at most b other variables (i.e. delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from Sanofi Aventis, a French pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
Keywords: constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries
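The (a, b) super-solution repair idea above can be illustrated with a toy checker (hypothetical tasks and completion times, not the Sanofi Aventis instances): when a culture task loses its completion time, it takes a backup value, and the repair is admissible only if at most b other completion times change:

```python
def is_ab_repairable(schedule, broken, backups, b, repair):
    """Check one (a, b) super-solution repair: the 'broken' tasks (at most a of
    them) take their backup completion times, and the repair is admissible
    only if at most b OTHER tasks change their completion times."""
    repaired = dict(schedule)
    for task in broken:
        repaired[task] = backups[task]      # replacement value for a lost variable
    repaired.update(repair)                 # proposed changes to other tasks
    others_changed = [t for t in repair
                      if t not in broken and repair[t] != schedule[t]]
    return len(others_changed) <= b, repaired

# Toy schedule: completion times (hours) of four tasks in one recipe
schedule = {"culture1": 10, "filter": 12, "dry": 15, "pack": 16}
# culture1 is contaminated; its backup culture finishes at 14,
# so the three downstream tasks would all have to be delayed
ok, repaired = is_ab_repairable(
    schedule, broken=["culture1"], backups={"culture1": 14},
    b=2, repair={"filter": 16, "dry": 19, "pack": 20})
print(ok)  # False: three other tasks changed, but only b = 2 are allowed
```

The COP model in the abstract enforces this repairability as a constraint during search; the sketch only verifies a single candidate repair after the fact.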
Procedia PDF Downloads 624
937 Risk Assessment and Haloacetic Acids Exposure in Drinking Water in Tunja, Colombia
Authors: Bibiana Matilde Bernal Gómez, Manuel Salvador Rodríguez Susa, Mildred Fernanda Lemus Perez
Abstract:
Haloacetic acids are disinfection byproducts identified in chlorinated drinking water, originating from the reaction of chlorine with natural organic matter and/or bromide ions in water sources. These byproducts can also be generated through a variety of chemical and pharmaceutical processes. The term ‘Total Haloacetic Acids’ (THAAs) describes the cumulative concentration of dichloroacetic acid, trichloroacetic acid, monochloroacetic acid, monobromoacetic acid, and dibromoacetic acid in water samples, which is usually measured to evaluate water quality. The chronic presence of these acids in drinking water poses a risk of cancer in humans. THAAs were detected for the first time in 15 municipalities of Boyacá in 2023. The aim is to describe the correlation between the levels of THAAs and digestive cancer in Tunja, a Colombian city with higher rates of digestive cancer, and to compare the risk across the 15 towns, taking into account factors such as water quality. A research project was conducted with the aim of comparing water sources based on the geographical features of each town, describing the disinfection process in the 15 municipalities, and examining physical properties such as water temperature and pH. The project also involved a study of contact time based on habits documented through a survey, and a comparison of socioeconomic factors and lifestyle, in order to assess the personal risk of exposure. Data on the levels of THAAs were obtained after characterizing the water quality in urban sectors over eight months of 2022, following the protocol described in the Stage 2 Disinfectants and Disinfection Byproducts Rule of the United States Environmental Protection Agency (USEPA, 2006), which takes into account the size of the population supplied. A cancer risk assessment was conducted to evaluate the likelihood of an individual developing cancer due to exposure to THAAs. 
The assessment considered three exposure routes: oral ingestion, dermal absorption, and inhalation. The chronic daily intake (CDI) for these exposure routes was calculated using route-specific equations. The lifetime cancer risk (LCR) was then determined by summing the cancer risks from the three exposure routes for each HAA. The risk assessment process involved four phases: exposure assessment, toxicity evaluation, data gathering and analysis, and risk definition and management. The results indicate a cumulatively higher risk of digestive cancer due to THAA exposure in drinking water.
Keywords: haloacetic acids, drinking water, water quality, cancer risk assessment
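The CDI and LCR steps above can be sketched with the standard USEPA ingestion form; the exposure factors, concentration, and slope factor below are illustrative assumptions, not values reported in this study:

```python
def cdi_oral(c, ir=2.0, ef=365, ed=70, bw=70.0, at=70 * 365):
    """Chronic daily intake via ingestion (mg/kg/day), standard USEPA form:
    CDI = (C * IR * EF * ED) / (BW * AT), where C is the concentration (mg/L),
    IR the intake rate (L/day), EF exposure frequency (days/yr), ED exposure
    duration (yr), BW body weight (kg), and AT averaging time (days)."""
    return (c * ir * ef * ed) / (bw * at)

def lifetime_cancer_risk(cdis, slope_factors):
    """LCR summed over exposure routes: sum of CDI_route * SF_route."""
    return sum(cdi * sf for cdi, sf in zip(cdis, slope_factors))

# Hypothetical dichloroacetic acid concentration of 0.02 mg/L, oral route only
cdi = cdi_oral(0.02)
risk = lifetime_cancer_risk([cdi], [0.05])   # assumed example slope factor
print(f"CDI = {cdi:.2e} mg/kg/day, LCR = {risk:.1e}")
```

In the study, this per-route computation would be repeated for dermal and inhalation exposure with their own equations, and the three route risks summed per HAA to give the reported LCR.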
Procedia PDF Downloads 64