Search results for: geospatial technology competency model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23042


1592 Association of a Genetic Polymorphism in Cytochrome P450, Family 1 with Risk of Developing Esophagus Squamous Cell Carcinoma

Authors: Soodabeh Shahid Sales, Azam Rastgar Moghadam, Mehrane Mehramiz, Malihe Entezari, Kazem Anvari, Mohammad Sadegh Khorrami, Saeideh Ahmadi Simab, Ali Moradi, Seyed Mahdi Hassanian, Majid Ghayour-Mobarhan, Gordon A. Ferns, Amir Avan

Abstract:

Background: Esophageal cancer has been reported as the eighth most common cancer worldwide and the seventh leading cause of cancer-related death in men. Recent studies have revealed that cytochrome P450, family 1, subfamily B, polypeptide 1 (CYP1B1), which plays a role in metabolizing xenobiotics, is associated with different cancers. Therefore, in the present study, we investigated the impact of CYP1B1-rs1056836 on esophageal squamous cell carcinoma (ESCC) patients. Method: 317 subjects, with and without ESCC, were recruited. DNA was extracted and genotyped via a real-time PCR-based TaqMan assay. Kaplan-Meier curves were used to assess overall and progression-free survival. Pearson chi-square and t-tests were used to evaluate the relationships among patients' clinicopathological data, genotypic frequencies, disease prognosis, and survival. Logistic regression was used to assess the association between genotypes and the risk of ESCC. Results: The genotypic frequencies for GG, GC, and CC were 58.6%, 29.8%, and 11.5%, respectively, in the healthy group and 51.8%, 36.14%, and 12% in the ESCC group. Under the recessive genetic inheritance model, an association between the GG genotype and ESCC stage was found. However, no statistically significant association was found between this variant and the risk of ESCC. Patients with the GG genotype had a decreased risk of nodal metastasis compared with patients with the CC/CG genotype, although this link was not statistically significant. Conclusion: Our findings suggest CYP1B1-rs1056836 as a potential biomarker for ESCC patients, supporting further studies in larger populations and different ethnic groups. Moreover, further investigations are warranted to evaluate the association of this emerging marker with dietary intake and lifestyle.
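In the simplest unadjusted case, the association test under a recessive model reduces to a 2x2 odds ratio (GG vs. GC+CC). A minimal sketch, using hypothetical group counts chosen only to be roughly consistent with the reported genotype percentages (the paper's exact group sizes and adjusted logistic model are not given here):

```python
# Illustrative odds-ratio calculation for a recessive model (GG vs. GC+CC).
# Counts below are hypothetical, not taken from the paper.
def odds_ratio(case_exposed, case_unexposed, ctrl_exposed, ctrl_unexposed):
    """2x2 odds ratio: (a/b) / (c/d)."""
    return (case_exposed / case_unexposed) / (ctrl_exposed / ctrl_unexposed)

cases_gg, cases_other = 86, 80   # ~51.8% GG in the ESCC group (assumed counts)
ctrls_gg, ctrls_other = 88, 63   # ~58.6% GG in the healthy group (assumed counts)

or_gg = odds_ratio(cases_gg, cases_other, ctrls_gg, ctrls_other)
print(round(or_gg, 3))
```

An odds ratio below 1 here would point in the same direction as the reported (non-significant) protective tendency of GG.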

Keywords: Cytochrome P450, esophagus squamous cell carcinoma, dietary intake, lifestyle

Procedia PDF Downloads 198
1591 Biodegradation of Endoxifen in Wastewater: Isolation and Identification of Bacteria Degraders, Kinetics, and By-Products

Authors: Marina Arino Martin, John McEvoy, Eakalak Khan

Abstract:

Endoxifen is the active metabolite largely responsible for the effectiveness of tamoxifen, a chemotherapeutic drug widely used for endocrine-responsive breast cancer and for long-term chemo-preventive treatment. Tamoxifen and endoxifen are not completely metabolized in the human body and are actively excreted. As a result, they are released to the aquatic environment via wastewater treatment plants (WWTPs). The presence of tamoxifen in the environment harms aquatic life due to its antiestrogenic activity. Because endoxifen is 30-100 times more potent than tamoxifen and also exhibits antiestrogenic activity, its presence in the aquatic environment could have even more toxic effects on aquatic life than tamoxifen. Data on actual concentrations of endoxifen in the environment are limited because its pharmaceutical activity was discovered only recently. However, endoxifen has been detected in hospital and municipal wastewater effluents, which calls into question the treatment efficiency of WWTPs. Studies reporting on endoxifen removal in WWTPs are also scarce. One study used chlorination to eliminate endoxifen in wastewater, but observed inefficient degradation of endoxifen and the production of hazardous disinfection by-products. Therefore, there is a need to remove endoxifen from wastewater prior to chlorination in order to reduce the potential release of endoxifen into the environment and its possible effects. The aim of this research is to isolate and identify bacterial strain(s) capable of degrading endoxifen into less hazardous compound(s). For this purpose, bacterial strains from WWTPs were exposed to endoxifen as the sole carbon and nitrogen source for 40 days. Bacteria showing positive growth were isolated and tested for endoxifen biodegradation. Endoxifen concentration and by-product formation were monitored. The Monod kinetic model was used to determine the endoxifen biodegradation rate. Preliminary results suggest that the isolated bacteria from WWTPs are able to grow in the presence of endoxifen as a sole carbon and nitrogen source. Ongoing work includes identification of these bacterial strains and of the by-product(s) of endoxifen biodegradation.
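The Monod model mentioned above relates specific growth rate to substrate concentration as mu = mu_max * S / (Ks + S). A minimal sketch with illustrative kinetic constants (the study's fitted parameters are not reported here):

```python
# Monod kinetic model: specific growth rate as a function of substrate
# (here, endoxifen) concentration. Parameter values are illustrative only.
def monod_rate(S, mu_max, Ks):
    """Specific growth rate (1/h) at substrate concentration S (mg/L)."""
    return mu_max * S / (Ks + S)

mu_max, Ks = 0.25, 2.0   # hypothetical kinetic constants

# At S == Ks the rate is exactly half of mu_max (half-saturation point);
# at very high S the rate saturates toward mu_max.
half = monod_rate(Ks, mu_max, Ks)
print(half)
```

Fitting mu_max and Ks to measured endoxifen depletion curves is what yields the biodegradation rate referred to in the abstract.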

Keywords: biodegradation, bacterial degraders, endoxifen, wastewater

Procedia PDF Downloads 209
1590 Groundwater Potential Delineation Using Geodetector Based Convolutional Neural Network in the Gunabay Watershed of Ethiopia

Authors: Asnakew Mulualem Tegegne, Tarun Kumar Lohani, Abunu Atlabachew Eshete

Abstract:

Groundwater potential delineation is essential for efficient water resource utilization and long-term development. The scarcity of potable and irrigation water has become a critical issue due to natural and anthropogenic activities in meeting the demands of human survival and productivity. Under these constraints, groundwater resources are now being used extensively in Ethiopia. Therefore, an innovative convolutional neural network (CNN) is applied in the Gunabay watershed to delineate groundwater potential based on selected major influencing factors. Groundwater recharge, lithology, drainage density, lineament density, transmissivity, and geomorphology were selected as the major factors influencing groundwater potential in the study area. Of the total 128 samples, 70% were used for training and 30% for testing. The spatial distribution of groundwater potential was classified into five groups: very low (10.72%), low (25.67%), moderate (31.62%), high (19.93%), and very high (12.06%). The area receives high rainfall but has a very low amount of recharge due to a lack of proper soil and water conservation structures. A major outcome of the study is that moderate and low potential are dominant. Geodetector results revealed that the magnitude of influence on groundwater potential ranks as transmissivity (0.48), recharge (0.26), lineament density (0.26), lithology (0.13), drainage density (0.12), and geomorphology (0.06). The model results showed that, using a CNN, groundwater potential can be delineated with high predictive capability and accuracy. AUC validation of the CNN yielded accuracies of 81.58% for training and 86.84% for testing. Based on the findings, the local government can receive technical assistance for groundwater exploration and sustainable water resource development in the Gunabay watershed. Finally, the use of a Geodetector-based deep learning algorithm can provide a new platform for industrial sectors, groundwater experts, scholars, and decision-makers.
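The factor rankings above come from the Geodetector factor detector, whose q-statistic measures how much a stratified factor explains the spatial variance of the target: q = 1 - sum_h(N_h * var_h) / (N * var_total). A minimal sketch on toy data (not the study's 128 samples):

```python
# Geodetector factor-detector q-statistic: fraction of the total (population)
# variance of a target explained by stratifying on one factor. Toy data only.
from statistics import pvariance

def q_statistic(values, strata):
    groups = {}
    for v, s in zip(values, strata):
        groups.setdefault(s, []).append(v)
    n, total_var = len(values), pvariance(values)
    within = sum(len(g) * pvariance(g) for g in groups.values())
    return 1 - within / (n * total_var)

# Toy example: groundwater potential scores stratified by a lithology class.
scores = [0.2, 0.3, 0.25, 0.8, 0.9, 0.85]
litho = ["A", "A", "A", "B", "B", "B"]
print(round(q_statistic(scores, litho), 3))
```

A q near 1 (as in this toy example, where the two strata separate the scores almost perfectly) indicates a strongly explanatory factor; the paper's values (e.g., transmissivity 0.48) are interpreted the same way.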

Keywords: CNN, geodetector, groundwater influencing factors, groundwater potential, Gunabay watershed

Procedia PDF Downloads 3
1589 Treatment Process of Sludge from Leachate with an Activated Sludge System and Extended Aeration System

Authors: A. Chávez, A. Rodríguez, F. Pinzón

Abstract:

Society is concerned about the environmental, economic, and social impacts generated by solid waste disposal. Landfills are disposal sites designed and operated using engineering principles to reduce pollution and harm to human health: waste is stored in a small area, compacted to reduce volume, and covered with soil layers to contain the liquid (leachate) and gases produced by the decomposition of organic matter. Despite planning and site selection for disposal, and monitoring and control of the selected processes, leachate remains a dilemma: its extreme concentrations of pollutants devastate soil, flora, and fauna, and these aggressive processes require priority attention. One biological technology is the activated sludge system, used for influents with high pollutant loads, since it transforms biodegradable dissolved and particulate matter into CO2, H2O, and sludge; removes suspended and non-settleable solids; removes nutrients such as nitrogen and phosphorus; and degrades heavy metals. The microorganisms that remove organic matter in these processes are generally facultative heterotrophic bacteria, forming heterogeneous populations. It is also possible to find unicellular fungi, algae, protozoa, and rotifers, which process the organic carbon source and oxygen, as well as nitrogen and phosphorus, because these are vital for cell synthesis. The mixture of the substrate, in this case sludge leachate, molasses, and wastewater, is kept aerated by mechanical aeration diffusers. The biological processes remove dissolved material (< 45 microns), generating biomass that is easily recovered by decantation. The design consists of an artificial support and aeration pumps, favoring the development of denitrifying microorganisms that use the oxygen bound in nitrate, releasing nitrogen (N) in the gas phase and thus avoiding the negative effects of ammonia or phosphorus. Overall, the activated sludge system uses a hydraulic retention time of about 8 hours, which does not meet the demand for nitrification, which occurs on average at an MLSS value of 3,000 mg/L. Extended aeration works with detention times greater than 24 hours, a ratio of organic load to biomass inventory under 0.1, and an average retention time (sludge age) of more than 8 days. This project developed a pilot system with sludge leachate from the Doña Juana landfill (RSDJ), located in Bogota, Colombia, in which the leachate was subjected to an activated sludge and extended aeration process in a sequencing batch reactor (SBR) so that the effluent could be discharged into water bodies without ecological damage. The system worked with a retention time of 8 days and a 30 L capacity, removing BOD and COD by more than 90% from initial values of 1,720 mg/L and 6,500 mg/L, respectively. By promoting deliberate nitrification, the commercial use of diffused aeration systems for sludge leachate from landfills is expected to become feasible.
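The removal figures quoted above follow from the standard efficiency formula E = (C_in - C_out) / C_in * 100. A minimal sketch, where the effluent values are hypothetical and chosen only to match the reported ">90% removal" from the stated influent concentrations:

```python
# Removal efficiency for BOD/COD. Influent values are from the abstract;
# effluent values below are hypothetical illustrations.
def removal_pct(c_in, c_out):
    """Percent removal of a pollutant across the treatment process."""
    return (c_in - c_out) / c_in * 100

bod = removal_pct(1720, 150)   # hypothetical effluent BOD, mg/L
cod = removal_pct(6500, 600)   # hypothetical effluent COD, mg/L
print(round(bod, 1), round(cod, 1))
```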

Keywords: sludge, landfill, leachate, SBR

Procedia PDF Downloads 265
1588 The Basin Management Methodology for Integrated Water Resources Management and Development

Authors: Julio Jesus Salazar, Max Jesus De Lama

Abstract:

The challenges of water management are aggravated by global change, which implies high complexity and associated uncertainty; water management is difficult because water networks cross domains (natural, societal, and political), scales (space, time, jurisdictional, institutional, knowledge, etc.), and levels (area: patches to global; knowledge: specific cases to generalized principles). In this context, we need to apply both natural and non-natural measures to manage water and soil. The Basin Management Methodology considers multifunctional measures of natural water retention, erosion control, and soil formation to protect water resources and address the challenges related to the recovery or conservation of the ecosystem, as well as the natural characteristics of water bodies, in order to improve the quantitative status of water bodies and reduce vulnerability to floods and droughts. This method of water management focuses on positive impacts on the chemical and ecological status of water bodies and on restoring the functioning of the ecosystem and its natural services, thus contributing to both adaptation and mitigation of climate change. This methodology was applied in seven interventions in the sub-basin of the Shullcas River in Huancayo, Junín, Peru, obtaining great benefits within the framework of stakeholder alliances and integrated planning scenarios. To implement the methodology in the sub-basin of the Shullcas River, a process called Climate Smart Territories (CST) was used, with which the variables were characterized in a highly complex space. The diagnosis was then developed using risk management and adaptation to climate change, and concluded with the selection of alternatives and projects of this type. Therefore, the CST approach and process face the challenges of climate change through integrated, systematic, interdisciplinary, and collective responses at different scales that fit the needs of ecosystems and the services that are vital to human well-being. This methodology is now being replicated at the level of the Mantaro river basin, improving with other initiatives that lead to the model of a resilient basin.

Keywords: climate-smart territories (CST), climate change, ecosystem services, natural measures

Procedia PDF Downloads 138
1587 Statistical Investigation Projects: A Way for Pre-Service Mathematics Teachers to Actively Solve a Campus Problem

Authors: Muhammet Şahal, Oğuz Köklü

Abstract:

As statistical thinking and problem-solving processes have become increasingly important, teachers need rigorous preparation in statistical knowledge to teach their students effectively. This study examined pre-service mathematics teachers' development of statistical investigation projects using data and exploratory data analysis tools, following a design-based research perspective and the statistical investigation cycle. A total of 26 pre-service senior mathematics teachers from a public university in Turkiye participated in the study. They voluntarily formed groups of 3-4 members and worked on their statistical investigation projects for six weeks. The data sources were audio recordings of the pre-service teachers' group discussions while working on their projects in class, whole-class video recordings, and each group's weekly and final reports. As part of the study, we reviewed weekly reports, provided timely feedback specific to each group, and revised the following week's class work based on the groups' needs and progress on their projects. We used content analysis to analyze the groups' audio and classroom video recordings. The participants encountered several difficulties, including formulating a meaningful statistical question in the early phase of the investigation, securing the most suitable data collection strategy, and deciding on a data analysis method appropriate for their statistical questions. The data collection and organization processes were challenging for some groups and revealed the importance of comprehensive planning. Overall, the pre-service senior mathematics teachers were able to carry out a statistical project holistically, from formulating a statistical question through planning, data collection, and analysis to reaching a conclusion, even though they faced challenges because of their lack of experience. The study suggests that pre-service senior mathematics teachers have the potential to apply statistical knowledge and techniques in a real-world context and could proceed with such projects with the support of the researchers. We provide implications for the statistical education of teachers and for future research.

Keywords: design-based study, pre-service mathematics teachers, statistical investigation projects, statistical model

Procedia PDF Downloads 77
1586 Latent Heat Storage Using Phase Change Materials

Authors: Debashree Ghosh, Preethi Sridhar, Shloka Atul Dhavle

Abstract:

The judicious and economic consumption of energy for sustainable growth and development is nowadays of primary importance; phase change materials (PCMs) provide an ingenious option for storing energy in the form of latent heat. An energy storage mechanism incorporating a phase change material increases the efficiency of the process by minimizing the difference between supply and demand; PCM heat exchangers are used to store heat or non-conventional energy within the PCM as the heat of fusion. The experimental study evaluates the effect of thermo-physical properties, variation in inlet temperature, and flow rate on the charging period of a coiled heat exchanger. Secondly, a numerical study is performed on a PCM double-pipe heat exchanger packed with two different PCMs, namely RT50 and a fatty acid, in the annular region. In this work, the charging of paraffin wax (RT50) using water as the high-temperature fluid (HTF) is simulated. The commercial software Ansys Fluent 15 is used for the simulation of PCM charging. In the enthalpy-porosity model, a single momentum equation describes the motion of both the solid and liquid phases. The details of the progress of phase change with time are presented through contours of melt fraction and temperature, and velocity contours describe the motion of the liquid phase. The experimental study revealed that paraffin wax melts with almost the same temperature variation at the two intermediate positions. The fatty acid, on the other hand, melts faster owing to its greater thermal conductivity and lower melting temperature. It was also observed that an increase in flow rate leads to a reduction in the charging period. The numerical study supports several observations from the experimental study, such as the significant dependence of the driving force on the melting process. The numerical study also clarifies the melting pattern of the PCM, which cannot be observed experimentally.
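In the enthalpy-porosity formulation mentioned above, the single momentum equation handles both phases by adding a sink term that damps velocity in the mushy zone. A commonly used Carman-Kozeny form (stated here as a sketch from standard solidification/melting references, not taken from this paper) is:

```latex
% Momentum sink in the mushy zone (enthalpy-porosity method):
% \beta = liquid (melt) fraction, \epsilon = small constant avoiding
% division by zero, A_{\mathrm{mush}} = mushy-zone constant, \vec{u} = velocity.
S_m = -\,\frac{(1-\beta)^2}{\beta^3 + \epsilon}\, A_{\mathrm{mush}}\, \vec{u}
```

As beta goes to 0 (solid) the sink dominates and velocity is driven to zero; as beta goes to 1 (liquid) the sink vanishes, which is how one equation covers both phases.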

Keywords: latent heat storage, charging period, discharging period, coiled heat exchanger

Procedia PDF Downloads 112
1585 The Selectivities of Pharmaceutical Spending Containment: Social Profit, Incentivization Games and State Power

Authors: Ben Main Piotr Ozieranski

Abstract:

State government spending on pharmaceuticals stands at 1 trillion USD globally, prompting criticism of the pharmaceutical industry's monetization of drug efficacy, product cost overvaluation, and health injustice. This paper elucidates the mechanisms behind one state-institutional response to this problem through the sociological lens of the strategic-relational approach to state power. To do so, 30 expert interviews and legal and policy documents are drawn on to explain how state elites in New Zealand have successfully sustained a 30-year "pharmaceutical spending containment policy". Proceeding from Jessop's notion of strategic "selectivity", encompassing analyses of the enabling features of state actors' ability to harness state structures, a theoretical explanation is advanced. First, a strategic context is described, consisting of the dynamics of pharmaceutical dealmaking among the state bureaucracy, pharmaceutical pricing strategies (and their effects), and the industry. Centrally, the pricing strategy of "bundling" (deals for packages of drugs that combine older and newer patented products) reflects how state managers have instigated an "incentivization game" played by state and industry actors, including HTA professionals, over pharmaceutical products both current and in development. Second, a protective context is described, comprising successive legislative-judicial responses to the strategic context and characterized by regulation and the societalisation of commercial law. Third, within the policy, the achievement of increased pharmaceutical coverage (pharmaceutical "mix") alongside contained spending is conceptualized as a state defence of a "social profit". As such, in contrast to scholarly expectations that political and economic cultures of neo-liberalism drive pharmaceutical policy-making processes, New Zealand's state elites' approach is shown to be antipathetic to neo-liberalism within an overall capitalist economy. The paper contributes an analysis of state pricing strategies and how they are embedded in state regulatory structures. Additionally, through an analysis of the interconnections of state power and pharmaceutical value, Abraham's neo-liberal corporate bias model for pharmaceutical policy analysis is problematised.

Keywords: pharmaceutical governance, pharmaceutical bureaucracy, pricing strategies, state power, value theory

Procedia PDF Downloads 68
1584 Challenges of Outreach Team Leaders in Managing Ward Based Primary Health Care Outreach Teams in National Health Insurance Pilot Districts in Kwazulu-Natal

Authors: E. M. Mhlongo, E. Lutge

Abstract:

In 2010, South Africa's National Department of Health (NDoH) launched a national primary health care (PHC) initiative to strengthen health promotion, disease prevention, and early disease detection. The strategy, called Re-engineering Primary Health Care (rPHC), aims to support a preventive and health-promoting community-based PHC model through community-based outreach teams (known in South Africa as Ward-Based Primary Health Care Outreach Teams, or WBPHCOTs). These teams provide health education, promote healthy behaviors, assess community health needs, manage minor health problems, and support linkages to health services and facilities. WBPHCOTs are supervised by a professional nurse, the outreach team leader. In South Africa, the WBPHCOTs have been established and registered, and they report their activities in the District Health Information System (DHIS). This study explored and described the challenges faced by outreach team leaders in supporting and supervising the WBPHCOTs. Qualitative data were obtained through interviews conducted with outreach team leaders at the sub-district level, followed by thematic analysis. The findings revealed challenges that team leaders face in the day-to-day execution of their duties: staff shortages, inadequate resources to carry out health promotion activities, and lack of co-operation from team members may undermine the capacity of team leaders to support and supervise the WBPHCOTs. Many community members are under the impression that the outreach team is responsible for bringing the clinic to the community, even though the teams do not carry any medication or treatment with them when doing home visits. The study further highlights the challenges faced by WBPHCOTs at the household level. In conclusion, the WBPHCOTs are an important component of National Health Insurance (NHI), and for NHI to be optimally implemented, the issues raised in this research should be addressed with urgency.

Keywords: community health worker, national health insurance, primary health care, ward-based primary health care outreach teams

Procedia PDF Downloads 139
1583 Towards Learning Query Expansion

Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier

Abstract:

The steady growth in the size of textual document collections is a key driver of progress for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, query expansion offers an interesting solution for obtaining a complete answer while preserving the quality of retained documents; it mainly relies on an accurate choice of the terms added to the initial query. Interestingly, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track is the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it with a learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic-algorithm-based approach that explores the association rule space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval, and showed improved results in terms of both MAP and NDCG. The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of each. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
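The core idea of expanding a query from term association rules can be illustrated with a simple confidence ranking: score each candidate term by the confidence of the rule query_term -> candidate mined from document co-occurrences. This is only a minimal sketch on a toy corpus; the paper's approach additionally mines a generic basis of rules and learns the final selection with a classifier.

```python
# Toy sketch: rank candidate expansion terms by rule confidence
# conf(a -> b) = support(a, b) / support(a), from document co-occurrences.
from collections import Counter
from itertools import combinations

docs = [
    {"data", "mining", "rules"},
    {"data", "mining", "association"},
    {"data", "association", "rules"},
    {"query", "expansion", "retrieval"},
]

pair_count, term_count = Counter(), Counter()
for d in docs:
    term_count.update(d)
    pair_count.update(combinations(sorted(d), 2))

def confidence(a, b):
    """Confidence of the association rule a -> b."""
    return pair_count[tuple(sorted((a, b)))] / term_count[a]

def expand(query_term, min_conf=0.5):
    """Candidate expansion terms above a confidence threshold, best first."""
    cands = {t for pair in pair_count for t in pair} - {query_term}
    scored = [(t, confidence(query_term, t)) for t in cands]
    return sorted([(t, c) for t, c in scored if c >= min_conf],
                  key=lambda x: (-x[1], x[0]))

print([t for t, _ in expand("data")])
```

In a real system the candidate set would come from the mined generic basis rather than raw pair counts, and the threshold would be replaced by the learned classifier.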

Keywords: supervised learning, classification, query expansion, association rules

Procedia PDF Downloads 319
1582 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, such as location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the load on the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data; they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data volume of each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme that deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution, which improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
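The role of the Hamming distance in attribute sequencing can be illustrated as follows: attributes whose usage vectors (which queries touch them) are close in Hamming distance should end up adjacent, so they can land in the same vertical fragment. This is a minimal greedy sketch on a toy attribute-usage matrix, not the paper's Matching algorithm or workload.

```python
# Toy sketch: order attributes by Hamming distance between their usage
# vectors (1 = query uses the attribute) so co-accessed attributes cluster.
def hamming(u, v):
    """Number of positions where the two usage vectors differ."""
    return sum(a != b for a, b in zip(u, v))

# Rows: queries; columns: attributes A0..A3 (hypothetical workload).
usage = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
]
cols = list(zip(*usage))   # one usage vector per attribute

# Greedy chain: start from A0 and repeatedly append the nearest unused
# attribute, so attributes accessed by the same queries become adjacent.
order, remaining = [0], set(range(1, len(cols)))
while remaining:
    nxt = min(remaining, key=lambda j: hamming(cols[order[-1]], cols[j]))
    order.append(nxt)
    remaining.remove(nxt)
print(order)
```

Cutting the resulting sequence into fragments then gives candidate vertical partitions; the paper's cost model additionally balances partition sizes and guarantees parallelism for frequent queries.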

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 154
1581 Factors Associated with Recurrence and Long-Term Survival in Younger and Postmenopausal Women with Breast Cancer

Authors: Sopit Tubtimhin, Chaliya Wamaloon, Anchalee Supattagorn

Abstract:

Background and Significance: Breast cancer is the most frequently diagnosed cancer and a leading cause of cancer death among women. This study aims to determine factors potentially predicting recurrence and long-term survival after first recurrence in surgically treated patients, comparing postmenopausal and younger women. Methods and Analysis: A retrospective cohort study was performed on 498 Thai women with invasive breast cancer who had undergone mastectomy and been followed up at Ubon Ratchathani Cancer Hospital, Thailand. Data were collected through a systematic chart audit of medical records and pathology reports between January 1, 2002, and December 31, 2011; the last follow-up time point for surviving patients was December 31, 2016. A Cox regression model was used to calculate hazard ratios for recurrence and death. Findings: The median age at diagnosis was 49 (SD ± 9.66); 47% were postmenopausal women (≥ 51 years with no menstrual flow for a minimum of 12 months) and 53% were younger women (< 51 years and still menstruating). Median time from diagnosis to last follow-up or death was 10.81 [95% CI = 9.53-12.07] years in younger cases and 8.20 [95% CI = 6.57-9.82] years in postmenopausal cases. Recurrence-free survival (RFS) for younger women at 1, 5, and 10 years was 95.0%, 64.0%, and 58.93%, respectively, slightly better than the 92.7%, 58.1%, and 53.1% for postmenopausal women [HRadj = 1.25, 95% CI = 0.95-1.64]. Overall survival (OS) for younger women at 1, 5, and 10 years was 97.7%, 72.7%, and 52.7%, respectively; for postmenopausal patients, OS at 1, 5, and 10 years was 95.7%, 70.0%, and 44.5%, respectively, with no significant difference in survival [HRadj = 1.23, 95% CI = 0.94-1.64]. Multivariate analysis identified five factors negatively impacting survival: triple-negative subtype [HR = 2.76, 95% CI = 1.47-5.19], HER2-enriched subtype [HR = 2.59, 95% CI = 1.37-4.91], luminal B subtype [HR = 2.29, 95% CI = 1.35-3.89], non-free surgical margins [HR = 1.98, 95% CI = 1.00-3.96], and receiving only adjuvant chemotherapy [HR = 3.75, 95% CI = 2.00-7.04]. Statistically significant risks for overall cancer recurrence were HER2-enriched subtype [HR = 5.20, 95% CI = 2.75-9.80], triple-negative subtype [HR = 3.87, 95% CI = 1.98-7.59], luminal B subtype [HR = 2.59, 95% CI = 1.48-4.54], and receiving only adjuvant chemotherapy [HR = 2.59, 95% CI = 1.48-5.66]. Discussion and Implications: Outcomes from this study indicate that postmenopausal status tended to be associated with an increased risk of recurrence and mortality. These results provide useful information for planning the screening and treatment of early-stage breast cancer.
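The RFS and OS estimates quoted above are Kaplan-Meier product-limit estimates: S(t) is the product over event times t_i <= t of (1 - d_i / n_i), where d_i is the number of events and n_i the number at risk at t_i. A minimal sketch on toy follow-up times (years, with a censoring flag), not the study cohort:

```python
# Kaplan-Meier product-limit estimator. events[i] = 1 means an event
# (recurrence/death); 0 means the observation was censored. Toy data only.
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time."""
    data = sorted(zip(times, events))
    at_risk, s, curve = len(data), 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for tt, e in data if tt == t and e == 1)   # events at t
        n_t = sum(1 for tt, _ in data if tt == t)            # leaving at t
        if d:
            s *= 1 - d / at_risk
            curve.append((t, s))
        at_risk -= n_t
    return curve

times = [1, 2, 2, 3, 5, 5, 8, 10]
events = [1, 1, 0, 1, 1, 0, 1, 0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

Reading the curve at t = 1, 5, and 10 years gives point estimates analogous to the 1-, 5-, and 10-year survival percentages reported in the abstract; the Cox model then compares the two groups' hazards.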

Keywords: breast cancer, menopause status, recurrence-free survival, overall survival

Procedia PDF Downloads 160
1580 Sexual Health Experiences of Older Men: Health Care Professionals' Perspectives

Authors: Andriana E. Tran, Anna Chur-Hansen

Abstract:

Sexual health is an important aspect of overall wellbeing. This study aimed to explore the sexual health experiences of men aged 50 years and over from the perspective of health care professionals who specialize in sexual health care and consult with older men. A total of ten interviews were conducted, and eleven themes were identified regarding men's experiences with sexual health care as reported by participants. 1) Biologically focused: older male clients focus largely on the biological aspect of their sexual health, without considering other factors that might affect their functioning. 2) Psychological concerns: there is an interaction between mental and sexual health, but older male clients do not necessarily see this. 3) Medicalization of sexual functioning: advances in medicine that aid with erectile difficulties mean that older men tend to favor a medical solution to their sexual concerns. 4) Masculine identity: sexual health concerns are linked to older male clients' sense of masculinity. 5) Penile functionality: most concerns of older male clients center on penile functionality. 6) Relationships: many male clients seek sexual help because they believe it improves relationships; conversely, having supportive partners may mean older male clients focus less on the physicality of sex. 7) Grief and loss: men experience grief and loss, including the loss of their sexual functioning, grief from the loss of a long-term partner, and loss of intimacy and privacy when moving from independent living to residential care. 8) Social stigma: older male clients experience stigma around aging sexuality and sex in general. 9) Help-seeking behavior: older male clients usually seek a mechanistic solution for biological sexual concerns, such as medication for penile dysfunction. 10) Dismissed by health care professionals: many older male clients seek specialist sexual health care without the knowledge of their doctors, as they feel dismissed due to lack of expertise, lack of time, and the doctor's personal attitudes and characteristics. Finally, 11) Lack of resources: there is a distinct lack of resources and training for understanding the sexuality of healthy older men. These findings may inform future research, professional training, public health campaigns, and policies for sexual health in older men.

Keywords: ageing, biopsychosocial model, men's health, sexual health

Procedia PDF Downloads 170
1579 Krill-Herd Step-Up Approach Based Energy Efficiency Enhancement Opportunities in the Offshore Mixed Refrigerant Natural Gas Liquefaction Process

Authors: Kinza Qadeer, Muhammad Abdul Qyyum, Moonyong Lee

Abstract:

Natural gas has become an attractive energy source in comparison with other fossil fuels because of its lower CO₂ and other air pollutant emissions. Therefore, compared to the demand for coal and oil, that for natural gas is increasing rapidly worldwide. Transporting natural gas over long distances as a liquid (LNG) is preferable for several reasons, including economic, technical, political, and safety factors. However, LNG production is an energy-intensive process because of the tremendous power required to compress the refrigerants that provide sufficient cold energy to liquefy the natural gas. Therefore, one of the major issues in the LNG industry is improving the energy efficiency of existing LNG processes through a cost-effective approach: optimization. In this context, a bio-inspired Krill Herd (KH) step-up approach was examined to enhance the energy efficiency of a single mixed refrigerant (SMR) natural gas liquefaction (LNG) process, which is considered one of the most promising candidates for offshore LNG production (FPSO). The optimal design of a natural gas liquefaction process involves multivariable nonlinear thermodynamic interactions, which lead to exergy destruction and contribute to process irreversibility. As key decision variables, the optimal values of the mixed refrigerant flow rates and process operating pressures were determined from the herding behavior of krill individuals corresponding to the minimum energy consumption for LNG production. To perform a rigorous process analysis, the SMR process was simulated in Aspen Hysys® software, and the resulting model was connected with the Krill Herd approach coded in MATLAB. The optimal operating conditions found by the proposed approach significantly reduced the overall energy consumption of the SMR process, by up to 22.5%, and also improved the coefficient of performance in comparison with the base case. 
The proposed approach was also compared with other well-proven optimization algorithms, such as genetic and particle swarm optimization, and was found to exhibit superior performance over these existing approaches.
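The herd-based search described above can be sketched as a toy minimizer. This is an illustrative simplification, not the authors' implementation: a simple sphere function stands in for the specific compression power returned by the Aspen Hysys SMR model, and the population size, step coefficients, and iteration count are assumptions.

```python
import random

def krill_herd_minimize(obj, bounds, n_krill=20, iters=200, seed=1):
    """Toy Krill Herd optimizer: each krill drifts toward the best-known
    position (herd-induced motion) plus a random diffusion step that
    shrinks over time, mimicking KH's foraging/diffusion terms."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_krill)]
    best = list(min(pos, key=obj))
    for t in range(iters):
        shrink = 1.0 - t / iters                      # diffusion decays with time
        for k in pos:
            for d, (lo, hi) in enumerate(bounds):
                step = 0.1 * (best[d] - k[d])         # motion induced by the herd's best
                step += 0.05 * shrink * rng.uniform(-1, 1) * (hi - lo)
                k[d] = min(hi, max(lo, k[d] + step))  # keep within physical bounds
        cand = min(pos, key=obj)
        if obj(cand) < obj(best):
            best = list(cand)
    return best

# Stand-in objective: in the paper's setting this would be the power demand
# computed by the Hysys model for given refrigerant flow rates and pressures.
sphere = lambda x: sum(v * v for v in x)
solution = krill_herd_minimize(sphere, [(-5.0, 5.0)] * 3)
```

In the actual study the objective evaluation is a full process simulation, so each `obj` call is expensive; the bound-clipping step plays the role of the process operating constraints.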

Keywords: energy efficiency, Krill-herd, LNG, optimization, single mixed refrigerant

Procedia PDF Downloads 153
1578 Effects of Cold Treatments on Methylation Profiles and Reproduction Mode of Diploid and Tetraploid Plants of Ranunculus kuepferi (Ranunculaceae)

Authors: E. Syngelaki, C. C. F. Schinkel, S. Klatt, E. Hörandl

Abstract:

Environmental influences can alter the conditions for plant development and can trigger changes in epigenetic variation. Thus, exposure to abiotic environmental stress can lead to different DNA methylation profiles and may have evolutionary consequences for adaptation. Epigenetic control mechanisms may further influence the mode of reproduction. The alpine species R. kuepferi has diploid and tetraploid cytotypes that are mostly sexual and facultative apomicts, respectively. Hence, it is a suitable model system for studying the correlations of mode of reproduction, ploidy, and environmental stress. Diploid and tetraploid individuals were placed in two climate chambers and treated with low (+7°C day/+2°C night, with -1°C cold shocks for three nights per week) and warm control (+15°C day/+10°C night) temperatures. Subsequently, methylation-sensitive Amplified Fragment Length Polymorphism (AFLP) markers were used to screen genome-wide methylation alterations triggered by the stress treatments. The dataset was analyzed for four groups defined by treatment (cold/warm) and ploidy level (diploid/tetraploid), and also separately for fully methylated, hemi-methylated, and unmethylated sites. Patterns of epigenetic variation suggested that diploids differed significantly in their profiles from tetraploids independent of treatment, while treatments did not differ significantly within cytotypes. Furthermore, diploids were more differentiated than tetraploids in the overall methylation profiles of both treatments. This observation is in accordance with the increased frequency of apomictic seed formation in diploids and the maintenance of facultative apomixis in tetraploids during the experiment. Global analysis of molecular variance showed higher epigenetic variation within groups than among them, while locus-by-locus analysis of molecular variance showed a high number (54.7%) of significantly differentiated unmethylated loci. 
To summarize, epigenetic variation seems to depend on ploidy level and, in diploids, may be correlated with changes in mode of reproduction. However, further studies are needed to elucidate the mechanism and possible functional significance of these correlations.

Keywords: apomixis, cold stress, DNA methylation, Ranunculus kuepferi

Procedia PDF Downloads 155
1577 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge that calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. 
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay among accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
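The core of the approach, turning sequences into k-mer frequency profiles and classifying by profile similarity, can be sketched in Python. The toy sequences, phenotype labels, and nearest-profile rule below are illustrative stand-ins for the paper's 104 MTB genomes and its actual machine learning models:

```python
from collections import Counter

def kmer_profile(seq, k):
    """Relative-frequency profile of overlapping k-mers in a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def distance(p, q):
    """Euclidean distance between two sparse k-mer profiles."""
    keys = set(p) | set(q)
    return sum((p.get(x, 0.0) - q.get(x, 0.0)) ** 2 for x in keys) ** 0.5

# Toy training data: in the paper these are whole MTB genomes with
# phenotype labels (the sequences and labels here are invented).
train = {"drug_resistant": "ACGTACGTTACGGACGT",
         "susceptible":    "TTGGCCAATTGGCCAAT"}
profiles = {label: kmer_profile(seq, k=3) for label, seq in train.items()}

def predict(seq, k=3):
    """Nearest-profile classification, a stand-in for the trained models."""
    q = kmer_profile(seq, k)
    return min(profiles, key=lambda label: distance(profiles[label], q))
```

Increasing `k` makes profiles sparser and more discriminative, but the feature space grows as 4^k, which is exactly the accuracy-versus-computing-resources trade-off the abstract highlights.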

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 164
1575 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers, and for several reasons very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at the parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux flowing from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable transport data are also important for the optimized design of processes and apparatus in various engineering and science fields (e.g., thermoelectric devices); in particular, precise data for the parameters of heat, mass, and momentum transport are required. One promising computational technique, the homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed with special emphasis on its application to transport problems of complex liquids. 
This work is particularly motivated by modifying, for the first time, the heat conduction problem into an algorithm with polynomial velocity and temperature profiles for investigating transport properties and their nonlinear behaviors in NICDPLs. The aim of the proposed work is to implement a NEMD (Poiseuille flow) algorithm and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated with a Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output steps will be developed between 3.0×10⁵/ωp and 1.5×10⁵/ωp simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters and that the minimum value λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier known simulation data, generally differing from earlier plasma λ0 values by 2%-20%, depending on Γ and κ. It has been shown that the results obtained at normalized force fields are in satisfactory agreement with various earlier simulation results. This algorithm shows that the new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
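For reference, the pair interaction underlying such Yukawa-liquid simulations is the screened Coulomb potential. A minimal sketch in reduced units follows; the normalization (distance in Wigner-Seitz radii, energy in Coulomb units at that radius) is the conventional one for Yukawa systems and is an assumption here, not a detail taken from the abstract:

```python
import math

def yukawa_potential(r, kappa):
    """Screened Coulomb (Yukawa) pair potential in reduced units:
    phi(r) = exp(-kappa * r) / r, with distance r in units of the
    Wigner-Seitz radius and kappa the screening parameter."""
    return math.exp(-kappa * r) / r

def yukawa_force(r, kappa):
    """Magnitude of the pair force, -dphi/dr = exp(-kappa*r)*(kappa*r + 1)/r^2."""
    return math.exp(-kappa * r) * (kappa * r + 1.0) / r ** 2
```

In an MD code these two functions feed the force loop, while the Gaussian thermostat mentioned above rescales velocities each step to hold the kinetic temperature fixed (the NVT ensemble).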

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 270
1574 An Investigative Study into Good Governance in the Non-Profit Sector in South Africa: A Systems Approach Perspective

Authors: Frederick M. Dumisani Xaba, Nokuthula G. Khanyile

Abstract:

There is a growing demand in the developing world for greater accountability, transparency, and ethical conduct based on sound governance principles. Funders, donors, and sponsors are increasingly demanding more transparency, better value for money, and adherence to good governance standards. The drive toward improved governance measures is largely influenced by the need to 'plug the leaks', deal with malfeasance, engender greater levels of accountability and good governance, and ultimately attract further funding or investment. This is the case with non-profit organizations (NPOs) in South Africa in general, and in the province of KwaZulu-Natal in particular. The paper draws on good governance theory, stakeholder theory, and systems thinking to critically examine the requirements for good governance in the NPO sector from a theoretical and legislative standpoint, and to systematically examine the contours of governance currently found among NPOs. It does this through a rigorous examination of vignettes of governance cases among selected NPOs based in KwaZulu-Natal. The study used qualitative and quantitative research methodologies: document analysis, literature review, semi-structured interviews, focus groups, and statistical analysis of various primary and secondary sources. It found some cases of good governance but also alarming levels of poor governance. There was exponential growth in NPOs registered during the period under review, and equally an increase in cases of non-compliance with good governance practices. NPOs operate in an increasingly complex environment. There is contestation for influence and access to resources. Stakeholder management is poorly conceptualized and executed. 
Recognizing that the NPO sector operates in an environment characterized by complexity, constant change, unpredictability, contestation, diversity, and the divergent views of different stakeholders, there is a need to apply legislative and systems thinking approaches to strengthen governance to withstand this turbulence, through a capacity development model that recognizes these contextual and environmental challenges.

Keywords: good governance, non-profit organizations, stakeholder theory, systems theory

Procedia PDF Downloads 119
1573 Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level

Authors: James Semmens, James Boyd, Anna Ferrante, Katrina Spilsbury, Sean Randall, Adrian Brown

Abstract:

‘Big data’ is a relatively new concept that describes data so large and complex that they exceed the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources such as administrative records, electronic health records, health insurance claims, and even smartphone health applications. Health data are viewed in Australia and internationally as highly sensitive. Strict ethical requirements must be met for the use of health data to support health research. These requirements differ markedly from those imposed on data use in industry or other government sectors and may reduce the capacity of health data to be incorporated into the real-time demands of the big data environment. This ‘big data revolution’ is increasingly supported by national governments, which have invested significant funds in initiatives designed to develop and capitalize on big data and on methods for data integration using record linkage. The benefits to health of research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion this initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data. 
The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides the record linkage infrastructure necessary for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best-practice ‘separation principle’ to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, overcoming important legal and privacy constraints. This paper will present the findings of the first ‘Proof of Concept’ project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. This project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research project. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for the planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations.
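Privacy-preserving linkage of the kind mentioned is commonly built on Bloom-filter encodings of name bigrams, so that records can be matched on similarity without ever exchanging the names themselves. The sketch below illustrates that general technique only; it is not the Centre for Data Linkage's actual implementation, and the filter size and hash count are arbitrary choices:

```python
import hashlib

def bloom_encode(name, n_bits=64, n_hashes=2):
    """Hash each character bigram of a name into a small Bloom filter
    (an integer bit mask) so records can be compared without names."""
    bits = 0
    padded = f"_{name.lower()}_"                 # pad so edge characters count
    for i in range(len(padded) - 1):
        bigram = padded[i:i + 2]
        for h in range(n_hashes):
            digest = hashlib.sha256(f"{h}:{bigram}".encode()).digest()
            bits |= 1 << (int.from_bytes(digest[:4], "big") % n_bits)
    return bits

def dice_similarity(a, b):
    """Dice coefficient of two Bloom filters; values near 1 suggest a match."""
    inter = bin(a & b).count("1")
    return 2.0 * inter / (bin(a).count("1") + bin(b).count("1"))
```

Because similar names share bigrams, their filters share set bits, so the Dice score tolerates typographical variation ("Smith" vs. "Smyth") while each party only ever sees the encoded bit masks.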

Keywords: data integration, data linkage, health planning, health services research

Procedia PDF Downloads 214
1572 Study of the Transport of ²²⁶Ra Colloidal in Mining Context Using a Multi-Disciplinary Approach

Authors: Marine Reymond, Michael Descostes, Marie Muguet, Clemence Besancon, Martine Leermakers, Catherine Beaucaire, Sophie Billon, Patricia Patrier

Abstract:

²²⁶Ra is one of the radionuclides resulting from the disintegration of ²³⁸U. Due to its half-life (1600 y) and high specific activity (3.7 × 10¹⁰ Bq/g), ²²⁶Ra is found at ultra-trace levels in the natural environment (usually below 1 Bq/L, i.e., 10⁻¹³ mol/L). Because it decays into ²²²Rn, a radioactive gas with a shorter half-life (3.8 days) that is difficult to control and dangerous for humans when inhaled, ²²⁶Ra is subject to dedicated monitoring in surface waters, especially in the context of uranium mining. In natural waters, radionuclides occur in dissolved, colloidal, or particulate forms. Due to the size of colloids, generally ranging between 1 nm and 1 µm, and their high specific surface areas, the colloidal fraction can be involved in the transport of trace elements, including radionuclides, in the environment. The colloidal fraction is not always easy to determine, and few existing studies focus on ²²⁶Ra. In the present study, a complete multidisciplinary approach is proposed to assess the colloidal transport of ²²⁶Ra. It includes water sampling by conventional filtration (0.2 µm) and by the innovative Diffusive Gradients in Thin Films (DGT) technique, which measures the dissolved fraction (<10 nm), from which the colloidal fraction can be estimated. Suspended matter in these waters was also sampled and characterized mineralogically by X-ray diffraction, infrared spectroscopy, and scanning electron microscopy. All of these data, acquired at a rehabilitated former uranium mine, were used to build a geochemical model with the geochemical calculation code PHREEQC, describing the colloidal transport of ²²⁶Ra as accurately as possible. Colloidal transport of ²²⁶Ra was found, for some of the sampling points, to account for up to 95% of the total ²²⁶Ra measured in the water. Mineralogical characterization and the associated geochemical modelling highlight the role of barite, a barium sulfate mineral well known to trap ²²⁶Ra in its structure. 
Barite was shown to be responsible for the colloidal ²²⁶Ra fraction despite the presence of kaolinite and ferrihydrite, which are also known to retain ²²⁶Ra by sorption.
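The colloidal fraction estimate described, conventional 0.2 µm filtration minus the DGT-measured truly dissolved fraction, is simple arithmetic. The sketch below uses invented activity values chosen so the result lands at the paper's 95% upper bound; only the subtraction logic reflects the method:

```python
def colloidal_share(filtered_bq_per_l, dgt_dissolved_bq_per_l):
    """Colloidal 226Ra as a share of the conventionally filtered (<0.2 um)
    activity: colloidal = filtered total - truly dissolved (<10 nm, DGT)."""
    colloidal = filtered_bq_per_l - dgt_dissolved_bq_per_l
    return colloidal / filtered_bq_per_l

# Hypothetical sample: 0.80 Bq/L passes the 0.2 um filter but only
# 0.04 Bq/L is measured as truly dissolved by DGT -> 95% colloidal.
share = colloidal_share(0.80, 0.04)
```

In practice both measurements carry counting uncertainties, so a real workflow would propagate those errors before reporting the colloidal percentage.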

Keywords: colloids, mining context, radium, transport

Procedia PDF Downloads 154
1571 Three Issues for Integrating Artificial Intelligence into Legal Reasoning

Authors: Fausto Morais

Abstract:

Artificial intelligence has been widely used in law. Programs are able to classify suits, identify decision-making patterns, predict outcomes, and formalize legal arguments. In Brazil, the artificial intelligence Victor has been classifying cases according to the Supreme Court's standards. When those programs perform such tasks, they simulate a kind of legal decision and legal argument, raising doubts about how artificial intelligence can be integrated into legal reasoning. Taking this into account, the following three issues are identified: the problem of hypernormatization, the argument of legal anthropocentrism, and artificial legal principles. Hypernormatization can be seen in the Brazilian legal context in the Supreme Court's use of the Victor program. This program generated efficiency and consistency. On the other hand, there is a real risk of over-standardizing the factual and normative features of cases. Legal clerks and programmers should therefore work together to develop an adequate way to model legal language in computational code. If this is possible, intelligent programs may enact legal decisions automatically in easy cases, and in this picture the legal anthropocentrism argument takes place. That argument holds that only human beings should enact legal decisions, because human beings have a conscience, free will, and self-unity. In spite of that, it is possible to argue against the anthropocentrism argument and to show how intelligent programs may work around human shortcomings such as misleading cognition, emotions, and limits of memory. In this way, intelligent machines could pass legal decisions automatically by classification, as Victor does in Brazil, because they are bound by legal patterns and should not deviate from them. Notwithstanding, artificial intelligence programs can be helpful beyond easy cases. 
In hard cases, they are able to identify legal standards and legal arguments by using machine learning. For that, a dataset of legal decisions regarding a particular matter must be available, which is a reality in the Brazilian Judiciary. With such a procedure, artificial intelligence programs can support a human decision in hard cases, providing legal standards and arguments based on empirical evidence. Those legal features carry argumentative weight in legal reasoning and should serve as references for judges when they must decide whether to maintain or overcome a legal standard.

Keywords: artificial intelligence, artificial legal principles, hypernormatization, legal anthropocentrism argument, legal reasoning

Procedia PDF Downloads 141
1570 Topographic Characteristics Derived from UAV Images to Detect Ephemeral Gully Channels

Authors: Recep Gundogan, Turgay Dindaroglu, Hikmet Gunal, Mustafa Ulukavak, Ron Bingner

Abstract:

A majority of total soil losses in agricultural areas can be attributed to ephemeral gullies caused by heavy rains in conventionally tilled fields; however, ephemeral gully erosion is often ignored in conventional soil erosion assessments. Ephemeral gullies are easily filled by normal soil tillage operations, which makes capturing existing ephemeral gullies in croplands difficult. This study was carried out to determine topographic features, including slope, aspect, and compound topographic index (CTI), and the initiation points of gully channels, using images obtained from an unmanned aerial vehicle (UAV). The study area was located in the Topçu stream watershed in the eastern Mediterranean Region, where intense rainfall events occur over very short time periods. The slope varied between 0.7 and 99.5%, and the average slope was 24.7%. A multi-propeller hexacopter UAV was used as the carrier platform, and images were obtained with the RGB camera mounted on it. The digital terrain models (DTMs) of the Topçu stream micro-catchment produced from the UAV images were compared with manual field Global Positioning System (GPS) measurements to assess the accuracy of the UAV-based measurements. Eighty-one gully channels were detected in the study area. The mean slope and CTI values in the micro-catchment obtained from the DTMs generated from UAV images were 19.2% and 3.64, respectively, and both slope and CTI values were lower than those obtained from the GPS measurements. The total length and volume of the gully channels were 868.2 m and 5.52 m³, respectively. Topographic characteristics and information on the ephemeral gully channels (location of initiation point, volume, and length) were estimated with high accuracy from the UAV images. The results reveal that UAV-based measuring techniques can be used in lieu of existing GPS and total-station techniques by using images obtained from high-resolution UAVs.
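The slope and aspect derived from such DTMs are standard finite-difference derivatives of the elevation grid. A minimal sketch follows, using plain central differences rather than any particular GIS package's kernel (the 3×3 grid and cell size are invented for illustration):

```python
import math

def slope_aspect(dem, i, j, cell):
    """Slope (%) and aspect (degrees clockwise from north) at interior
    cell (i, j) of an elevation grid `dem` (rows run north to south),
    using central finite differences with grid spacing `cell` in metres."""
    dz_dx = (dem[i][j + 1] - dem[i][j - 1]) / (2 * cell)   # eastward gradient
    dz_dy = (dem[i - 1][j] - dem[i + 1][j]) / (2 * cell)   # northward gradient
    slope_pct = 100.0 * math.hypot(dz_dx, dz_dy)
    # Aspect is the downslope direction: the negated gradient, from north.
    aspect = (math.degrees(math.atan2(-dz_dx, -dz_dy)) + 360.0) % 360.0
    return slope_pct, aspect

# A plane rising to the east: 10% slope, facing west (aspect 270 degrees).
dem = [[0.0, 1.0, 2.0],
       [0.0, 1.0, 2.0],
       [0.0, 1.0, 2.0]]
slope_pct, aspect_deg = slope_aspect(dem, 1, 1, cell=10.0)
```

Production GIS tools typically use Horn's 3×3 weighted kernel instead of simple central differences, but the derivative logic is the same.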

Keywords: aspect, compound topographic index, digital terrain model, initial gully point, slope, unmanned aerial vehicle

Procedia PDF Downloads 108
1569 Matrix-Based Linear Analysis of Switched Reluctance Generator with Optimum Pole Angles Determination

Authors: Walid A. M. Ghoneim, Hamdy A. Ashour, Asmaa E. Abdo

Abstract:

In this paper, a linear analysis of a Switched Reluctance Generator (SRG) model is applied to the most common configurations (4/2, 6/4, and 8/6), for both conventional short-pitched and fully-pitched designs, in order to determine the optimum stator/rotor pole angles at which the maximum output voltage is generated per unit excitation current. This study is focused on SRG analysis and design as a proposed solution for renewable energy applications, such as wind energy conversion systems. The world's potential to develop renewable energy technologies through dedicated scientific research was the motive behind this study, due to its positive impact on the economy and the environment. In addition, the scarcity of rare earth metals (for permanent magnets), caused by mining limitations, export bans by top producers, and environmental restrictions, leads to the unavailability of materials used in the manufacture of rotating machines. This challenge gave the authors the opportunity to study, analyze, and determine the optimum design of the SRG, which has the benefit of being free from permanent magnets and rotor windings, with a flexible control system, and is compatible with any application that requires variable-speed operation. In addition, the SRG has proved to be very efficient and reliable in both low-speed and high-speed applications. The linear analysis was performed using MATLAB simulations based on the modified generalized matrix approach to the Switched Reluctance Machine (SRM). About 90 different combinations of pole angles and excitation patterns were simulated in this study, and the optimum output results for each case were recorded and presented in detail. This procedure has been proved applicable to any SRG configuration, dimension, and excitation pattern. The results of this study provide evidence for using the 4-phase 8/6 fully-pitched SRG as the optimum configuration for the same machine dimensions at the same angular speed.

Keywords: generalized matrix approach, linear analysis, renewable applications, switched reluctance generator

Procedia PDF Downloads 192
1568 Application of Infrared Thermal Imaging, Eye Tracking and Behavioral Analysis for Deception Detection

Authors: Petra Hypšová, Martin Seitl

Abstract:

One of the challenges of forensic psychology is to detect deception during a face-to-face interview. In addition to the classical approaches of monitoring the utterance and its components, detection is also sought by observing behavioral and physiological changes that occur as a result of the increased emotional and cognitive load caused by the production of distorted information. Typical are changes in facial temperature, eye movements and their fixation, pupil dilation, emotional micro-expression, heart rate and its variability. Expanding technological capabilities have opened the space to detect these psychophysiological changes and behavioral manifestations through non-contact technologies that do not interfere with face-to-face interaction. Non-contact deception detection methodology is still in development, and there is a lack of studies that combine multiple non-contact technologies to investigate their accuracy, as well as studies that show how different types of lies produced by different interviewers affect physiological and behavioral changes. The main objective of this study is to apply a specific non-contact technology for deception detection. The next objective is to investigate scenarios in which non-contact deception detection is possible. A series of psychophysiological experiments using infrared thermal imaging, eye tracking and behavioral analysis with FaceReader 9.0 software was used to achieve our goals. In the laboratory experiment, 16 adults (12 women, 4 men) between 18 and 35 years of age (SD = 4.42) were instructed to produce alternating prepared and spontaneous truths and lies. The baseline of each proband was also measured, and its results were compared to the experimental conditions. Because the personality of the examiner (particularly gender and facial appearance) to whom the subject is lying can influence physiological and behavioral changes, the experiment included four different interviewers. 
The interviewer was represented by a photograph of a face that met the required parameters in terms of gender and facial appearance (i.e., interviewer likability/antipathy) so that standardized procedures could be followed. The subject provided all information to the simulated interviewer. During the follow-up analyses, facial temperature (main ROIs: forehead, cheeks, the tip of the nose, chin, and corners of the eyes), heart rate, emotional expression, intensity and fixation of eye movements, and pupil dilation were observed. The results showed that the studied variables varied with respect to the production of prepared versus spontaneous truths and lies, as well as with the variability of the simulated interviewer. The results also supported the assumption of variability in physiological and behavioral values between the subject's resting state, the so-called baseline, and the production of prepared and spontaneous truths and lies. The series of psychophysiological experiments provided evidence of variability in the areas of interest when truths and lies were produced for different interviewers. The combination of technologies used also enabled a comprehensive assessment of the physiological and behavioral changes associated with false and true statements. The study presented here opens the space for further research in the field of lie detection with non-contact technologies.
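As one hedged illustration of the baseline-versus-condition comparison described above, a single thermal ROI could be tested with a paired t-test, since each participant is measured both at rest and while lying. The temperatures below are invented for the sketch, not data from the experiment:

```python
from scipy import stats

# Invented nose-tip temperatures (degrees C) for eight participants,
# once at rest (baseline) and once while producing a spontaneous lie.
baseline = [34.1, 33.8, 34.4, 34.0, 33.9, 34.2, 34.3, 33.7]
lying    = [33.6, 33.2, 33.9, 33.5, 33.3, 33.8, 33.7, 33.1]

# Paired test: the same subjects appear in both conditions.
t, p = stats.ttest_rel(baseline, lying)
print(p < 0.05)   # True here: every subject drops by roughly 0.5 degrees
```

A full analysis of the study's design would of course need repeated-measures models covering all ROIs, interviewers, and lie types; this only shows the shape of a single contrast.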

Keywords: emotional expression decoding, eye-tracking, functional infrared thermal imaging, non-contact deception detection, psychophysiological experiment

Procedia PDF Downloads 97
1567 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models

Authors: Benbiao Song, Yan Gao, Zhuo Liu

Abstract:

Geostatistical modeling is a key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance; however, few studies have quantified the factors that affect geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different levels of geological complexity, and six cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models using different methodologies, including SIS, object-based, and MPFS algorithms, along with different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling conditions and parameter associations. In total, 5,760 simulations were run to quantify the relative contribution of each factor to simulation accuracy, and the results can serve as a strategy guide for facies modeling under similar conditions. It was found that data density, geological trend, and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when channel sand width is at least 1.5 times the well spacing, under any condition, with the SIS and MPFS methods. When well density is low, a geological trend constraint may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram makes only a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model; when geobodies are complex and the data are insufficient, it is better to construct a set of robust geological trends than to rely on a reliable variogram function. For the object-based method, modeling accuracy does not increase with data density as markedly as for the SIS method, but the models maintain a rational appearance when data density is low.
MPFS methods show a similar trend to the SIS method, but the use of a proper geological trend together with a rational variogram may yield better modeling accuracy than the MPFS method. This implies that the geological modeling strategy for a real reservoir case needs to be optimized by evaluating the dataset, the geological complexity, the geological constraint information, and the modeling objective.
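The accuracy-ratio bookkeeping described above, i.e. comparing each realization cell by cell against its prototype and averaging over realizations, can be sketched as follows. The grids, facies codes, and 10% flip rate are synthetic, chosen only to illustrate the calculation:

```python
import numpy as np

def accuracy_ratio(prototype, realizations):
    """Fraction of grid cells reproducing the prototype facies,
    averaged over all realizations."""
    return float(np.mean([np.mean(r == prototype) for r in realizations]))

rng = np.random.default_rng(0)
proto = rng.integers(0, 2, size=(50, 50))        # 0 = overbank, 1 = channel sand
# Ten mock "realizations": the prototype with ~10% of cells flipped.
reals = [np.where(rng.random(proto.shape) < 0.1, 1 - proto, proto)
         for _ in range(10)]
print(accuracy_ratio(proto, reals))              # close to 0.90 by construction
```

In the study itself the realizations come from SIS, object-based, or MPFS simulation rather than random perturbation; only the averaging of cell-wise agreement is represented here.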

Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram

Procedia PDF Downloads 257
1566 Climate Change Winners and Losers: Contrasting Responses of Two Aphaniops Species in Oman

Authors: Aziza S. Al Adhoobi, Amna Al Ruheili, Saud M. Al Jufaili

Abstract:

This study investigates the potential effects of climate change on the habitat suitability of two Aphaniops species (Teleostei: Aphaniidae) found in the Oman Mountains and on the Southwestern Arabian Coast. Aphaniops kruppi, an endemic species, is found in various water bodies such as wadis, springs, aflaj, spring-fed streams, and some coastal backwaters. Aphaniops stoliczkanus, on the other hand, inhabits brackish and freshwater habitats, particularly the lower parts of wadis and aflaj, and exhibits euryhaline characteristics. Using Maximum Entropy Modeling (MaxEnt) in conjunction with ArcGIS (10.8.2) and CHELSA bioclimatic variables, topographic indices, and other pertinent environmental factors, the study modeled the potential impacts of climate change under three Representative Concentration Pathways (RCPs 2.6, 7.0, and 8.5) for the periods 2011-2040, 2041-2070, and 2071-2100. The model demonstrated exceptional predictive accuracy, achieving AUC values of 0.992 for A. kruppi and 0.983 for A. stoliczkanus. For A. kruppi, the most influential variables were the mean monthly climate moisture index (Cmi_m), the mean diurnal range (Bio2), and the sediment transport index (STI), accounting for 39.9%, 18.3%, and 8.4%, respectively. For A. stoliczkanus, the key variables were the sediment transport index (STI), the stream power index (SPI), and precipitation of the coldest quarter (Bio19), contributing 31%, 20.2%, and 13.3%, respectively. A. kruppi showed an increase in habitat suitability, especially in low- and medium-suitability areas. By 2071-2100, high-suitability areas increased slightly, by 0.05%, under RCP 2.6 but declined by 0.02% and 0.04% under RCP 7.0 and 8.5, respectively. A. stoliczkanus exhibited a broader range of responses. Under RCP 2.6, all suitability categories increased by 2071-2100, with high-suitability areas increasing by 0.01%.
However, low and medium suitability areas showed mixed trends under RCP 7.0 and 8.5, with declines of -0.17% and -0.16%, respectively. The study highlights that climatic and topographical factors significantly influence the habitat suitability of Aphaniops species in Oman. Therefore, species-specific conservation strategies are crucial to address the impacts of climate change.
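The suitability-area changes quoted above amount to simple bookkeeping over classified area totals: the change in one class between two periods, expressed as a percentage of the whole study area. A minimal sketch, with invented square-kilometre figures and a hypothetical helper name:

```python
def pct_change(current_km2, future_km2, total_km2):
    """Change in one suitability class between two periods,
    expressed as a percentage of the total study area."""
    return 100.0 * (future_km2 - current_km2) / total_km2

# e.g. a high-suitability class growing from 100 to 105 km^2
# within a 10,000 km^2 study area -> +0.05% of the study area
print(round(pct_change(100.0, 105.0, 10_000.0), 2))   # prints 0.05
```

A shrinking class yields a negative value, matching the declines the abstract reports under RCP 7.0 and 8.5.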

Keywords: Aphaniops kruppi, Aphaniops stoliczkanus, climate change, habitat suitability, MaxEnt

Procedia PDF Downloads 2
1565 Identification of the Most Effective Dosage of Clove Oil Solution as an Alternative for Synthetic Anaesthetics on Zebrafish (Danio rerio)

Authors: D. P. N. De Silva, N. P. P. Liyanage

Abstract:

Zebrafish (Danio rerio), in the family Cyprinidae, is a tropical freshwater fish widely used as a model organism in scientific research. The use of an effective and economical anaesthetic is very important when handling fish. Clove oil (active ingredient: eugenol) has been identified as a natural product that is safer and more economical than synthetic chemicals such as tricaine methanesulfonate (MS-222). Therefore, the aim of this study was to identify the most effective dosage of clove oil solution as an anaesthetic for mature zebrafish. The clove oil solution was prepared by mixing pure clove oil with 94% ethanol at a ratio of 1:9. From this solution, three volumes (0.4 ml, 0.6 ml, and 0.8 ml) were dissolved in one liter of conditioned water (dosages: 0.4 ml/L, 0.6 ml/L, and 0.8 ml/L). Water quality parameters (pH, temperature, and conductivity) were measured before and after adding the clove oil solution. Mature zebrafish of similar standard length (2.76 ± 0.1 cm) and weight (0.524 ± 0.1 g) were selected for the experiment. The time taken for loss of equilibrium (initiation phase) and for complete loss of movement, including opercular movement (anaesthetic phase), was measured. To assess anaesthetic recovery, the time from the resumption of opercular movements (initiation of the recovery phase) until swimming (post-anaesthetic phase) was recorded. The results were analyzed by analysis of variance (ANOVA) and Tukey's method using SPSS version 17.0 at a 95% confidence interval (p < 0.05). According to the results, there was no significant difference at the initiation phase of anaesthesia among the three doses, although the time taken varied from 0.14 to 0.41 minutes. The mean time taken to complete the anaesthetic phase at the 0.4 ml/L dosage differed significantly from both the 0.6 ml/L and 0.8 ml/L dosages (p = 0.01).
There was no significant difference among recovery times at any dosage, but the 0.8 ml/L dosage took longer than the 0.6 ml/L dosage. The water quality parameters (pH and temperature) were stable throughout the experiment, except for conductivity, which increased with the higher dosage. In conclusion, the best dosage for anaesthetizing zebrafish with clove oil solution was 0.6 ml/L, owing to its fast initiation of anaesthesia and quick recovery compared with the other two dosages. Clove oil can therefore serve as a good substitute for synthetic anaesthetics because it is effective at a lower dosage, safe, and inexpensive.
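The dosage comparison described above, a one-way ANOVA across the three clove-oil dosages, can be sketched as follows. The anaesthetic-phase times are invented for illustration, not the study's data:

```python
from scipy import stats

# Invented anaesthetic-phase times (minutes), five fish per dosage.
dose_04 = [3.9, 4.2, 4.5, 4.1, 4.4]   # 0.4 ml/L
dose_06 = [2.8, 3.0, 3.2, 2.9, 3.1]   # 0.6 ml/L
dose_08 = [2.7, 2.9, 3.1, 2.8, 3.0]   # 0.8 ml/L

# One-way ANOVA: do the group means differ at the 95% level?
f, p = stats.f_oneway(dose_04, dose_06, dose_08)
print(p < 0.05)   # True: the 0.4 ml/L group clearly differs here
```

In the study itself, Tukey's post-hoc test in SPSS 17.0 identified which dosage pairs differed; `scipy.stats` and the numbers above are stand-ins showing only the shape of the analysis.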

Keywords: anaesthetics, clove oil, zebrafish, Cyprinidae

Procedia PDF Downloads 713
1564 Optimal Framework of Policy Systems with Innovation: Use of Strategic Design for Evolution of Decisions

Authors: Yuna Lee

Abstract:

In the current policy process, there has been growing interest in more open approaches that incorporate creativity and innovation, based on forecasting groups composed of the public and experts together, into scientific, data-driven foresight methods in order to implement more effective policymaking. In particular, citizen participation as collective intelligence in policymaking with design, together with deep-scale innovation at the global level, has been developing, and human-centred design thinking is considered one of the most promising methods for strategic foresight. Yet there is a lack of a common theoretical foundation for a comprehensive approach to the current situation and the post-COVID-19 era, and substantial changes in policymaking practice remain insignificant and proceed by trial and error. This project hypothesized that rigorously developed policy systems and tools that support strategic foresight by taking public understanding into account could maximize ways to create new possibilities for a preferable future; however, this must involve a better understanding of behavioural insights, including individual and cultural values, profit motives and needs, and psychological motivations, in order to implement holistic and multilateral foresight and create more positive possibilities. To what extent is a policymaking system theoretically possible that incorporates holistic and comprehensive foresight and policy process implementation, given that theory and practice are, in reality, different and not connected? What components and environmental conditions should a strategic foresight system include to enhance policymakers' capacity to predict alternative futures or to detect uncertainties of the future more accurately? And, compared with the required environmental conditions, what are the environmental vulnerabilities of the current policymaking system?
In this light, this research considers how effectively policymaking practices have been implemented through the synthesis of scientific, technology-oriented innovation with strategic design for tackling complex societal challenges and deriving more significant insights to make society greener and more liveable. The study conceptualizes a new collaborative form of strategic foresight that aims to maximize mutual benefits between policy actors and citizens through cooperation rooted in evolutionary game theory. It applies a mixed methodology, including interviews with policy experts, to cases in which digital transformation and strategic design provided future-oriented solutions or directions for cities' sustainable development goals and for society-wide urgent challenges such as COVID-19. As a result, the artistic and sensory interpretive capabilities fostered by strategic design give concrete form to ideas connecting the present to the future and enhance understanding and active cooperation among decision-makers, stakeholders, and citizens. Ultimately, the improved theoretical foundation proposed in this study is expected to help societies respond strategically to the highly interconnected future changes of the post-COVID-19 world.

Keywords: policymaking, strategic design, sustainable innovation, evolution of cooperation

Procedia PDF Downloads 191
1563 Innovation Outputs from Higher Education Institutions: A Case Study of the University of Waterloo, Canada

Authors: Wendy De Gomez

Abstract:

The University of Waterloo is situated in central Canada in the Province of Ontario, one hour from the metropolitan city of Toronto. For over 30 years, it has held the top spot as Canada's most innovative university and has consistently been ranked among the top 25 computer science and top 50 engineering schools in the world. Waterloo benefits from the federal government's more than 100 domestic innovation policies, which have helped the country reach 15th place in the World Intellectual Property Organization's (WIPO) 2022 Global Innovation Index. Yet it is undoubtedly the University of Waterloo's unique characteristics that propel its innovative creativity forward. This paper provides a contextual definition of innovation in higher education and then demonstrates the five operational attributes that contribute to the University of Waterloo's innovative reputation. The methodology is based on statistical analyses obtained from ranking bodies such as the QS World University Rankings, a secondary literature review related to higher education innovation in Canada, and case studies that exhibit the operationalization of the attributes outlined below. The first attribute is geography. Specifically, the paper investigates the network structure effect of the Toronto-Waterloo high-tech corridor and the industrial relationships built there. The second attribute is University Policy 73 - Intellectual Property Rights. This creator-owned policy grants all ownership to the creator/inventor, regardless of the use of University of Waterloo property or funding. By incentivizing IP ownership for all researchers, it fosters further commercialization and entrepreneurship. Third, this IP policy works hand in hand with world-renowned business incubators such as the Accelerator Centre, located in the dedicated research and technology park, and Velocity, a 14-year-old facility that equips and guides founders to build and scale companies.
Communitech, a 25-year-old provincially backed facility in the region, also works closely with the University of Waterloo to build strong teams, access capital, and commercialize products. Fourth, Waterloo's co-operative education program contributes 31% of all co-op participants to the Canadian economy. Home to the world's largest co-operative education program, Waterloo reports that over 7,000 employers from around the world recruit its students for short- and long-term placements, directly contributing to the students' ability to learn and hone essential employment skills before they graduate. Finally, the students themselves are exceptional. The entrance average ranges from the low 80s to the mid-90s, depending on the program; in computer, electrical, mechanical, mechatronics, and systems design engineering, an applicant's average must be 95% or above to have a 66% chance of acceptance. Individually, none of these five attributes could account for the university's outstanding track record of innovative creativity, but bundled together on a 1,000-acre, 100-building main campus with six academic faculties, more than 40,000 students, and over 1,300 world-class faculty, the recipe for success becomes quite evident.

Keywords: IP policy, higher education, economy, innovation

Procedia PDF Downloads 64