Search results for: generating sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2216

446 Chemical Fabrication of Gold Nanorings: Controlled Reduction and Optical Tuning for Nanomedicine Applications

Authors: Mehrnaz Mostafavi, Jalaledin Ghanavi

Abstract:

This research investigates the production of nanoring structures through a chemical reduction approach, exploring gradual reduction processes assisted by reducing agents that lead to the formation of these specialized nanorings. The study focuses on the controlled reduction of metal atoms within these agents, which is crucial for shaping the nanoring structures over time. The paper begins by highlighting the wide-ranging applications of metal nanostructures across fields such as nanomedicine, nanobiotechnology, and advanced spectroscopy methods such as surface-enhanced Raman spectroscopy (SERS) and surface-enhanced infrared absorption spectroscopy (SEIRA). Gold nanoparticles in particular, especially in the nanoring configuration, have gained significant attention due to their distinctive properties, offering accessible spaces suitable for sensing and spectroscopic applications. The methodology uses human serum albumin as a reducing agent to create gold nanoparticles through a chemical reduction process: electrons are transferred from albumin's carboxylic groups, converting them into carbonyl groups, while AuCl4− acquires electrons to form gold nanoparticles. Characterization techniques including ultraviolet-visible spectroscopy (UV-Vis), atomic force microscopy (AFM), and transmission electron microscopy (TEM) were employed to examine and validate the formation and properties of the gold nanoparticles and nanorings. The findings suggest that precise, gradual reduction processes, in conjunction with optimal pH conditions, play a pivotal role in generating nanoring structures. Experiments manipulating the optical properties revealed distinct responses in the visible and infrared spectra, demonstrating the tunability of these nanorings. Detailed examination of the morphology confirmed the formation of gold nanorings and elucidated their size, distribution, and structural characteristics.
These nanorings, characterized by an empty volume enclosed by uniform walls, exhibit promising potential in the realms of nanomedicine and nanobiotechnology. In summary, this study presents a chemical synthesis approach using organic reducing agents to produce gold nanorings. The results underscore the significance of controlled, gradual reduction processes in crafting nanoring structures with unique optical traits, offering considerable value across diverse nanotechnological applications.

Keywords: nanoring structures, chemical reduction approach, gold nanoparticles, spectroscopy methods, nanomedicine applications

Procedia PDF Downloads 119
445 The Effect of Innovation Capability and Activity, and Wider Sector Condition on the Performance of Malaysian Public Sector Innovation Policy

Authors: Razul Ikmal Ramli

Abstract:

Successful implementation of innovation is a key success formula for a great organization. Innovation ensures competitive advantage as well as the long-run sustainability of the organization. In the public sector context, the role of innovation is crucial for resolving dynamic challenges of public services, such as operating under economic uncertainty with limited resources, increasing operating expenditure, and growing expectations among citizens for high-quality, swift, and reliable public services. Acknowledging the prospect of innovation as a tool for achieving a high-performance public sector, the Malaysian New Economic Model launched in 2011 intensified government commitment to foster innovation in the public sector. Since 2011, various initiatives have been implemented; however, little is known about the performance of public sector innovation in Malaysia. Hence, with national innovation system theory as a pillar, the research objectives were to measure the levels of innovation capability, wider public sector condition for innovation, innovation activity, and innovation performance, and to examine the relationships between the four constructs, with innovation performance as the dependent variable. For that purpose, 1,000 self-administered survey questionnaires were distributed to heads of units and divisions of 22 federal ministries and central agencies in the administrative, security, social, and economic sectors. Based on 456 returned questionnaires, the descriptive analysis found that innovation capability, wider sector condition, innovation activity, and innovation performance were rated by respondents at a moderately high level. Based on structural equation modelling, innovation performance was found to be influenced by innovation capability, wider sector condition for innovation, and innovation activity.
In addition, the analysis found innovation activity to be the most important construct influencing innovation performance. The study concluded that the innovation policy implemented in the public sector of Malaysia sparked motivation to innovate and resulted in various forms of innovation. However, the overall achievements were not as high as expected. Thus, the study suggests formulating a dedicated policy to strengthen the innovation capability, wider public sector condition for innovation, and innovation activity of the Malaysian public sector. Furthermore, strategic intervention needs to focus on innovation activity, as this construct plays an important role in determining innovation performance. Successful implementation of public sector innovation will not only benefit citizens but will also spearhead the competitiveness and sustainability of the country.
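The structural relationship described above, with innovation performance as the dependent variable, can be sketched in simplified form. The snippet below is a minimal ordinary-least-squares stand-in for the structural part of an SEM (real SEM also models latent constructs and measurement error); all data and path weights are simulated and do not come from the study.

```python
import numpy as np

# Simplified stand-in for the structural model: innovation performance
# regressed on three constructs (capability, wider sector condition,
# activity). Real SEM handles latent variables and measurement error;
# this OLS sketch only illustrates the structural-path idea.
rng = np.random.default_rng(0)
n = 456  # number of returned questionnaires reported in the abstract

capability = rng.normal(size=n)
condition = rng.normal(size=n)
activity = rng.normal(size=n)

# Assumed true paths: activity given the largest weight, mirroring the
# finding that innovation activity mattered most (weights are invented).
performance = (0.2 * capability + 0.3 * condition + 0.5 * activity
               + rng.normal(scale=0.3, size=n))

X = np.column_stack([capability, condition, activity])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(dict(zip(["capability", "condition", "activity"], coef.round(2))))
```

With enough respondents the estimated path for activity dominates, reproducing the qualitative pattern reported above.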

Keywords: public sector, innovation, performance, innovation policy

Procedia PDF Downloads 279
444 Nurse-Reported Perceptions of Medication Safety in Private Hospitals in Gauteng Province

Authors: Madre Paarlber, Alwiena Blignaut

Abstract:

Background: Medication administration errors remain a global patient safety problem targeted by the WHO (World Health Organization), yet research on this matter is sparse within the South African context. Objective: The aim was to explore and describe nurses' (medication administrators') perceptions regarding medication administration safety-related culture, incidence, causes, and reporting in the Gauteng Province of South Africa, and to determine any relationships between perceived variables concerned with medication safety (safety culture, incidences, causes, reporting of incidences, and reasons for non-reporting). Method: A quantitative research design was used in which self-administered online surveys were sent to 768 nurses (medication administrators), of whom 217 responded (response rate 28.26%). The survey instrument was synthesised from the Agency for Healthcare Research and Quality (AHRQ) Hospital Survey on Patient Safety Culture, the Registered Nurse Forecasting (RN4CAST) survey, a list prepared from a systematic review aimed at generating a comprehensive list of medication administration error causes, and Wakefield's Medication Administration Error Reporting Survey. Exploratory and confirmatory factor analyses were used to determine the validity and reliability of the survey. Descriptive and inferential statistical analyses were used to analyse the quantitative data. Relationships and correlations were identified between items, subscales, and biographic data using Spearman's rank correlations, t-tests, and ANOVAs (analysis of variance). Nurses reported on their perceptions of medication administration safety-related culture, incidence, causes, and reporting in the Gauteng Province. Results: Unit teamwork was deemed satisfactory, but punitive responses to errors were accentuated. Working in "crisis mode", concerns regarding the recording of mistakes, and long working hours were disclosed as impacting patient safety. Overall medication safety was graded mostly positively.
Work overload, high patient-nurse ratios, and inadequate staffing were implicated as error-inducing. Medication administration errors were reported regularly. Fear and the administrative response to errors affected non-reporting, and the reasons given for non-reporting were related to the absence of a non-punitive safety culture. Conclusions: Improving medication administration safety is contingent on fostering a non-punitive safety culture within units. Anonymous medication error reporting systems and auditing of nurses' workloads are recommended in the quest for improved medication safety within Gauteng Province private hospitals.
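As an illustration of the kind of rank correlation analysis reported above, the sketch below runs a Spearman correlation between two invented Likert-scale items (a perceived-workload item and an error-frequency item); the data and item names are hypothetical, not survey responses from this study.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy illustration of the rank correlations used in the study:
# association between a perceived-workload item and a reported-error
# frequency item on 5-point Likert scales (all values are invented).
workload = np.array([5, 4, 4, 3, 5, 2, 1, 3, 4, 5])
errors = np.array([4, 4, 3, 2, 5, 2, 1, 2, 3, 5])

# spearmanr ranks both variables (handling ties) and correlates the ranks
rho, p = spearmanr(workload, errors)
print(f"Spearman's rho = {rho:.2f}, p = {p:.4f}")
```

Spearman's rho is preferred over Pearson's r here because Likert responses are ordinal rather than interval-scaled.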

Keywords: incidence, medication administration errors, medication safety, reporting, safety culture

Procedia PDF Downloads 50
443 Modeling the Impact of Aquaculture in Wetland Ecosystems Using an Integrated Ecosystem Approach: Case Study of Setiu Wetlands, Malaysia

Authors: Roseliza Mat Alipiah, David Raffaelli, J. C. R. Smart

Abstract:

This research takes a new approach in that it integrates information from both the environmental and social sciences to inform effective management of wetlands. A three-stage research framework was developed for modelling the drivers and pressures imposed on the wetlands and their impacts on the ecosystem and local communities. First, a Bayesian Belief Network (BBN) was used to predict the probability of anthropogenic activities affecting the delivery of different key wetland ecosystem services under different management scenarios. Second, Choice Experiments (CEs) were used to quantify the relative preferences that a key wetland stakeholder group (aquaculturists) held for the delivery of different levels of these key ecosystem services. Third, a Multi-Criteria Decision Analysis (MCDA) was applied to produce an ordinal ranking of the alternative management scenarios, accounting for their impacts upon ecosystem service delivery as perceived through the preferences of the aquaculturists. This integrated ecosystem management approach was applied to a wetland ecosystem in Setiu, Terengganu, Malaysia, which currently supports a significant level of aquaculture activity. The research has produced clear guidelines to inform policy makers considering alternative wetland management scenarios: Intensive Aquaculture, Conservation, or Ecotourism, in addition to the Status Quo. The findings are as follows. First, the BBN revealed that current aquaculture activity is likely to have significant impacts on water column nutrient enrichment but trivial impacts on caged fish biomass, especially under the Intensive Aquaculture scenario. Second, the best-fitting CE models identified several stakeholder sub-groups among the aquaculturists, each with distinct sets of preferences for the delivery of key ecosystem services. Third, the MCDA identified Conservation as the most desirable scenario overall, based on ordinal ranking, in the eyes of most of the stakeholder sub-groups.
Ecotourism and Status Quo scenarios were the next most preferred and Intensive Aquaculture was the least desirable scenario. The methodologies developed through this research provide an opportunity for improving planning and decision making processes that aim to deliver sustainable management of wetland ecosystems in Malaysia.
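A minimal sketch of the weighted-sum form of MCDA over the four named scenarios may help fix ideas; the criteria, scores, and stakeholder weights below are invented for illustration and are not the study's elicited values.

```python
# Hedged sketch of a weighted-sum MCDA ranking over the four management
# scenarios named in the abstract. The criteria, 0-1 scores (higher is
# better), and stakeholder weights are invented for illustration only.
scenarios = {
    #                        water quality, fish biomass, income
    "Conservation":          (0.9, 0.4, 0.3),
    "Ecotourism":            (0.7, 0.5, 0.5),
    "Status Quo":            (0.5, 0.6, 0.5),
    "Intensive Aquaculture": (0.2, 0.9, 0.8),
}
weights = (0.6, 0.2, 0.2)  # a conservation-leaning stakeholder sub-group

def score(values):
    """Weighted-sum aggregate of one scenario's criterion scores."""
    return sum(w * v for w, v in zip(weights, values))

# Ordinal ranking, best scenario first
ranking = sorted(scenarios, key=lambda s: score(scenarios[s]), reverse=True)
print(ranking)
```

In a real CE-informed MCDA the weights would come from each stakeholder sub-group's estimated preferences, so different sub-groups can produce different rankings.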

Keywords: Bayesian belief network (BBN), choice experiments (CE), multi-criteria decision analysis (MCDA), aquaculture

Procedia PDF Downloads 288
442 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space

Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson

Abstract:

Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid-attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity and resulting MRI features of GBMs complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), a multi-compartmental MRI signal equation was explored that takes into account tissue compartments and their associated volumes, with input from a mathematical model of glioma growth that incorporates edema formation. The reasonableness of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas comprised vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model to generate simulated T2/FLAIR MR images. Individual compartments' T2 values in the signal equation were either taken from the literature or estimated, and the T2 for edema specifically was varied over a wide range (200-9200 ms). T2 maps were calculated from the simulated images.
T2 values based on the simulated images were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema. The ROI T2 values were compared to T2 values reported in the literature. The expanding extracellular space scheme produced T2 values similar to the literature values. The static extracellular space scheme produced much lower T2 values, and no matter what T2 was assigned to edema, the intensities did not come close to literature values. Expanding the extracellular space is therefore necessary to achieve simulated edema intensities commensurate with acquired MRIs.
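The multi-compartmental signal model described above can be sketched as a volume-weighted sum of mono-exponential T2 decays. The compartment fractions and T2 values below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Minimal sketch of a multi-compartmental T2-weighted signal: each voxel's
# signal is a volume-weighted sum of mono-exponential decays. Compartment
# volumes and T2 values below are invented for illustration.
def signal(te, volumes, t2s):
    """Sum of v_i * exp(-TE / T2_i) over compartments (TE, T2 in ms)."""
    volumes = np.asarray(volumes, float)
    t2s = np.asarray(t2s, float)
    return float(np.sum(volumes * np.exp(-te / t2s)))

te = 100.0  # echo time, ms
# assumed compartments: cells, vasculature, extracellular space (edema)
t2s = (70.0, 150.0, 1000.0)  # ms; edema T2 was swept 200-9200 ms in the study

static = signal(te, (0.60, 0.05, 0.35), t2s)    # fixed ECS fraction
expanded = signal(te, (0.40, 0.05, 0.55), t2s)  # edema expands the ECS

# A larger extracellular fraction with long T2 yields a brighter voxel,
# consistent with the expanding-ECS scheme better matching literature T2s.
print(round(static, 3), round(expanded, 3))
```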

Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling

Procedia PDF Downloads 229
441 Variable Mapping: From Bibliometrics to Implications

Authors: Przemysław Tomczyk, Dagmara Plata-Alf, Piotr Kwiatek

Abstract:

Literature review is indispensable in research. One of the key techniques used in it is bibliometric analysis, and one of its methods is science mapping. The classic approach that dominates this area today consists of mapping areas, keywords, terms, authors, or citations, and it is also used for literature reviews in the field of marketing. With the development of technology, researchers and practitioners now use software available on the market for this purpose. The use of science mapping software tools (e.g., VOSviewer, SciMAT, Pajek) in recent publications involves implementing a literature review, which is useful in areas with a relatively high number of publications. Although this well-grounded science mapping approach has been applied in literature reviews, performing them remains a painstaking task, especially if authors would like to draw precise conclusions about the studied literature and uncover potential research gaps. The aim of this article is to identify to what extent a new approach to science mapping, variable mapping, improves on the classic science mapping approach in terms of research problem formulation and content/thematic analysis for literature reviews. To perform the analysis, a set of 5 articles on customer ideation was chosen. Next, keyword mapping results produced in the VOSviewer science mapping software were compared with a variable map prepared manually from the same articles. Seven independent expert judges (management scientists at different levels of expertise) assessed the usability of both approaches for the research problem formulation stage and for content/thematic analysis. The results show the advantage of variable mapping in both the formulation of the research problem and thematic/content analysis.
First, the ability to identify a research gap is clearly visible due to the transparent and comprehensive analysis of the relationships between variables, not only keywords. Second, analysing the relationships between variables enables the construction of a coherent story that indicates the directions of those relationships. Demonstrating the advantage of the new approach over the classic one may be a significant step towards developing a new approach to the synthesis of literature and its reviews. Variable mapping seems to allow scientists to build clear and effective models presenting the scientific achievements of a chosen research area in one simple map. Additionally, developing software that automates the variable mapping process on large data sets could be a breakthrough in the field of literature research.
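A variable map differs from a keyword map chiefly in recording directed relationships between variables. A minimal sketch of such a structure, with invented variable names and links, might look like this:

```python
# Toy representation of a variable map: directed relationships between
# variables, each tagged with a sign. Variable names and links are
# invented; a keyword co-occurrence map lacks these directed relations.
edges = [
    ("customer ideation", "idea quality", "+"),
    ("idea quality", "new product performance", "+"),
    ("crowd size", "idea quality", "+"),
]

def downstream(var):
    """Variables directly influenced by `var` in the map."""
    return [dst for src, dst, _ in edges if src == var]

print(downstream("customer ideation"))
```

Research gaps surface as variables with no incoming or outgoing edges, which is harder to read off an undirected keyword map.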

Keywords: bibliometrics, literature review, science mapping, variable mapping

Procedia PDF Downloads 117
440 Causation and Criminal Responsibility

Authors: László Schmidt

Abstract:

“Post hoc ergo propter hoc” means “after it, therefore because of it”; in other words, if event Y followed event X, then event Y must have been caused by event X. The question of causation has long been a central theme in philosophical thought, and many different theories have been put forward. However, causality is an essentially contested concept (ECC), as it has no universally accepted definition and is used differently in everyday, scientific, and legal thinking. In the field of law, the question of causality arises mainly in the context of establishing legal liability: in criminal law and in the rules of civil law on liability for damages arising either from breach of contract or from tort. The study presents some philosophical theories of causality and how these theories correlate with legal causality; it is quite interesting when philosophical abstractions meet the pragmatic demands of jurisprudence. In Hungarian criminal judicial practice, the principle of the equivalence of conditions is the generally accepted and applicable standard of causation: all necessary conditions are considered equivalent and thus each is a cause. The idea is that without the trigger, the subsequent outcome would not have occurred; all the conditions that led to the subsequent outcome are equivalent. Where the trigger that led to the result is accompanied by an additional intervening cause, including an accidental one independent of the perpetrator, the causal link is not broken but at most becomes looser. The importance of the intervening causes in the outcome should be given due weight in the imposition of the sentence. According to court practice, if the conduct of the offender sets in motion the causal process that led to the result, the fact that other factors, such as the victim's illness, may have contributed to it neither excludes his criminal liability nor interrupts the causal process.
The concausa does not break the chain of causation, i.e. the existence of a causal link establishes the criminal liability of the offender. Courts also hold that an act is a cause of the result if the act cannot be omitted without the result also being omitted. This essentially assumes a hypothetical elimination procedure: the act must be omitted in thought, and it must then be examined whether the result would still occur or would also be omitted. On the substantive side, the essential condition for establishing the offence is that the result must be demonstrably connected with the activity committed. The requirement that the facts be established beyond reasonable doubt must also apply to the causal link: that is to say, uncertainty about the causal link between the conduct and the result of the offence precludes the perpetrator from being held liable for the result. Sometimes, however, the courts do not specify in the reasons for their judgments what standard of causation they apply, i.e. on what basis they establish the existence of (legal) causation.
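The hypothetical elimination procedure described above is, in effect, an algorithm: omit the act in thought and check whether the result still occurs. A toy encoding, with an invented blow-plus-illness scenario, could look like this:

```python
# Toy encoding of the hypothetical elimination (but-for) test: mentally
# omit a condition and check whether the result still occurs. The
# scenario (a blow plus a pre-existing illness) is invented.
def result_occurs(conditions):
    # The result (death) follows if the blow was struck; the illness
    # alone is not fatal in this example -- an illustrative outcome rule.
    return conditions.get("blow", False)

def is_but_for_cause(act, conditions):
    """True if the result occurs with the act but not without it."""
    without_act = {**conditions, act: False}
    return result_occurs(conditions) and not result_occurs(without_act)

facts = {"blow": True, "illness": True}
print(is_but_for_cause("blow", facts))     # the act is a cause
print(is_but_for_cause("illness", facts))  # the concausa is not
```

The sketch also shows the test's known limitation: it treats every necessary condition as equally a cause, which is exactly the equivalence-of-conditions principle discussed above.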

Keywords: causation, Hungarian criminal law, responsibility, philosophy of law

Procedia PDF Downloads 35
439 Media Impression and Its Impact on Foreign Policy Making: A Study of India-China Relations

Authors: Rosni Lakandri

Abstract:

With the development of science and technology, there has been a complete transformation in the domain of information technology. Particularly after the Second World War and the Cold War period, the role of media and communication technology in shaping political, economic, and socio-cultural proceedings across the world has been tremendous. It performs as a channel between the governing bodies of the state and the general masses. With the international community constantly talking about the onset of an Asian Century, India and China happen to be the major players in it. Both have long civilizational histories, both are neighboring countries, both are witnessing huge economic growth and, most important of all, both are considered the rising powers of Asia. At the same time, the two countries went to war with each other in 1962, and the common people and even the policy makers of both sides still view each other through this prism. Media coverage on both sides contributes heavily to this perception: even though there are spaces of cooperation that they share, the negative impact of media coverage has tended to influence people's opinions and each government's perception of the other. Therefore, analysis of media impressions in both countries becomes important in order to know their effect on the two countries' foreign policy towards each other. It is usually said that media not only acts as an information provider but also as an ombudsman to the government, providing a check and balance on governments in taking proper decisions for the people of the country; but in attempting to test this hypothesis we have to ask whether the media really helps in shaping the political landscape of a country. Therefore, this study rests on the following questions: 1. How do China and India depict each other through their respective news media?
2. How much, and in what ways, do they influence the policy-making process of each country? 3. How do they shape public opinion in both countries? In order to address these enquiries, the study employs both the primary and secondary sources available. In generating data and other statistical information, primary sources such as reports, government documents, cartography, and agreements between the governments have been used. Secondary sources such as books, articles, and other writings collected from various sources, along with opinion from visual media sources such as news clippings and videos on this topic, are also included as a source of on-the-ground information, as this study is not based on field study. The findings suggest that, in the case of China and India, media has certainly affected people's knowledge of political and diplomatic issues and, at the same time, has affected the foreign policy making of both countries. The media have a considerable impact on foreign policy formulation, and we can say there is some mediatization of foreign policy issues happening in both countries.

Keywords: China, foreign policy, India, media, public opinion

Procedia PDF Downloads 150
438 Winter, Not Spring, Climate Drives Annual Adult Survival in Common Passerines: A Country-Wide, Multi-Species Modeling Exercise

Authors: Manon Ghislain, Timothée Bonnet, Olivier Gimenez, Olivier Dehorter, Pierre-Yves Henry

Abstract:

Climatic fluctuations affect the demography of animal populations, generating changes in population size, phenology, distribution, and community assemblages. However, very few studies have identified the underlying demographic processes. For short-lived species, like common passerine birds, are these changes generated by changes in adult survival or in fecundity and recruitment? This study tests for an effect of annual climatic conditions (spring and winter) on annual, local adult survival at very large spatial (a country, 252 sites), temporal (25 years), and biological (25 species) scales. Constant Effort Site ringing has allowed the collection of capture-mark-recapture data for 100,000 adult individuals since 1989 across metropolitan France, thus documenting annual, local survival rates of the most common passerine birds. We specifically developed a set of multi-year, multi-species, multi-site Bayesian models describing variations in local survival and recapture probabilities. This method allows for a statistically powerful hierarchical assessment (global versus species-specific) of the effects of climate variables on survival. A major part of between-year variation in survival rate was common to all species (74% of between-year variance), whereas only 26% of temporal variation was species-specific. Although changing spring climate is commonly invoked as a cause of population size fluctuations, spring climatic anomalies (mean precipitation or temperature for March-August) do not impact adult survival: only 1% of between-year variation in species survival is explained by spring climatic anomalies. However, for sedentary birds, winter climatic anomalies (North Atlantic Oscillation) had a significant, quadratic effect on adult survival, with birds surviving less during intermediate years than during more extreme years. For migratory birds, we did not detect an effect of winter climatic anomalies (Sahel rainfall).
We will analyze the life history traits (migration, habitat, thermal range) that could explain a different sensitivity of species to winter climate anomalies. Overall, we conclude that changes in population sizes for passerine birds are unlikely to be the consequences of climate-driven mortality (or emigration) in spring but could be induced by other demographic parameters, like fecundity.
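The reported quadratic winter effect, with survival lowest in intermediate years, can be sketched on the logit scale. The coefficients below are invented; the actual analysis is a hierarchical Bayesian multi-species capture-recapture model, not this toy curve.

```python
import numpy as np

# Sketch of the reported quadratic winter effect: adult survival (on the
# logit scale) dips at intermediate North Atlantic Oscillation (NAO)
# values. All coefficients are invented for illustration.
def survival_prob(nao, a=0.8, b=0.0, c=0.4):
    """Inverse-logit of a quadratic in the NAO anomaly (convex in nao)."""
    logit = a + b * nao + c * nao**2  # lowest near nao = 0
    return 1.0 / (1.0 + np.exp(-logit))

nao = np.array([-2.0, 0.0, 2.0])  # extreme, intermediate, extreme winters
print(survival_prob(nao).round(3))
```

A positive quadratic term is what produces the pattern described above: lower survival in intermediate winters than in either kind of extreme winter.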

Keywords: Bayesian approach, capture-recapture, climate anomaly, constant effort sites scheme, passerine, seasons, survival

Procedia PDF Downloads 298
437 Evaluation of Electrophoretic and Electrospray Deposition Methods for Preparing Graphene and Activated Carbon Modified Nano-Fibre Electrodes for Hydrogen/Vanadium Flow Batteries and Supercapacitors

Authors: Barun Chakrabarti, Evangelos Kalamaras, Vladimir Yufit, Xinhua Liu, Billy Wu, Nigel Brandon, C. T. John Low

Abstract:

In this work, we perform electrophoretic deposition (EPD) of activated carbon on a number of substrates to prepare symmetrical coin cells for supercapacitor applications. From several recipes, involving the evaluation of solvents such as isopropyl alcohol, N-methyl-2-pyrrolidone (NMP), and acetone, binders such as polyvinylidene fluoride (PVDF), and charging agents such as magnesium chloride, we demonstrate a working means of producing supercapacitors that consistently achieve 100 F/g. We then adapt this EPD method to deposit reduced graphene oxide on SGL 10AA carbon paper to obtain cathodic materials for testing in a hydrogen/vanadium flow battery. In addition, a self-supported hierarchical carbon nano-fibre is prepared by means of electrospray deposition of an iron phthalocyanine solution onto a temporary substrate, followed by carbonisation to remove heteroatoms. This process also induces a degree of nitrogen doping in the carbon nano-fibres (CNFs), which significantly improves their catalytic performance, as detailed in other publications. The CNFs are then used as catalysts by attaching them to the graphite felt electrodes facing the membrane inside an all-vanadium flow battery (a Scribner cell using serpentine flow distribution channels), and efficiencies as high as 60% are noted at high current densities of 150 mA/cm². About 20 charge-discharge cycles show that the CNF catalysts consistently perform better than pristine graphite felt electrodes. Following this, we also test the CNFs as an electro-catalyst in the hydrogen/vanadium flow battery (on the cathodic side, as mentioned briefly in the first paragraph) facing the membrane, based upon past studies from our group. Once again, we note consistently good efficiencies of 85% and above for CNF-modified graphite felt electrodes, in comparison to 60% for pristine felts, at a low current density of 50 mA/cm² (over 20 charge-discharge cycles of the battery).
From this preliminary investigation, we conclude that the CNFs may be used as catalysts for other systems such as vanadium/manganese, manganese/manganese and manganese/hydrogen flow batteries in the future. We are generating data for such systems at present, and further publications are expected.
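For readers unfamiliar with the efficiency figures quoted above, such values are typically derived from charge/discharge capacities and mean voltages. The sketch below uses invented numbers, not measurements from this work.

```python
# Sketch of how cycling efficiencies are commonly computed from
# charge/discharge capacities (Ah) and mean voltages (V). All numbers
# below are invented for illustration.
def coulombic_eff(q_discharge, q_charge):
    """Charge recovered on discharge relative to charge stored."""
    return q_discharge / q_charge

def voltage_eff(v_discharge, v_charge):
    """Mean discharge voltage relative to mean charge voltage."""
    return v_discharge / v_charge

def energy_eff(q_d, v_d, q_c, v_c):
    """Energy efficiency = coulombic efficiency x voltage efficiency."""
    return coulombic_eff(q_d, q_c) * voltage_eff(v_d, v_c)

# e.g. 90% coulombic x ~94% voltage efficiency -> ~85% energy efficiency
print(round(energy_eff(0.90, 1.20, 1.00, 1.27), 2))
```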

Keywords: electrospinning, carbon nano-fibres, all-vanadium redox flow battery, hydrogen-vanadium fuel cell, electrocatalysis

Procedia PDF Downloads 287
436 Characteristics of the Rock Glacier Deposits in the Southern Carpathians, Romania

Authors: Petru Urdea

Abstract:

As a distinct part of the mountain system, the rock glacier system is a particular periglacial debris system. Being an open system, it is interconnected with other subsystems, such as the glacial, cliff, rocky slope, and talus slope subsystems, which are sources of sediments. One characteristic is that for long periods of time it acts as a storage unit for debris and ice, and temporarily for snow and water. In the Southern Carpathians, 306 rock glaciers were identified. The vast majority of these, 74%, are talus rock glaciers, and 26% are debris rock glaciers. The area occupied by granites and granodiorites hosts 49% of all the rock glaciers, representing 61% of the area occupied by Southern Carpathian rock glaciers. This lithological dependence also leaves its mark on the specifics of the deposits, everything bearing the imprint of the particular way the rocks respond to physical weathering processes, all in a periglacial regime. While in the domain of granites and granodiorites the blocks are large, of metric order, even 10 m³, in the domain of the metamorphic rocks only gneisses yield similar sizes. Amphibolites, amphibolitic schists, micaschists, sericite-chlorite schists, and phyllites crop out in much smaller blocks, of decimetric order, mostly in the form of slabs. In rock glaciers made up of large blocks, with a structure of open-work type, the density and volume of voids between the blocks are greater, while smaller debris generates more compact structures with fewer voids. All of this influences the thermal regime, which is associated with a certain type of air circulation during the seasons and with the emergence of permafrost formation conditions. The rock glaciers are fed by rock falls, rock avalanches, debris flows, and avalanches, so their structure is heterogeneous, which is also reflected in their detailed topography.
This heterogeneity is also influenced by the spatial assembly of the rock bodies in the supply area and, an element that cannot be omitted, by the behavior of the rocks during periglacial weathering. The production of small gelifracts leads to the filling of voids and the appearance of more compact structures, with effects on the creep process. In general, surface deposits are coarser and those at depth are finer, their characteristics being detectable by applying geophysical methods. The electrical resistivity tomography (ERT) and ground-penetrating radar (GPR) investigations carried out in the Făgăraş, Retezat, and Parâng Mountains, each with a different lithological specificity, allowed the identification of some differentiations, including the presence of permafrost bodies.

Keywords: rock glaciers deposits, structure, lithology, permafrost, Southern Carpathians, Romania

Procedia PDF Downloads 20
435 Comparison of Cardiovascular and Metabolic Responses Following In-Water and On-Land Jump in Postmenopausal Women

Authors: Kuei-Yu Chien, Nai-Wen Kan, Wan-Chun Wu, Guo-Dong Ma, Shu-Chen Chen

Abstract:

Purpose: The purpose of this study was to investigate the responses of systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR), rating of perceived exertion (RPE), and lactate following continuous high-intensity interval exercise in water and on land. The results can serve as a reference for exercise program design by health care and fitness professionals. Method: A total of 20 volunteer postmenopausal women were included in this study. The inclusion criteria were: duration of menopause > 1 year; and sedentary lifestyle, defined as engaging in moderate-intensity exercise less than three times per week, or less than 20 minutes per day. Participants visited the laboratory three times. On the first visit, body composition was measured and participants completed a questionnaire. On the second and third visits, participants were randomly assigned to the exercise environment (water or land). Water exercise testing was performed in water at trochanter level. The continuous jump test consisted of two sets of 10-second maximal voluntary jumps. Within each set, one minute of dynamic rest (walking or running) at 50% heart rate reserve was performed. SBP, DBP, HR, RPE of the whole body and thigh (RPEW/RPET), and lactate were measured before and after testing. HR, RPEW, and RPET were monitored 1, 2, and 10 minutes after exercise testing. SBP and DBP were measured 10 and 30 minutes after exercise testing. Results: SBP and DBP responses after exercise testing were higher in water than on land. Lactate levels after exercise testing were lower in water than on land. RPET responses in water were lower than those on land at 1 and 2 minutes post-exercise. Heart rate recovery was faster in water than on land at 5 minutes post-exercise. Conclusion: This study showed that water interval jump exercise induces higher cardiovascular responses with lower RPE responses and lactate levels than on-land jump exercise in postmenopausal women.
Fatigue is one of the major barriers to exercise behavior. Jump exercise can enhance cardiorespiratory fitness, lower-extremity power, strength, and bone mass, offering several health benefits to middle-aged and older adults. This study showed that water interval jumping can feel more relaxed while still reaching an intensity comparable to land-based cardiorespiratory exercise.
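The 50% heart rate reserve target used between jump sets follows the standard Karvonen calculation; the sketch below uses hypothetical resting and maximal heart rates, as individual values are not reported here.

```python
def karvonen_target_hr(hr_rest: float, hr_max: float, intensity: float) -> float:
    """Target heart rate at a given fraction of heart rate reserve (Karvonen formula)."""
    hr_reserve = hr_max - hr_rest
    return hr_rest + intensity * hr_reserve

# Hypothetical participant: resting HR 70 bpm, maximal HR 160 bpm.
target = karvonen_target_hr(70, 160, 0.5)  # 50% HRR
print(target)  # 115.0
```

In practice the maximal heart rate would be measured or estimated (e.g., age-predicted), and the monitor would hold the participant near this target during the one-minute dynamic rest.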

Keywords: interval exercise, power, recovery, fatigue

Procedia PDF Downloads 406
434 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the considerable potential for reducing quality-control effort can be exploited via data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which this data availability makes at least more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of a data science project. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification.
The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume-flow regression and for classifying the inspection decision are applied. Classification proves clearly superior to regression and achieves promising accuracies.
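The regression-versus-classification framing can be illustrated with a minimal sketch: a continuous leakage prediction is mapped to a pass/fail inspection decision through a threshold. The threshold and data below are hypothetical, not the Bosch Rexroth values.

```python
# Hypothetical pass/fail threshold on leakage volume flow (arbitrary units).
LEAK_LIMIT = 1.0

def to_label(leakage: float) -> int:
    """Map a continuous leakage measurement to a binary inspection decision."""
    return 1 if leakage > LEAK_LIMIT else 0  # 1 = reject, 0 = pass

# A regressor predicts the leakage value itself; a classifier predicts the
# label directly. With this thresholding step, a regressor's output can be
# evaluated against the same pass/fail ground truth as a classifier.
measured = [0.2, 0.8, 1.3, 2.1, 0.9]
labels = [to_label(v) for v in measured]
print(labels)  # [0, 0, 1, 1, 0]
```

The business-understanding question is then whether predicting the continuous value (and thresholding) or predicting the label directly yields the more reliable inspection decision.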

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 140
433 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to come as close to perfection as possible for a defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem. This process is called optimization. Generally, there is always a function called the “objective function” that is to be maximized or minimized by choosing input parameters called “degrees of freedom” within an allowed domain called the “search space” and computing the values of the objective function for these input values. The problem becomes more complex when the design has more than one objective. An example of a Multi-Objective Optimization Problem (MOP) is a structural design that aims to minimize weight and maximize strength. In such cases, the Pareto Optimal Frontier (POF) is used: a curve plotting the two objective functions for the best trade-off cases. At this point, the designer has to decide which point on the curve to choose. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used for multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic character. The optimization is initialized by picking random solutions from the search space, and the solutions then progress towards the optimal point by applying operators such as selection, combination, crossover, and/or mutation. These operators are applied to the old solutions (“parents”) so that new sets of design variables (“children”) appear. The process is repeated until the optimal solution to the problem is reached.
Reliable and robust computational fluid dynamics (CFD) solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbomachinery, and automotive vehicles. Coupling CFD with Multi-Objective Evolutionary Algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbomachinery design.
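The evolutionary loop described above (initialization, selection, crossover, mutation) can be sketched in minimal single-objective form; the quadratic objective and all parameters below are illustrative, not from any CFD coupling.

```python
import random

def evolve(objective, bounds, pop_size=30, generations=100, mutation_sd=0.1):
    """Minimal single-objective evolutionary algorithm: selection, crossover, mutation."""
    lo, hi = bounds
    # Initialization: random solutions picked from the search space.
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better half of the population as "parents".
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]
        # Crossover + mutation: "children" blend two parents plus Gaussian noise.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b) + random.gauss(0, mutation_sd)
            children.append(min(hi, max(lo, child)))  # clamp to the search space
        pop = parents + children
    return min(pop, key=objective)

random.seed(0)
best = evolve(lambda x: (x - 2.0) ** 2, bounds=(-10, 10))
print(round(best, 1))  # converges close to the optimum at 2.0
```

A multi-objective variant would replace the single sort by Pareto-dominance ranking so that the population approximates the whole POF rather than a single optimum.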

Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization

Procedia PDF Downloads 251
432 Urban Livelihoods and Climate Change: Adaptation Strategies for Urban Poor in Douala, Cameroon

Authors: Agbortoko Manyigbe Ayuk Nkem, Eno Cynthia Osuh

Abstract:

This paper sets out to examine the relationship between climate change and urban livelihoods through a vulnerability assessment of the urban poor in Douala. Urban development in Douala prioritizes industrial and city-centre development with little focus on the urban poor in terms of housing and areas of sustenance. With the high rate of urbanisation and increased land prices, the urban poor are forced to occupy marginal lands, mainly wetlands, wastelands, and abandoned neighbourhoods prone to natural hazards. Due to climate change and its effects, these wetlands are constantly flooded, destroying homes, properties, and crops. Most of these urban dwellers have found solace in urban agriculture as a means of survival. However, since agriculture in tropical regions like Cameroon depends largely on seasonal rainfall, changes in rainfall patterns have led to mistimed crop planting and a huge wastage of resources, as rainfall becomes very unreliable with increased temperature levels. Data for the study were obtained from both primary and secondary sources. Secondary sources included published materials related to climate change and vulnerability. Primary data were obtained through focus-group discussions with urban farmers, while residents of marginal lands were stratified and each stratum randomly sampled to obtain information on different climate-change-related stressors and their effects on livelihoods. Findings showed that the high rate of rural-urban migration into Douala has increased the prevalence of the urban poor and their vulnerability to climate change, as evident in their constant fight against flooding from unexpected sea-level rise and in the irregular rainfall patterns affecting urban agriculture.
The study also showed that women were the most vulnerable, as they depended solely on urban agriculture and related activities, such as retailing agricultural products in urban markets, which serve as their main source of income for meeting the family's basic needs. Adaptation measures include the use of sandbags and raised makeshift structures, as well as cultivation along streams and planting only after evidence of sustained rainfall, which has become paramount for sustainability.

Keywords: adaptation, Douala, Cameroon, climate change, development, livelihood, vulnerability

Procedia PDF Downloads 287
431 Changes in Skin Microbiome Diversity According to the Age of Xian Women

Authors: Hanbyul Kim, Hye-Jin Kin, Taehun Park, Woo Jun Sul, Susun An

Abstract:

Skin is the largest organ of the human body and provides diverse habitats for various microorganisms. The ecology of the skin surface selects distinctive sets of microorganisms and is influenced by both endogenous intrinsic factors and exogenous environmental factors. The diversity of the skin bacterial community also depends on multiple host factors: gender, age, health status, and location. Among them, age-related changes in skin structure and function are attributable to combinations of endogenous intrinsic and exogenous environmental factors. Skin aging is characterized by a decrease in sweat, sebum, and immune function, resulting in significant alterations in skin surface physiology, including pH, lipid composition, and sebum secretion. The present study provides a comprehensive view of the variation in skin microbiota and its correlation with age by analyzing and comparing skin microbiome metagenomes using a next-generation sequencing method. Skin bacterial diversity and composition were characterized and compared between two age groups of Xian, Chinese women: younger (20-30 y) and older (60-70 y). A total of 73 healthy women met two conditions: (I) living in Xian, China; (II) maintaining healthy skin status during the period of this study. Based on the Ribosomal Database Project (RDP) database, the skin samples of the 73 participants were dominated by the ten most abundant genera, including Chryseobacterium, Propionibacterium, Enhydrobacter, and Staphylococcus. Although these genera were the most predominant overall, their proportions differed between the groups. The most dominant genus, Chryseobacterium, was relatively more abundant in the young group than in the old group. Similarly, Propionibacterium and Enhydrobacter occupied a higher proportion of the skin bacterial composition of the young group. Staphylococcus, in contrast, was more abundant in the old group.
Beta diversity, which represents the ratio between regional and local species diversity, differed significantly between the two age groups. Likewise, Principal Coordinate Analysis (PCoA), which projects the phylogenetic distances between samples into a two-dimensional framework using their OTU (operational taxonomic unit) profiles, also showed differences between the two groups. Thus, our data suggest that the composition and diversification of skin microbiomes in adult women are largely affected by chronological and physiological skin aging.
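Beta diversity between two samples is commonly summarized with a dissimilarity measure such as Bray-Curtis before ordination by PCoA; the genus counts below are hypothetical, and the study does not state which measure it used.

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance profiles (0 = identical, 1 = disjoint)."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den

# Hypothetical genus counts (e.g., Chryseobacterium, Propionibacterium, Staphylococcus)
young = [50, 30, 10]
old = [20, 15, 45]
print(round(bray_curtis(young, old), 3))  # 0.471
```

PCoA would then embed the full pairwise dissimilarity matrix across all 73 samples, so that group separation becomes visible as clustering in the first two coordinates.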

Keywords: next generation sequencing, age, Xian, skin microbiome

Procedia PDF Downloads 148
430 A Method to Evaluate and Compare Web Information Extractors

Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman

Abstract:

Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents.
b) It provides a method that relies on statistically sound tests to support the conclusions drawn; previous work does not provide clear guidelines or recommend statistically sound tests, but rather surveys the many features to take into account along with related work. c) We provide a novel method to compute the performance measures for unsupervised proposals, which would otherwise require a user to compute them using the annotations on the evaluation sets and the information extracted. Our contributions will help researchers in this area make sure that they have advanced the state of the art not only conceptually but also empirically; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them, help improve the evaluation of information extraction proposals, and gather valuable feedback from other researchers.
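The performance measures typically computed in such an evaluation, precision, recall, and F1 against gold annotations, can be sketched as follows; the record fields used here are hypothetical.

```python
def extraction_scores(extracted: set, gold: set):
    """Precision, recall, and F1 of an extractor's output against gold annotations."""
    tp = len(extracted & gold)  # correctly extracted items
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical gold annotations and extractor output for one web document.
gold = {"title", "price", "author", "isbn"}
extracted = {"title", "price", "publisher"}
p, r, f = extraction_scores(extracted, gold)
print(round(p, 3), r, round(f, 3))  # 0.667 0.5 0.571
```

Scores computed per document across an evaluation corpus would then feed the statistically sound tests the method prescribes, rather than comparing single aggregate numbers.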

Keywords: web information extractors, information extraction evaluation method, Google Scholar, web

Procedia PDF Downloads 245
429 A Study on the Magnetic and Submarine Geology Structure of TA22 Seamount in Lau Basin, Tonga

Authors: Soon Young Choi, Chan Hwan Kim, Chan Hong Park, Hyung Rae Kim, Myoung Hoon Lee, Hyeon-Yeong Park

Abstract:

We performed marine magnetic, bathymetry, and seismic surveys at the TA22 seamount (Lau Basin, SW Pacific) in October 2009 to search for submarine hydrothermal deposits. We acquired magnetic and bathymetry data sets using an Overhauser proton magnetometer, SeaSPY (Marine Magnetics Co.), and a multi-beam echo sounder, EM120 (Kongsberg Co.). Data processing yielded detailed seabed topography, magnetic anomaly, reduction to the pole (RTP), and magnetization maps. Based on the magnetic results, we analyzed the submarine geological structure of the TA22 seamount together with a post-processed seismic profile. The detailed bathymetry of the TA22 seamount shows left and right crests, each with a caldera feature in its central part. Regionally, the magnetic anomaly distribution displays high anomalies in the northern part and low anomalies in the southern part around the caldera features. The RTP magnetic anomaly distribution shows high anomalies in the central part of each caldera, and stronger anomalies inside the calderas than on their outer flanks. The magnetization distribution shows a low-magnetization zone in the center of each caldera and high-magnetization zones in the southern and northeastern parts. The seismic profile suggests small mounds inside the central part of each caldera and the possibility of magmatic sills beneath the right caldera. Taking into account all the results of this study (bathymetry, magnetic anomaly, RTP, magnetization, and seismic profile), together with rock samples collected in the left caldera area during the 2009 survey, we infer the possibility of hydrothermal deposits at the mounds in the central part of each caldera and on the outer flanks of the calderas representing the low-magnetization zone.
We expect better results from combined modeling of these data with other geological data (e.g., detailed gravity, 3D seismic, and petrologic study results).

Keywords: detailed bathymetry, magnetic anomaly, seamounts, seismic profile, SW Pacific

Procedia PDF Downloads 393
428 Applications of Multi-Path Futures Analyses for Homeland Security Assessments

Authors: John Hardy

Abstract:

A range of future-oriented intelligence techniques is commonly used by states to assess their national security and to develop strategies to detect and manage threats, develop and sustain capabilities, and recover from attacks and disasters. Although homeland security organizations use futures intelligence tools to generate scenarios and simulations that inform their planning, there have been relatively few studies of the methods available or of their applications for homeland security purposes. This study presents an assessment of one category of strategic intelligence techniques, termed Multi-Path Futures Analyses (MPFA), and how it can be applied to three distinct tasks for the purpose of analyzing homeland security issues. In this study, MPFA are categorized as a suite of analytic techniques which can include effects-based operations principles, general morphological analysis, multi-path mapping, and multi-criteria decision analysis techniques. These techniques generate multiple pathways to potential futures and thereby generate insight into the relative influence of individual drivers of change, the desirability of particular combinations of pathways, and the kinds of capabilities which may be required to influence or mitigate certain outcomes. The study assessed eighteen uses of MPFA for homeland security purposes and found five key applications of MPFA which add significant value to analysis. The first application is generating measures of success and associated progress indicators for strategic planning. The second is identifying homeland security vulnerabilities and relationships between individual drivers of vulnerability which may amplify or dampen their effects. The third is selecting appropriate resources and methods of action to influence individual drivers. The fourth is prioritizing and optimizing path selection preferences and decisions.
The fifth application is informing capability development and procurement decisions to build and sustain homeland security organizations. Each of these applications provides a unique perspective of a homeland security issue by comparing a range of potential future outcomes at a set number of intervals and by contrasting the relative resource requirements, opportunity costs, and effectiveness measures of alternative courses of action. These findings indicate that MPFA enhances analysts’ ability to generate tangible measures of success, identify vulnerabilities, select effective courses of action, prioritize future pathway preferences, and contribute to ongoing capability development in homeland security assessments.
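As an illustrative sketch of the multi-criteria decision analysis component grouped under MPFA here, alternative courses of action can be compared by a weighted-sum score; the criteria, weights, and options below are hypothetical and not drawn from the study.

```python
def weighted_score(option_scores, weights):
    """Weighted-sum score of one course of action across decision criteria."""
    return sum(option_scores[c] * w for c, w in weights.items())

# Hypothetical criteria and weights (must sum to 1 for a normalized score).
weights = {"effectiveness": 0.5, "cost": 0.3, "opportunity_cost": 0.2}

# Hypothetical courses of action, each scored 0-1 against every criterion.
options = {
    "harden_infrastructure": {"effectiveness": 0.8, "cost": 0.4, "opportunity_cost": 0.6},
    "expand_surveillance": {"effectiveness": 0.6, "cost": 0.7, "opportunity_cost": 0.5},
}

best = max(options, key=lambda o: weighted_score(options[o], weights))
print(best)  # harden_infrastructure
```

A fuller MPFA treatment would score each option at several future intervals along each pathway rather than once, which is what allows the contrast of resource requirements and effectiveness over time.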

Keywords: homeland security, intelligence, national security, operational design, strategic intelligence, strategic planning

Procedia PDF Downloads 136
427 The Acquisition of /r/ By Setswana-Learning Children

Authors: Keneilwe Matlhaku

Abstract:

Crosslinguistic studies (theoretical and clinical) have shown delays and significant misarticulation in the acquisition of rhotics. This article provides a detailed analysis of the early development of the rhotic phoneme, an apical trill /r/, by monolingual Setswana (Tswana S30) children between 1 and 4 years of age. The data display the following trends: (1) late acquisition of /r/; (2) a wide range of substitution patterns involving this phoneme (i.e., gliding, coronal stopping, affrication, deletion, lateralization, as well as substitution by dental and uvular fricatives). The primary focus of the article is on the potential origins of these variations of /r/, even within the same language. Our data comprise naturalistic longitudinal audio recordings of 6 children (2 males and 4 females) whose speech was recorded in their homes over a period of 4 months with no or only minimal disruptions to their daily environments. Phon software (Rose et al. 2013; Rose & MacWhinney 2014) was used to carry out the orthographic and phonetic transcriptions of the children’s data. Phon also enabled the generation of the children’s phonological inventories for comparison with adult target IPA forms. We explain the children’s patterns through current models of phonological emergence (MacWhinney 2015; McAllister Byun, Inkelas & Rose 2016; Rose et al. 2022), which highlight the perceptual and articulatory factors influencing the development of sounds and sound classes. We highlight how the substitution patterns observed in the data can be captured through a consideration of the auditory properties of the target speech sounds, combined with an understanding of the types of articulatory gestures involved in producing these sounds. These considerations, in turn, highlight some of the most central aspects of the challenges the child faces in learning these auditory-articulatory mappings.
We provide a cross-linguistic survey of the acquisition of rhotic consonants in a sample of related and unrelated languages in which we show that the variability and volatility in the substitution patterns of /r/ is also brought about by the properties of the children’s ambient languages. Beyond theoretical issues, this article sets an initial foundation for developing speech-language pathology materials and services for Setswana learning children, an emerging area of public service in Botswana.

Keywords: rhotic, apical trill, Phon, phonological emergence, auditory, articulatory, mapping

Procedia PDF Downloads 30
426 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods

Authors: Sohyoung Won, Heebal Kim, Dajeong Lim

Abstract:

Genomic prediction is an effective way to measure the breeding abilities of livestock based on genomic estimated breeding values, i.e., values statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways of defining haplotypes must be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2,506 animals. Haplotypes were first defined in three different ways: 1) by haplotype length (bp), 2) by the number of SNPs, and 3) by k-medoids clustering based on LD. To compare the methods in parallel, haplotypes defined by all methods were set to comparable sizes; in each method, haplotypes defined to contain an average of 5, 10, 20, or 50 SNPs were tested. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the test phenotype. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with few differences in reliability between the haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes of around 20 SNPs can be optimal markers for genomic prediction.
When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. Since all three haplotype-defining methods showed similar performances while the LD-based method required fewer predictor variables, defining haplotypes based on LD can reduce computational costs and allow efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
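As a minimal sketch of the second haplotype-defining method, a fixed number of SNPs per haplotype, ordered SNP positions can simply be partitioned into consecutive blocks; the toy positions below are illustrative only.

```python
def haplotype_blocks_by_count(snp_positions, snps_per_block):
    """Partition an ordered list of SNP positions into fixed-size haplotype blocks."""
    return [snp_positions[i:i + snps_per_block]
            for i in range(0, len(snp_positions), snps_per_block)]

# Toy example: 12 SNPs at positions 1..12, blocks of 5 SNPs each.
positions = list(range(1, 13))
blocks = haplotype_blocks_by_count(positions, snps_per_block=5)
print([len(b) for b in blocks])  # [5, 5, 2]
```

The length-based and LD-based methods would replace the fixed count with a base-pair window or a k-medoids clustering of the pairwise LD matrix, respectively; within each block, the distinct allele combinations observed in the population become the predictor variables for the modified GBLUP.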

Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium

Procedia PDF Downloads 138
425 Assessment of Water Reuse Potential in a Metal Finishing Factory

Authors: Efe Gumuslu, Guclu Insel, Gülten Yuksek, Nilay Sayi Ucar, Emine Ubay Cokgor, Tuğba Olmez Hanci, Didem Okutman Tas, Fatoş Germirli Babuna, Derya Firat Ertem, Ökmen Yildirim, Özge Erturan, Betül Kirci

Abstract:

Although water reclamation and reuse are inseparable parts of the sustainable production concept all around the world, current levels of reuse constitute only a small fraction of the total volume of industrial effluents. Nowadays, within the perspective of serious climate change, wastewater reclamation and reuse practices should be considered a requirement. The industrial sector is one of the largest users of water sources. The OECD Environmental Outlook to 2050 predicts that global water demand for manufacturing will increase by 400% from 2000 to 2050, much more than for any other sector. The metal finishing industry requires large amounts of water during manufacturing. Therefore, actions to improve wastewater treatment and reuse should be undertaken on both economic and environmental sustainability grounds. Process wastewater can be reused for more purposes if appropriate treatment systems are installed to treat it to the required quality level. Recent studies have shown that membrane separation techniques may help attain a water quality that allows recycling back into the process. The metal finishing factory where this study was conducted is one of the biggest white-goods manufacturers in Turkey. The sheet metal parts used in cooker production are exposed to consecutive surface pre-treatment processes: degreasing, rinsing, nanoceramic coating, and deionized rinsing. The wastewater-generating processes in the factory are the enamel coating, painting, and styrofoam processes. The factory's main water source is well water. While part of the well water is used directly in the processes after resin treatment, another portion is directed to reverse osmosis treatment to obtain the water quality required for the enamel coating and painting processes.
In addition to these processes, another potential water source is rainwater (3,660 tons/year). In this study, process profiles as well as pollution profiles were assessed through a detailed quantitative and qualitative characterization of the wastewater sources generated in the factory. Based on the preliminary results, the main wastewater sources that can be considered for reuse in the processes were determined to be the painting and styrofoam processes.

Keywords: enamel coating, painting, reuse, wastewater

Procedia PDF Downloads 372
424 Development of a Novel Clinical Screening Tool, Using the BSGE Pain Questionnaire, Clinical Examination and Ultrasound to Predict the Severity of Endometriosis Prior to Laparoscopic Surgery

Authors: Marlin Mubarak

Abstract:

Background: Endometriosis is a complex, disabling disease mainly affecting young females of reproductive age. The aim of this project is to generate a diagnostic model to predict the severity and stage of endometriosis prior to laparoscopic surgery. This will help improve the pre-operative diagnostic accuracy for stage 3 and 4 endometriosis and, as a result, allow referral of the relevant women to a specialist centre for complex laparoscopic surgery. The model is based on the British Society for Gynaecological Endoscopy (BSGE) pain questionnaire, clinical examination, and ultrasound scan. Design: This is a prospective observational study in which women completed the BSGE pain questionnaire, a BSGE requirement. As part of the routine preoperative assessment, patients also had a routine ultrasound scan, and an MRI was performed when recto-vaginal or deep infiltrating endometriosis was suspected. Setting: Luton & Dunstable University Hospital. Patients: Symptomatic women (n = 56) scheduled for laparoscopy due to pelvic pain. Ages ranged from 17 to 52 years (mean 33.8 years, SD 8.7 years). Interventions: None outside the recognised and established endometriosis centre protocol set up by the BSGE. Main Outcome Measure(s): Sensitivity and specificity of endometriosis diagnosis predicted by symptoms based on the BSGE pain questionnaire, clinical examination, and imaging. Findings: The prevalence of diagnosed endometriosis was 76.8%, and the prevalence of advanced-stage disease was 55.4%. Deep infiltrating endometriosis (DIE) in various locations was diagnosed in 32/56 women (57.1%), and some had DIE involving several locations. Logistic regression analysis was performed on 36 clinical variables to create a simple clinical prediction model. After creating the scoring system using variables with P < 0.05, the model was applied to the whole dataset. The sensitivity was 83.87% and the specificity 96%.
The positive likelihood ratio was 20.97 and the negative likelihood ratio was 0.17, indicating that the model has a good predictive value and could be useful in predicting advanced stage endometriosis. Conclusions: This is a hypothesis-generating project with one operator, but future proposed research would provide validation of the model and establish its usefulness in the general setting. Predictive tools based on such model could help organise the appropriate investigation in clinical practice, reduce risks associated with surgery and improve outcome. It could be of value for future research to standardise the assessment of women presenting with pelvic pain. The model needs further testing in a general setting to assess if the initial results are reproducible.
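The reported screening metrics follow directly from a 2x2 confusion table. As an illustration, the counts below (26 true positives, 5 false negatives, 24 true negatives, 1 false positive) form a hypothetical table consistent with the sensitivity, specificity and likelihood ratios quoted above; they are not the study's raw data:

```python
# Sketch: deriving screening metrics from a 2x2 confusion table.
# The counts are illustrative values consistent with the reported figures,
# not the study's actual data.

def screening_metrics(tp, fn, tn, fp):
    """Return sensitivity, specificity and likelihood ratios."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)   # LR+ = sens / (1 - spec)
    lr_neg = (1 - sensitivity) / specificity   # LR- = (1 - sens) / spec
    return sensitivity, specificity, lr_pos, lr_neg

sens, spec, lrp, lrn = screening_metrics(tp=26, fn=5, tn=24, fp=1)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} LR+={lrp:.2f} LR-={lrn:.2f}")
```

With these counts the function reproduces the abstract's values (sensitivity 83.87%, specificity 96%, LR+ 20.97, LR- 0.17).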

Keywords: deep endometriosis, endometriosis, minimally invasive, MRI, ultrasound

Procedia PDF Downloads 350
423 Application of Geosynthetics for the Recovery of Located Road on Geological Failure

Authors: Rideci Farias, Haroldo Paranhos

Abstract:

The present work deals with the use of a drainage geocomposite as a deep drainage element and a geogrid to reinforce the base of the embankment supporting the road pavement over geological faults on a stretch of the TO-342 Highway, between the cities of Miracema and Miranorte, in the State of Tocantins, Brazil, which for many years was the main link between TO-010 and BR-153 beyond the city of Palmas, also in Tocantins. For this application, geotechnical and geological studies were carried out by means of SPT percussion drilling and rotary drilling in order to understand the problem, identifying the type of faults and the filling material and defining the water table. According to these studies, the defined route crosses a fault zone longitudinal to the roadway, with strong breaking/fracturing, presence of voids, intense alteration and advanced argilization of the rock, with parts of the faults filled by organic, compressible soils leached from other horizons. As a geotechnical aggravating factor, this geology presents a medium of high hydraulic head and very low penetration resistance. For more than 20 years, the region presented constant excessive deformations in the upper layers of the pavement, which persisted after routine services of regularization, reconformation and re-compaction of the layers and application of the asphalt coating. The faults quickly propagated to the surface of the asphalt pavement, generating a longitudinal shear and forming steps (unevenness) of close to 40 cm, causing numerous accidents and discomfort to drivers, since the alignment lay on a horizontal curve. Several projects were presented to the region's highway department to solve the problem.
Due to the need for partial closure of the roadway and the short execution time, the use of geosynthetics was proposed; the solution adopted took into account the movement of the existing geological faults and the position of the water table relative to the several pavement layers and the faults. In order to avoid any flow of water into the embankment body and the fault filling material, a drainage curtain was executed at 4.0 m depth with a drainage geocomposite and, as a reinforcement element inhibiting possible propagation of the faults, a geogrid of 200 kN/m tensile strength was inserted at the base of the reconstituted embankment. Recent evaluations, after 13 years of application of the solution, show the efficiency of the technique used, supported by the geotechnical studies carried out in the area.

Keywords: geosynthetics, geocomposite, geogrid, road, recovery, geological failure

Procedia PDF Downloads 167
422 A Review of Gas Hydrate Rock Physics Models

Authors: Hemin Yuan, Yun Wang, Xiangchun Wang

Abstract:

Gas hydrate is drawing attention because its worldwide reserves are enormous, almost twice the conventional hydrocarbon reserves, making it a potential alternative source of energy. It is widely distributed in permafrost and on continental shelves, and many countries have launched national programs for investigating gas hydrate. Gas hydrate is mainly explored through seismic methods, which include bottom simulating reflectors (BSR), amplitude blanking, and polarity reversal. These seismic methods are effective at finding gas hydrate formations but usually carry large uncertainties when used to invert the micro-scale petrophysical properties of the formations, due to lack of constraints. Rock physics modeling links the micro-scale structures of the rocks to the macro-scale elastic properties and can work as an effective constraint for the seismic methods. A number of rock physics models have been proposed for gas hydrate modeling, addressing different mechanisms and applications. However, these models are generally not well classified, and it can be confusing to determine the appropriate model for a specific study. Moreover, since the modeling usually involves multiple models and steps, it is difficult to determine the sources of uncertainty. To solve these problems, we summarize the developed models and methods and classify them in four ways: according to the hydrate micro-scale morphology in sediments, the purpose of reservoir characterization, the stage of gas hydrate generation, and the lithology of the hosting sediments. Some sub-categories may overlap, but they have different priorities. We also analyze the priorities of the different models, point out their shortcomings, and explain their appropriate application scenarios.
Moreover, by comparing the models, we summarize a general modeling workflow, which includes forming the rock matrix, generating the dry rock frame, mixing the pore fluids, and finally substituting the fluid into the rock frame. These procedures have been widely used in gas hydrate modeling and have been confirmed to be effective. We also analyze the potential sources of uncertainty in each modeling step, which enables the uncertainties in the modeling to be clearly recognized. In the end, we explicate the general problems of the current models, including the influences of pressure and temperature, pore geometry, hydrate morphology, and rock structure change during gas hydrate dissociation and re-generation. We also point out that attenuation is severely affected by gas hydrate in sediments and may work as an indicator to map gas hydrate concentration. Our work classifies rock physics models of gas hydrate into different categories, generalizes the modeling workflow, and analyzes the modeling uncertainties and potential problems, which can facilitate the rock physics characterization of gas hydrate-bearing sediments and provide hints for future studies.
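The workflow summarized in this abstract ends with fluid substitution in the rock frame, a step commonly performed with Gassmann's equation. A minimal sketch of that final step, with illustrative moduli and porosity not calibrated to any hydrate dataset:

```python
# Minimal sketch of the fluid-substitution step using Gassmann's equation.
# All numeric inputs are illustrative placeholders.

def gassmann_k_sat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus (GPa) from dry-frame, mineral and fluid moduli."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Example: quartz-dominated frame (K_min ~ 36 GPa), brine (K_fl ~ 2.2 GPa),
# a soft dry frame of 4 GPa and 35% porosity.
k_sat = gassmann_k_sat(k_dry=4.0, k_min=36.0, k_fl=2.2, phi=0.35)
print(f"K_sat = {k_sat:.2f} GPa")
```

As expected, the saturated modulus exceeds the dry-frame modulus, since the pore fluid stiffens the rock's response to compression.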

Keywords: gas hydrate, rock physics model, modeling classification, hydrate morphology

Procedia PDF Downloads 152
421 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that researchers can consider modeling the folding of a protein or even simulating an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data.
It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The paper also indicates solutions to optimization problems and the benefits of Big Data for computational biology, and surveys the current state of the art and future generations of HPC computing with Big Data.
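The all-to-all comparison workload mentioned above can be sketched as a simple pairwise kernel distributed over worker processes. The sequences and the similarity measure below are invented for illustration; real pipelines would use alignment scores over far larger datasets, which is exactly where the quadratic pair count makes HPC necessary:

```python
# Sketch of an all-to-all pairwise comparison parallelized over processes.
# The sequences and the similarity measure are illustrative, not a real
# bioinformatics pipeline.
from itertools import combinations
from multiprocessing import Pool

def hamming_similarity(pair):
    """Fraction of matching positions between two equal-length sequences."""
    a, b = pair
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

sequences = ["ACGTACGT", "ACGTTCGT", "TTGTACGA", "ACGAACGT"]

if __name__ == "__main__":
    pairs = list(combinations(sequences, 2))   # n*(n-1)/2 comparisons
    with Pool(processes=2) as pool:
        scores = pool.map(hamming_similarity, pairs)
    for (a, b), s in zip(pairs, scores):
        print(a, b, round(s, 3))
```

For n sequences the pair count grows as n squared, so doubling the dataset roughly quadruples the work, which is why these "well-behaved" polynomial algorithms still demand parallel hardware at genomic scale.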

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 360
420 The Dark Side of the Fight against Organised Crime

Authors: Ana M. Prieto del Pino

Abstract:

As is well known, UN Convention against Illicit Traffic in Narcotic Drugs and Psychotropic Substances (1988) was a landmark regarding the seizure of proceeds of crime. Depriving criminals of the profits from their activity became a priority at an international level in the fight against organised crime. Enabling confiscation of proceeds of illicit traffic in narcotic drugs and psychotropic substances, criminalising money laundering and confiscating the proceeds thereof are the three measures taken in order to achieve that purpose. The beginning of 21st century brought the declaration of war on corruption and on the illicit enjoyment of the profits thereof onto the international scene. According to the UN Convention against Transnational Organised Crime (2000), States Parties should adopt the necessary measures to enable the confiscation of proceeds of crime derived from offences (or property of equivalent value) and property, equipment and other instrumentalities used in offences covered by that Convention. The UN Convention against Corruption (2003) states asset recovery explicitly as a fundamental principle and sets forth measures aiming at the direct recovery of property through international cooperation in confiscation. Furthermore, European legislation has made many significant strides forward in less than twenty years concerning money laundering, confiscation, and asset recovery. Crime does not pay, let there be no doubt about it. Nevertheless, we must be very careful not to sing out of tune with individual rights and legal guarantees. On the one hand, innocent individuals and businesses must be protected, since they should not pay for the guilty ones’ faults. On the other hand, the rule of law must be preserved and not be tossed aside regarding those who have carried out criminal activities. 
An in-depth analysis of judicial decisions on money laundering and confiscation of proceeds of crime, issued by European national courts and by the European Court of Human Rights in the last decade, has been carried out from the perspective of human rights, legal guarantees and basic principles of criminal law. The study has revealed violations of the right to property and of the proportionality principle, and the infringement of basic principles of states' domestic substantive and procedural criminal law systems. The most relevant ones have to do with the punishment of money laundering committed through negligence, non-conviction-based confiscation and a too far-reaching interpretation of the notion of 'proceeds of crime'. Almost everything in life has a bright and a dark side. Confiscation of criminal proceeds and asset recovery are no exception to this rule.

Keywords: confiscation, human rights, money laundering, organized crime

Procedia PDF Downloads 138
419 The Challenge of Assessing Social AI Threats

Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi

Abstract:

Article 9 of the European Union (EU) Artificial Intelligence (AI) Act requires that risk management of AI systems include both technical and human oversight, while NIST's AI Risk Management Framework (Appendix C) and ENISA's AI framework recommendations state that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats arising in social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, lack of explainability (e.g., model complexity that is challenging for stakeholders to grasp) leads to misunderstandings, biases, and erroneous decisions, which in turn impact the privacy, security, and accountability of AI systems. Based on NIST's four fundamental criteria for explainability, explainability threats can be classified into four (4) sub-categories: a) lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs; b) lack of understandability: explanations offered by systems should be comprehensible to individual users; c) lack of accuracy: the provided explanation should accurately represent the system's process of generating outputs; d) out of scope: the system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors. When present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of AI systems. Socially related AI threats are recognized by various initiatives (e.g., the EU Ethics Guidelines for Trustworthy AI), standards (e.g., ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems) and EU legislation (e.g.,
the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, and the Cyber Resilience Act). Measuring social threats, estimating the risks they pose to AI systems and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests, and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.
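As a purely hypothetical illustration of the idea of enhancing a severity score with a psychometric scale, the sketch below blends a base severity value (0-10, as in CVSS-style scoring) with a normalized human-factor score (0-1). The blending function and weight are invented for illustration and are not outputs of the FAITH or THEMIS projects:

```python
# Hypothetical sketch: adjusting a base vulnerability severity score with a
# normalized human-factor (psychometric) measurement. The weighting scheme
# is invented, not a published method.

def adjusted_severity(base_score, human_factor, weight=0.3):
    """Blend a base severity (0-10) with a human-factor exposure (0-1)."""
    if not (0.0 <= base_score <= 10.0 and 0.0 <= human_factor <= 1.0):
        raise ValueError("scores out of range")
    # Raise severity toward the 10.0 ceiling in proportion to the
    # human-factor exposure, so the result stays within 0-10.
    return min(10.0, base_score + weight * human_factor * (10.0 - base_score))

print(adjusted_severity(7.5, 0.8))
```

The design keeps the adjusted score on the same 0-10 scale, so it can sit alongside existing severity metrics; a zero human-factor score leaves the base score unchanged.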

Keywords: social threats, artificial intelligence, mitigation, social experiment

Procedia PDF Downloads 57
418 A Descriptive Study of Turkish Straits System on Dynamics of Environmental Factors Causing Maritime Accidents

Authors: Gizem Kodak, Alper Unal, Birsen Koldemir, Tayfun Acarer

Abstract:

The Turkish Straits System, which consists of the Istanbul Strait (Bosphorus), the Canakkale Strait (Dardanelles) and the Marmara Sea, occupies a strategic location in international maritime traffic, as it is a unique waterway between the Mediterranean Sea, the Black Sea and the Aegean Sea. This area therefore has great importance, since it is the only waterway between the Black Sea countries and the rest of the world. The Turkish Straits System is subject to hazardous environmental factors and hosts more vessels every day as world trade develops, so accident risks expand day by day. Today, many precautions have been taken to ensure safe navigation and to prevent maritime accidents, and international standards are followed to avoid them. Despite this, the environmental factors that affect this area trigger maritime accidents and threaten vessels with new accident risks, with different hazards in different months. This descriptive study consists of temporal and spatial analyses of the environmental factors causing maritime accidents, and aims to contribute to navigational safety by characterizing the monthly and regional behaviour of these variables. In this context, two data sets were created, one of environmental factors and one of accidents. Covering the accidents in the region between 2001 and 2017, the study examines the months and places of the accidents together with the environmental factor variables. Environmental factor variables are categorized as dynamic and static. Dynamic factors are meteorological and oceanographical, while static factors are the geological features that threaten navigational safety through geometric restrictions. The meteorological variables considered are wind direction, wind speed, wave height and visibility; the circulation and properties of the water masses in the system are studied as oceanographical properties.
At the end of the study, the influential meteorological and oceanographical parameters in the region are presented by month and by region, yielding the monthly, seasonal and regional distributions of the accidents. Through these analyses, the Turkish Straits System, which connects the Black Sea countries with the rest of the world and is one of the most important arteries of world trade, is analyzed in its temporal and spatial dimensions with respect to the causes of accidents, and the environmental factor dynamics causing maritime accidents are presented.
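The monthly and regional aggregation described above can be sketched as a frequency count over coded accident records. The records below are invented examples, not the 2001-2017 dataset used in the study:

```python
# Illustrative sketch: counting coded accident records by month and region.
# The records are invented examples, not the study's dataset.
from collections import Counter

accidents = [
    {"month": "Jan", "region": "Istanbul Strait", "factor": "visibility"},
    {"month": "Jan", "region": "Canakkale Strait", "factor": "wind"},
    {"month": "Feb", "region": "Istanbul Strait", "factor": "current"},
    {"month": "Jan", "region": "Istanbul Strait", "factor": "wind"},
]

by_month = Counter(a["month"] for a in accidents)
by_region = Counter(a["region"] for a in accidents)
print(by_month.most_common(1))   # month with the most recorded accidents
print(by_region.most_common(1))  # region with the most recorded accidents
```

Crossing the same counts against the coded environmental-factor field would give the monthly and regional factor distributions the abstract describes.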

Keywords: descriptive study, environmental factors, maritime accidents, statistics

Procedia PDF Downloads 197
417 Building Environmental Citizenship in Spain: Urban Movements and Ecologist Protest in Las Palmas De Gran Canaria, 1970-1983

Authors: Juan Manuel Brito-Diaz

Abstract:

The emergence of urban environmentalism in Spain is related to the processes of economic transformation and growing urbanization that occurred during the end of the Franco regime and the democratic transition. This paper analyzes urban environmental mobilizations and their impacts as relevant democratizing agents in the processes of political change in cities. This is an under-researched topic, and studies on environmental movements in Spain have paid little attention to it. The research takes as its starting point the close link between democratization and environmentalism, since it considers that environmental conflicts are largely a consequence of democratic problems, and that the impacts of environmental movements are directly linked to democratization. The study argues that the environmental movements that emerged in Spain at the end of the dictatorship and during the democratic transition are an important part of the broad and complex associative fabric that promoted the democratization process. The research investigates environmental protest in Las Palmas de Gran Canaria, the most important city in the Canary Islands, between 1970 and 1983, coinciding with the last local governments of the dictatorship and the first democratic city councils. As a case study, it opens up the possibility of asking multiple specific questions and assessing each of the responses obtained. Although several research methodologies have been applied, such as the analysis of historical archive documentation and oral history interviews, the study relies mainly on Protest Event Analysis (PEA), a methodology widespread in the sociology of social movements but little used by social historians.
This methodology, which consists of generating a catalog of protest events by coding data on previously established variables, has made it possible to map, analyze and interpret the occurrence of protests over time and space, and their associated factors, through content analysis. For data collection, news items from local newspapers provided a large enough sample to analyze the properties of social protest (frequency, size, demands, forms, organizers, etc.) and to relate them to other information on political structures and repertoires of mobilization, encouraging the establishment of connections between protest and the political impacts of urban movements. Finally, the study argues that the environmental movements of this period were essential to the construction of the new democratic city in Spain, not only because they established the issues of sustainability and urban environmental justice on the public agenda, but also because they proposed that conflicts arising from such matters should ultimately be resolved through public deliberation and citizen participation.
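A PEA catalog of the kind described above can be sketched as a list of coded event records aggregated on one variable. The field names and records below are invented for illustration, not drawn from the study's newspaper sample:

```python
# Sketch of a Protest Event Analysis (PEA) catalog: newspaper-derived events
# coded on fixed variables, then aggregated by year. Records are invented.
from collections import defaultdict

events = [
    {"year": 1976, "district": "Guanarteme", "demand": "green space", "size": 200},
    {"year": 1978, "district": "La Isleta", "demand": "sanitation", "size": 450},
    {"year": 1978, "district": "Guanarteme", "demand": "green space", "size": 300},
]

# Frequency of protest events per year, one of the protest properties
# (frequency, size, demands, forms, organizers) a PEA catalog supports.
protests_per_year = defaultdict(int)
for e in events:
    protests_per_year[e["year"]] += 1

print(dict(protests_per_year))
```

The same record structure supports the other aggregations the method relies on, such as summing the size field per district or tabulating demands against years.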

Keywords: democratization, environmental movements, political impacts, social movements

Procedia PDF Downloads 178