1040 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that are able to use the large amount and variety of data generated during healthcare services every day. More than 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly because the use of advanced technologies improves both the efficiency and the efficacy of healthcare. Software qualified as a medical device is stand-alone software intended to be used for patients for one or more of these specific medical intended uses: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body, without achieving its principal intended action by pharmacological, immunological or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. 
Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract provides an overview of the current regulatory framework, the evolution of the international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510(k), PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 92
1039 Surge in U.S. Citizens' Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale
Authors: Marco Sewald
Abstract:
Compared with the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to the number of immigrants, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many renunciants not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. Since there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose as to why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. Since it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the expatriation process, the chosen methodology is Structural Equation Modeling (SEM), in the first step by re-using surveys conducted in recent years by different researchers within the population of U.S. citizens residing abroad. These surveys question the personal situation in the context of tax, compliance, citizenship, and likelihood to repatriate to the U.S. In general, SEM allows: (1) representing, estimating, and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; (4) modeling measurement error: the degree to which observable variables describe latent variables. 
Moreover, SEM is appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered a high correlation, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one desired result. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and gain the desired results.
Keywords: expatriation of U.S. citizens, SEM, structural equation modeling, validating
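The two parts of an SEM named in the abstract (outer and inner model) are conventionally written in LISREL-style matrix notation; the abstract does not give its equations, so the following is only the standard textbook form:

```latex
% Measurement (outer) model: observed indicators load on latent variables,
% with measurement-error terms \delta and \varepsilon
x = \Lambda_x \, \xi + \delta, \qquad
y = \Lambda_y \, \eta + \varepsilon
% Structural (inner) model: relations among endogenous (\eta) and
% exogenous (\xi) latent variables, with disturbance \zeta
\eta = B \eta + \Gamma \xi + \zeta
```

Here the loading matrices \(\Lambda_x, \Lambda_y\) constitute the outer model to be validated, while \(B\) and \(\Gamma\) carry the causal relationships of the inner model.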
Procedia PDF Downloads 222
1038 Identifying and Understanding Pragmatic Failures in Portuguese as a Foreign Language by Chinese Learners in Macau
Authors: Carla Lopes
Abstract:
It is clear nowadays that the proper performance of different speech acts is one of the most difficult obstacles that a foreign language learner has to overcome to be considered communicatively competent. This communication presents the results of an investigation into the pragmatic performance of Portuguese language students at the University of Macau. The research discussed herein is based on a survey consisting of fourteen speaking situations to which the participants must respond in writing, covering different types of speech acts: apology, response to a compliment, refusal, complaint, disagreement, and the understanding of the illocutionary force of indirect speech acts. The responses were classified on a five-level Likert scale (quantified from 1 to 5) according to their suitability for the particular situation. In general terms, about 45% of the respondents' answers were pragmatically competent, 10% were acceptable, and 45% showed weaknesses at the socio-pragmatic competence level. Given that linguistic deviations were not taken into account, we can conclude that the faults are of cultural origin. It is natural that between orthogonal cultures, such as Chinese and Portuguese, there are failures of this type, scarcely resolved in the four years of the undergraduate program. The target population, native speakers of Cantonese or Mandarin, had their first contact with the English language before joining the Bachelor of Portuguese Language. An analysis of the socio-pragmatic failures in the respondents' answers suggests that many of them are due to a lack of cultural knowledge. The students try to compensate for this either by using their native culture or by resorting to a Western culture that they consider close to the Portuguese one, namely English or US culture, previously studied and widely present in the media and on the internet. 
This phenomenon, known as 'pragmatic transfer', can result in linguistic behavior that may be considered inauthentic or pragmatically awkward. The resulting speech act is grammatically correct but not pragmatically feasible, since it is not suited to the culture of the target language, either because it does not exist there or because the conditions of its use are in fact different. Analysis of the responses also supports the conclusion that these students deviate considerably from the expected, stereotyped behavior of Chinese students. We can speculate that this linguistic behavior is a consequence of Macau's globalization, which shapes the students culturally, makes them more open, and distinguishes them from typical Chinese students.
Keywords: Portuguese foreign language, pragmatic failures, pragmatic transfer, pragmatic competence
Procedia PDF Downloads 212
1037 The Influence of the Soil on the Vegetation of the Luki Biosphere Reserve in the Democratic Republic of Congo
Authors: Sarah Okende
Abstract:
It is universally recognized that the forests of the Congo Basin remain a common good and a complex, insufficiently known ecosystem. Historically and throughout the world, forests have been valued for the multiple products and benefits they provide. In addition to their major role in the conservation of global biodiversity and in the fight against climate change, these forests also play an essential role in regional and global ecology. This is particularly the case of the Luki Biosphere Reserve, a highly diversified evergreen Guinean-Congolese rainforest. Despite efforts toward sustainable management of the reserve, the role played by the soil in shaping its vegetation does not seem to have attracted the interest of the general public or even of scientists. The Luki Biosphere Reserve is located in the west of the DRC, more precisely in the south-east of the Congolese Mayombe, in the province of Bas-Congo. The vegetation of the Luki Biosphere Reserve is very heterogeneous and diversified. It ranges from grassy formations to semi-evergreen dense humid forests, passing through edaphic formations on hydromorphic soils (aquatic and semi-aquatic vegetation; messicole and segetal vegetation; gascaricole vegetation; young secondary forests with Musanga cecropioides, Xylopia aethiopica, and Corynanthe paniculata; mature secondary forests with Terminalia superba and Hymenostegia floribunda; primary forest with Prioria balsamifera; and climax forests with Gilbertiodendron dewevrei and Gilletiodendron kisantuense). Field observations and a reading of previous and up-to-date work carried out in the Luki Biosphere Reserve are the methodological approaches of this study, the aim of which is to show the impact of soil types in determining the varieties of vegetation. 
The results obtained show that the four types of soil present (purplish-red soils developing on amphibolites; red soils developed on gneisses; yellow soils occurring on gneisses and quartzites; and alluvial soils developed on recent alluvium) have a major influence, alongside other environmental factors, on the different facies of the vegetation of the Luki Biosphere Reserve. In conclusion, the Luki Biosphere Reserve is characterized by a wide variety of biotopes determined by the nature of the soil, the relief, the microclimates, human action, and the hydrography. Overall management (soil, biodiversity) in the Luki Biosphere Reserve is important for maintaining the ecological balance.
Keywords: soil, biodiversity, forest, Luki, rainforest
Procedia PDF Downloads 84
1036 The Impact of Task Type and Group Size on Dialogue Argumentation between Students
Authors: Nadia Soledad Peralta
Abstract:
Within the framework of socio-cognitive interaction, argumentation is understood as a psychological process that supports and induces reasoning and learning. Most authors emphasize the great potential of argumentation for negotiating contradictions and complex decisions, so argumentation is a target for researchers who highlight the importance of social and cognitive processes in learning. In the context of social interaction among university students, different types of arguments are analyzed according to group size (dyads and triads) and type of task (reading of frequency tables, causal explanation of physical phenomena, decisions regarding moral dilemma situations, and causal explanation of social phenomena). Eighty-nine first-year social sciences students of the National University of Rosario participated. Two groups were formed from the results of a pre-test that ensured heterogeneity of points of view between participants. Group 1 consisted of 56 participants (performing in dyads, 28 in total), and group 2 consisted of 33 participants (performing in triads, 11 in total). A quasi-experimental design was used in which the effects of the two variables (group size and type of task) on argumentation were analyzed. Three types of argumentation are described: authentically dialogical argumentative resolutions, individualistic argumentative resolutions, and non-argumentative resolutions. The results indicate that individualistic arguments prevail in dyads: although people express their own arguments, there is no authentic argumentative interaction, and there are few reciprocal evaluations and counter-arguments. By contrast, authentically dialogical argument prevails in triads, showing constant feedback between participants' points of view. It was observed that, in general, the type of task generates specific types of argumentative interactions. 
However, it is possible to emphasize that authentically dialogical arguments predominate in logical tasks, whereas individualistic or pseudo-dialogical ones are more frequent in opinion tasks. Nevertheless, these relationships between task type and argumentative mode are best clarified in an interactive analysis based on group size. Finally, it is important to stress the value of dialogical argumentation in educational domains. The argumentative function not only allows metacognitive reflection on one's own point of view but also allows people to benefit from exchanging points of view in interactive contexts.
Keywords: socio-cognitive interaction, argumentation, university students, size of the group
Procedia PDF Downloads 84
1035 Visualization of Chinese Genealogies with Digital Technology: A Case of the Genealogy of the Wu Clan in the Village of Gaoqian
Authors: Huiling Feng, Jihong Liang, Xiaodong Gong, Yongjun Xu
Abstract:
Recording history is a tradition in ancient China. A record of a dynasty makes a dynastic history; a record of a locality makes a chorography; and a record of a clan makes a genealogy. The three combined depict a complete national history of China, both macroscopically and microscopically, with genealogy serving as the foundation. Genealogy in ancient China traces back to family trees and pedigrees in early and medieval historical times. After the Song Dynasty, civil society gradually emerged, and the Emperor had to allow people from the same clan to live together and hold ancestor worship activities; hence the compilation of genealogies became popular in society. Since then, genealogies, regarded as being as important as ancestral and religious temples in a traditional village even today, have played a primary role in the identification of a clan and in maintaining local social order. Chinese genealogies are rich in documentary materials. Take the Genealogy of the Wu Clan in Gaoqian as an example. Gaoqian is a small village in Xianju County of Zhejiang Province. The Genealogy of the Wu Clan in Gaoqian is composed of a whole set of materials, from a Foreword to Family Trees, Family Rules, Family Rituals, Family Graces and Glories, an Ode to an Ancestor's Portrait, a Manual for the Ancestor Temple, documents on great men of the clan, works written by learned men of the clan, contracts concerning landed property, and even notes on tombs. Literally speaking, the genealogy, with detailed information from every aspect recorded according to stylistic rules, is indeed the carrier of the entire culture of a clan. However, due to their scarcity and the difficulty of reading them, genealogies seldom fall within the horizons of ordinary people. This paper, focusing on the case of the Genealogy of the Wu Clan in the Village of Gaoqian, intends to reproduce a digital genealogy by use of ICTs, through an in-depth interpretation of the literature and field investigation in Gaoqian Village. 
Based on this, the paper goes further to explore general methods for transferring physical genealogies to digital ones and ways of visualizing the clanism culture embedded in the genealogies with a combination of digital technologies, such as family-tree software, multimedia narratives, animation design, GIS applications, and e-book creators.
Keywords: clanism culture, multimedia narratives, genealogy of Wu Clan, GIS
Procedia PDF Downloads 222
1034 “I” on the Web: Social Penetration Theory Revised
Authors: Dionysis Panos, Dept. of Communication and Internet Studies, Cyprus University of Technology
Abstract:
The widespread use of new media, and particularly social media, through fixed or mobile devices has changed in a staggering way our perception of what is "intimate" and "safe" and what is not in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, and new media and social networks research, as well as from the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending mechanisms of personal information management in interpersonal communication, which can be applied to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries over almost a decade. The main conclusions include: (1) There is a clear and evidenced shift in users' perception of the degree of "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras; the role of social media in this shift was catalytic. (2) Basic Web 2.0 applications changed dramatically the nature of the Internet itself, transforming it from a place reserved for "elite users / technical knowledge keepers" into a place of "open sociability" for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of the "audience" we address in interpersonal communication. 
The previous "general and unknown audience" of personal home pages has been converted into an "individual and personal" audience chosen by the user under various criteria. (4) The way we negotiate the "private" and "public" nature of personal information has changed fundamentally. (5) The different features of the mediated environment of online communication and the critical changes that have occurred since the advent of Web 2.0 lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model is proposed here for understanding the way interpersonal communication evolves, based on a revision of social penetration theory.
Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information
Procedia PDF Downloads 374
1033 Monolithic Integrated GaN Resonant Tunneling Diode Pair with Picosecond Switching Time for High-Speed Multiple-Valued Logic Systems
Authors: Fang Liu, JiaJia Yao, GuanLin Wu, ZuMaoLi, XueYan Yang, HePeng Zhang, ZhiPeng Sun, JunShuai Xue
Abstract:
The explosively increasing needs of data processing and information storage strongly drive the advancement from binary logic systems to multiple-valued logic systems. Inherent negative differential resistance, ultra-high-speed switching, and robust anti-irradiation capability make the III-nitride resonant tunneling diode one of the most promising candidates for multiple-valued logic devices. Here we report the monolithic integration of GaN resonant tunneling diodes in series to realize multiple negative differential resistance regions, obtaining at least three stable operating states. A multiply-by-three circuit is achieved by this combination, increasing the frequency of the input triangular wave from f0 to 3f0. The resonant tunneling diodes are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates, comprising double barriers and a single quantum well, both controlled at the atomic level. A device with a peak current density of 183 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed, which is the best result reported for nitride-based resonant tunneling diodes. Microwave oscillation at room temperature was observed, with a fundamental frequency of 0.31 GHz and an output power of 5.37 μW, verifying the high repeatability and robustness of our devices. Switching behavior measurements were successfully carried out, featuring rise and fall times on the order of picoseconds, suitable for use in high-speed digital circuits. Limited by the measuring equipment and the layer structure, the switching time can be further improved. 
In general, this article presents a novel nitride device with multiple negative differential resistance regions driven by the resonant tunneling mechanism, which can be used in the high-speed multiple-valued logic field with reduced circuit complexity, demonstrating a new way for nitride devices to break through the limitations of binary logic.
Keywords: GaN resonant tunneling diode, negative differential resistance, multiple-valued logic system, switching time, peak-to-valley current ratio
Procedia PDF Downloads 102
1032 Understanding How Posting and Replying Behaviors in Social Media Differentiate the Social Capital Cultivation Capabilities of Users
Authors: Jung Lee
Abstract:
This study identifies how the cultivation capabilities of social capital influence the overall attitudes of social media users and how these influences differ across user groups. First, the cultivation capabilities of social capital are identified from three aspects: social capital accessibility, potentiality, and sensitivity. These three types of social capital acquisition capabilities collectively represent how social media users perceive the social media environment in terms of possibilities for social capital creation. These three capabilities are hypothesized to influence social media satisfaction and continuing use intention. Next, two essential activities in social media are identified, namely posting and replying, to categorize social media users based on behavioral patterns; various social media activities consist of combinations of these two basic activities. Posting represents the broadcasting aspect of social media, whereas replying represents its communicative aspect. We categorize users into four groups, from communicators to observers, by using these two behaviors to develop a usage pattern matrix. By applying the usage pattern matrix to the capability model, we argue that posting behavior generally has a positive moderating effect on the attitudes of social media users, whereas replying behavior occasionally exhibits a negative moderating effect. These different moderating effects of posting and replying behavior are explained by the different levels of social capital sensitivity and expectation of individuals. When a person highly expects social capital from social media, he or she will post actively. However, when one is highly sensitive to social capital, he or she will actively respond and reply to the postings of other people, because such acts create longer and more interactive relationships. A total of 512 social media users were invited to answer the survey. 
They were asked about their attitudes toward social media and how much social capital they expect through its use, and they were asked to report their general social media usage pattern for user categorization. The results confirmed that most of the hypotheses were supported. The three types of social capital cultivation capabilities are significant determinants of social media attitudes, and the two social media activities (i.e., posting and replying) exhibited different moderating effects on attitudes. This study offers the following discussion. First, three types of social capital cultivation capabilities were identified. Despite the numerous concerns about social media, such as whether it is a decent and real environment that produces social capital, this study confirms that people explicitly expect and experience social capital value from social media. Second, posting and replying activities are two building blocks of social media activity. These two activities are useful in explaining the different attitudes of social media users and in predicting future usage.
Keywords: social media, social capital, social media satisfaction, social media use intention
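A moderating effect of the kind described here is conventionally shown by comparing the capability-to-attitude relationship across behavioral groups (or, equivalently, by an interaction term in a regression). The abstract gives no estimation details, so the sketch below uses made-up variable names and data purely to illustrate the logic: if the satisfaction-on-capability slope is steeper for active posters than for non-posters, posting positively moderates the effect.

```python
def slope(xs, ys):
    # Ordinary least-squares slope of y on x (pure Python, no libraries).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Hypothetical respondents: (capability score, poster flag, satisfaction).
# All numbers are illustrative placeholders, not the study's data.
data = [
    (1, 0, 2.0), (2, 0, 2.5), (3, 0, 3.0), (4, 0, 3.5),   # non-posters
    (1, 1, 2.0), (2, 1, 3.0), (3, 1, 4.0), (4, 1, 5.0),   # active posters
]
low  = [(c, s) for c, p, s in data if p == 0]
high = [(c, s) for c, p, s in data if p == 1]

b_low  = slope([c for c, _ in low],  [s for _, s in low])
b_high = slope([c for c, _ in high], [s for _, s in high])
moderation = b_high - b_low   # positive: posting strengthens the effect
print(b_low, b_high, moderation)
```

The same comparison can be packed into a single regression with a capability-by-posting product term; a significantly non-zero coefficient on that term is the moderating effect.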
Procedia PDF Downloads 192
1031 The Relations between Hans Kelsen's Concept of Law and the Theory of Democracy
Authors: Monika Zalewska
Abstract:
Hans Kelsen was a versatile legal thinker whose achievements in the fields of legal theory, international law, and the theory of democracy are remarkable. All of the fields tackled by Kelsen are regarded as part of his “pure theory of law.” While the link between international law and Kelsen’s pure theory of law is apparent, the same cannot be said about the link between the theory of democracy and his pure theory of law. On the contrary, the general thinking concerning Kelsen’s thought is that it can be used to legitimize authoritarian regimes. The aim of this presentation is to address this concern by identifying the common ground between Kelsen’s pure theory of law and his theory of democracy and to show that they are compatible in a way that his pure theory of law and authoritarianism cannot be. The conceptual analysis of the purity of Kelsen’s theory and his goal of creating ideology-free legal science hints at how Kelsen’s pure theory of law and the theory of democracy are brought together. The presentation will first demonstrate that these two conceptions have common underlying values and meta-ethical convictions. Both are founded on relativism and a rational worldview, and the aim of both is peaceful co-existence. Second, it will be demonstrated that the separation of law and morality provides the maximum space for deliberation within democratic processes. The conclusion of this analysis is that striking similarities exist between Kelsen’s legal theory and his theory of democracy. These similarities are grounded in the Enlightenment tradition and its values, including rationality, a scientific worldview, tolerance, and equality. This observation supports the claim that, for Kelsen, legal positivism and the theory of democracy are not two separate theories but rather stem from the same set of values and from Kelsen’s relativistic worldview. Furthermore, three main issues determine Kelsen’s orientation toward a positivistic and democratic outlook. 
The first, which is associated with personality type, is the distinction between absolutism and relativism. The second, which is associated with the values that Kelsen favors in the social order, is peace. The third is legality, which creates the necessary conditions for democracy to thrive and reveals that democracy is capable of fulfilling Kelsen's ideal of law at its fullest. The first two categories exist in the background of Kelsen's pure theory of law, while the latter is an inherent part of Kelsen's concept of law. The analysis of the texts concerning natural law doctrine and democracy indicates that behind the technical language of Kelsen's pure theory of law is a strong concern with the trends that appeared after World War I. Despite his rigorous scientific mind, Kelsen was deeply humanistic; he tried to create a powerful intellectual weapon to provide strong arguments for peaceful coexistence and a rational outlook in Europe. The analysis provided by this presentation facilitates a broad theoretical, philosophical, and political understanding of Kelsen's perspectives and, consequently, urges a strong endorsement of Kelsen's approach to constitutional democracy.
Keywords: Hans Kelsen, democracy, legal positivism, pure theory of law
Procedia PDF Downloads 110
1030 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics
Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic
Abstract:
Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of the shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations are validated with Matlab and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to the mean water level, except in a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical and hydrodynamical model can evaluate the effects of the wind stress exerted on the lake surface, which impacts the lake water level. 
The model can also evaluate the effects of the expected climate change, as manifest in changes to rainfall over the catchment area of Lake Victoria in the future.
Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress
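The lumped water balance underlying these results, with the linear outflow control law the abstract mentions, can be sketched as follows. Only the lake surface area comes from the abstract; rainfall, evaporation, inflow, and the outflow gain k are illustrative placeholders, not the study's calibrated values.

```python
# Lumped water-balance sketch: dh/dt = (P - E) + (Q_in - Q_out) / A,
# with the linear outflow control law Q_out = k * (h - h0), responding
# only to the mean water level h. Parameter values are illustrative.
A    = 68.8e9      # lake surface area, m^2 (68,800 km^2, from the abstract)
P    = 5.0e-3      # rainfall on the lake, m/day (placeholder)
E    = 4.5e-3      # evaporation, m/day (placeholder)
Q_in = 3.0e7       # total river inflow, m^3/day (placeholder)
k    = 1.0e9       # outflow gain, m^3/day per m of level (placeholder)
h0   = 0.0         # reference level for the outflow law, m

h = 0.0            # water level deviation from reference, m
dt = 1.0           # time step, days
for _ in range(20000):                 # integrate to near steady state
    q_out = k * (h - h0)               # linear control law
    h += dt * ((P - E) + (Q_in - q_out) / A)

# At equilibrium the outflow balances net rain plus river inflow:
h_eq = h0 + ((P - E) * A + Q_in) / k
print(round(h, 4), round(h_eq, 4))
```

The time constant of this balance, A/k, is large (here about 69 days, and far larger with realistic outflow gains), which is consistent with the slow dynamics and near-balance of rain and evaporation reported in the abstract.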
Procedia PDF Downloads 227
1029 An Experimental Determination of the Limiting Factors Governing the Operation of High-Hydrogen Blends in Domestic Appliances Designed to Burn Natural Gas
Authors: Haiqin Zhou, Robin Irons
Abstract:
The introduction of hydrogen into local networks may, in many cases, require the initial operation of those systems on natural gas/hydrogen blends, either because of a lack of sufficient hydrogen to allow 100% conversion or because existing infrastructure imposes limitations on the percentage of hydrogen that can be burned before the end-use technologies are replaced. In many systems, the largest number of end-use technologies are small-scale but numerous appliances used for domestic and industrial heating and cooking. In such a scenario, it is important to understand exactly how much hydrogen can be introduced into these appliances before their performance becomes unacceptable, and what imposes that limitation. This study explores a range of significantly higher hydrogen blends and the broad range of factors that might limit operability or environmental acceptability. We present tests from a burner designed for space heating and optimized for natural gas, as blends of increasing hydrogen content (starting from 25%) were burned, and explore the range of parameters that might govern the acceptability of operation. These include gaseous emissions (particularly NOx and unburned carbon), temperature, flame length, stability, and general operational acceptability. The results show emissions, temperature, and flame length as a function of thermal load and percentage of hydrogen in the blend. The relevant application and regulation will ultimately determine the acceptability of these values, so it is important to understand the full operational envelope of the burners in question through the sort of extensive parametric testing we have carried out. The present dataset should represent a useful data source for designers interested in exploring appliance operability. In addition to this, we present data on two factors that may be absolute in determining allowable hydrogen percentages. The first of these is flame blowback. 
Our results show that, for our system, the threshold between acceptable and unacceptable performance lies between 60 and 65 mol% hydrogen. Another factor that may limit operation, and which would be important in domestic applications, is the acoustic performance of these burners. We will describe a range of operational conditions in which hydrogen blend burners produce a loud and invasive ‘screech’. It will be important for equipment designers and users to find ways to avoid or mitigate this if performance is to be deemed acceptable.
Keywords: blends, operational, domestic appliances, future system operation.
Procedia PDF Downloads 31
1028 Assessment of Genetic Variability of Potato Genotypes for Proline Under Salt Stress Conditions
Authors: Elchin Hajiyev, Afet Memmedova Dadash, Sabina Hajiyeva, Aynur Karimova, Ramiz Aliyev
Abstract:
Although potatoes have a wide distribution range, the yield potential of varieties varies greatly depending on the region. Our country is made up of agricultural regions with very different environmental characteristics. In this case, we cannot expect introduced varieties to show the same adaptation to the different conditions of our country. For this reason, varieties with high general adaptability should be used in our country, rather than varieties with special adaptability to certain areas. Soil salinization has become a global problem. Increased salinity has a serious impact on food security by reducing plant productivity. Plants have protective mechanisms of adaptation to salt stress, such as the synthesis of physiologically active substances, resistance to oxidative stress and the oxidation of membrane lipids. One of these substances is free proline. Our study revealed genetic variation in proline accumulation among samples exposed to stress factors. Changes in proline content under stress conditions were studied in 50 samples. There was wide variation across all treatments. The amount of proline varied between 7.2 and 37.7 μM/g under salinity conditions. The lowest level was in the SF33 genotype (1.5 times more than the control (2.5 μM/g)). The highest level of proline under salt stress was in the SF45 genotype (7.25 times higher than the control (32.5 μM/g)). Our studies found that the protective system reacts differently to the influence of stress factors. According to the results obtained on the amount of proline, adaptation mechanisms must be activated more strongly to maintain metabolism and ensure viability in sensitive forms under the influence of stress factors.
At high doses of the salt stressor, a tenfold increase in proline compared to the control indicates significant damage to the plant organism as a result of stress. To prevent such damage, the antioxidant system needs to mobilize quickly and work at full capacity under adverse conditions. An increase in the dose of the salt stress factor in our study caused a greater increase in the amount of free proline in plant tissues. Considering the functions of proline as an osmoprotectant and antioxidant, it was found that the increase in its amount is aimed at protecting the plant from the acute effects of stressors.
Keywords: genetic variability, potato, genotypes, proline, stress
Procedia PDF Downloads 53
1027 Hui as Religious over Ethnic Identity: A Case Study of Muslim Ethnic Interaction in Central Northwest China
Authors: Hugh Battye
Abstract:
In recent years, Muslim identity in China has strengthened against the backdrop of a worldwide Islamic revival. One discussion arising from this has been focused around the Hui, an ethnicity created by the Communist government in the 1950s covering the Chinese speaking 'Sino-Muslims' as opposed to those with their own language. While the term Hui in Chinese has traditionally meant 'Muslim', the strengthening of Hui identity in recent decades has led to a debate among scholars as to whether this identity is primarily ethnically or religiously driven. This article looks at the case of a mixed ethnic community in rural Gansu Province, Central Northwest China, which not only contains the official Hui ethnicity but also members of the smaller Muslim Salar and Bonan minority groups. In analyzing the close interaction between these groups, the paper will argue that, despite government attempts to promote the Hui as an ethnicity within its modern ethnic paradigm, in rural Gansu and the general region, Hui is still essentially seen as a religious identity. Having provided an overview of the historical evolution of the Hui ethnonym in China and presented the views of some of the important scholars involved in the discussion, the paper will then offer its findings based on participant observation and survey work in Gansu. The results will show that, firstly, for the local Muslims, religious identity clearly dominates ethnic identity. On the ground, the term Hui continues to be used as a catch-all term for Muslims, whether they belong to the official 'Hui' nationality or not, and against this backdrop, the ethnic importance of being 'Hui', 'Bonan' or 'Salar' within the Muslim community itself is by contrast minimal. 
Secondly, however, this local Muslim solidarity is not at present pointing towards some kind of national pan-ethnic Islamic movement that could potentially set itself up in opposition to the Chinese government; rather, it is better seen as part of an ongoing negotiation by local Muslims with the state in the context of its ascribed ethnic categories. The findings of this study, from a region where many of the Muslims are more conservative in their beliefs, are not necessarily replicated in other contexts, such as in urban areas and in eastern and southern China; hence, reification of the term Hui as one idea extending all across China should be avoided, whether in terms of a united religious 'ummah' or of a real or imagined 'ethnic group.' Rather, this localized case study seeks to demonstrate ways in which Muslims of rural Central Northwest China are 'being Hui,' as a contribution to the broader discussion on what it means to be Muslim and Chinese in the reform era.
Keywords: China, ethnicity, Hui, identity, Muslims
Procedia PDF Downloads 127
1026 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids
Authors: S. Gariani, I. Shyha
Abstract:
Cutting of titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive generation of heat at the cutting zone and the difficulty of dissipating that heat owing to the relatively low thermal conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids, such as mineral oil-based, synthetic and semi-synthetic fluids, are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants due to a combination of biodegradability, good lubricating properties, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials and working conditions was investigated. A full factorial experimental design was employed, involving 24 tests, to evaluate the influence of process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm; the Vasco1000 cutting fluid presented performance comparable with the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds.
On the other hand, all tool tips exhibited uniform flank wear throughout the cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size when a higher cutting speed was used.
Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions
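The 24-test full factorial design mentioned above can be sketched in a few lines. The factor names and levels below are purely illustrative assumptions (the abstract does not state how its 24 runs decompose; 4 fluids × 2 tools × 3 speeds is one possibility):

```python
from itertools import product

# Hypothetical factor levels, chosen only for illustration -- the abstract
# does not state how its 24 runs decompose (4 x 2 x 3 = 24 is one possibility).
fluids = ["Vasco1000", "VO-blend-A", "VO-blend-B", "mineral-oil"]
tools = ["uncoated coarse-grain WC", "coated WC"]
speeds_m_min = [60, 90, 120]

# Full factorial design: every combination of every factor level is one test run.
runs = [
    {"fluid": f, "tool": t, "speed": v}
    for f, t, v in product(fluids, tools, speeds_m_min)
]

print(len(runs))  # 4 * 2 * 3 = 24 test runs
```

The appeal of a full factorial layout is that every level of every factor is observed against every level of the others, so both main effects and interactions (e.g. fluid × speed) can be estimated from the same 24 runs.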
Procedia PDF Downloads 279
1025 From Abraham to Average Man: Game Theoretic Analysis of Divine Social Relationships
Authors: Elizabeth Latham
Abstract:
Billions of people worldwide profess some feeling of psychological or spiritual connection with the divine. The majority of them attribute this personal connection to the God of the Christian Bible. The objective of this research was to discover what could be known about the exact social nature of these relationships and to see if they mimic the interactions recounted in the Bible; if a worldwide majority believes that the Christian Bible is a true account of God’s interactions with mankind, it is reasonable to assume that the interactions between God and the aforementioned people would be similar to the ones in the Bible. This analysis required the employment of an unusual method of biblical analysis: game theory. Because the research focused on documented social interaction between God and man in scripture, it was important to go beyond text-analysis methods. We used stories from the New Revised Standard Version of the Bible to set up “games” using economics-style matrices featuring each player’s motivations and possible courses of action, modeled after interactions in the Old and New Testaments between the Judeo-Christian God and some mortal person. We examined all relevant interactions for the objectives held by each party and their strategies for obtaining them. These findings were then compared to similar “games” created based on interviews with people subscribing to different levels of Christianity, ranging from barely practicing to clergymen. The range was broad so as to look for a correlation between scriptural knowledge and game-similarity to the Bible. Each interview described a personal experience someone believed they had with God, and matrices were developed to describe each one as a social interaction: a “game” to be analyzed quantitatively. The data showed that in most cases, the social features of God-man interactions in the modern lives of people were like those present in the “games” between God and man in the Bible.
This similarity was referred to in the study as “biblical faith”, and it alone was a fascinating finding with many implications. The even more notable finding, however, was that the degree of game-similarity did not correlate with the amount of scriptural knowledge. Each participant was also surveyed on family background, political stances, general education and scriptural knowledge, and those who had biblical faith were not necessarily the ones who knew the Bible best. Instead, there was a high degree of correlation between biblical faith and family religious observance. It seems that to have a biblical psychological relationship with God, it is more important to have a religious family than to have studied scripture, a surprising insight with massive implications for the practice and preservation of religion.
Keywords: bible, Christianity, game theory, social psychology
Procedia PDF Downloads 157
1024 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm
Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali
Abstract:
Since optimization problems in water resources are complicated by the variety of decision-making criteria and objective functions, it is sometimes impossible, or prohibitively time- and money-consuming, to resolve them through conventional optimization methods. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements; a dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural and water-reservoir-related data, and the geometric characteristics of the reservoir. The system of Dez dam water resources was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the proposed plan. A genetic algorithm, as a metaheuristic method, was applied in order to derive utilization rule curves (defined as a function of reservoir volume). MATLAB software was used to solve the resulting model. Rule curves were first obtained through the genetic algorithm. Then the significance of using rule curves, and of the reduction in decision variables in the system, was determined by simulating the system and comparing the results with the optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the increasing number of variables: a lot of time is required to find an optimum answer and, in some cases, no desirable result is obtained. In this research, defining the rule curves as functions of reservoir volume has been applied as a modern approach to reduce the number of variables.
Water reservoir programming studies have been performed based on basic information, general hypotheses and standards, applying a monthly simulation technique over a statistical period of 30 years. Results indicated that the application of the rule curve prevents extreme shortages and decreases the monthly shortages.
Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir
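As a rough sketch of the approach described above, a genetic algorithm can evolve a monthly release rule that minimizes simulated supply shortages. Everything below (inflows, demands, capacity, GA settings) is invented toy data for illustration, not Dez dam records, and for brevity the rule is indexed on month rather than on reservoir volume:

```python
import random

random.seed(42)

# Toy monthly data (hypothetical, for illustration only -- not Dez dam records).
INFLOW = [90, 80, 70, 60, 50, 40, 40, 50, 60, 70, 80, 90]   # MCM/month
DEMAND = [60, 60, 65, 70, 75, 80, 80, 75, 70, 65, 60, 60]   # MCM/month
CAPACITY = 300.0                                             # MCM

def shortage(rule):
    """Simulate one year; rule[m] is the target release as a fraction of
    that month's demand. Returns total unmet demand (to be minimized)."""
    storage, deficit = CAPACITY / 2, 0.0
    for m in range(12):
        storage += INFLOW[m]
        release = min(storage, rule[m] * DEMAND[m])
        storage = min(storage - release, CAPACITY)   # spill above capacity
        deficit += max(0.0, DEMAND[m] - release)
    return deficit

def evolve(pop_size=40, generations=60):
    """Elitist GA: keep the best half, refill with crossover + mutation."""
    pop = [[random.random() for _ in range(12)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=shortage)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(12)                       # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(12)                         # point mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=shortage)

best = evolve()
print(round(shortage(best), 1))
```

In the study's setting, the fitness evaluation would instead be driven by the 30-year monthly simulation of the actual reservoir, and each chromosome would encode the rule curve's values over reservoir volume rather than a fixed monthly schedule.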
Procedia PDF Downloads 267
1023 Study of Oxidative Processes in Blood Serum in Patients with Arterial Hypertension
Authors: Laura M. Hovsepyan, Gayane S. Ghazaryan, Hasmik V. Zanginyan
Abstract:
Hypertension (HD) is the most common cardiovascular pathology causing disability and mortality in the working population. Most often, heart failure (HF), which is based on myocardial remodeling, leads to death in hypertension. Recently, endothelial dysfunction (EDF), a violation of the functional state of the vascular endothelium, has been assigned a significant role in the structural changes in the myocardium and the occurrence of heart failure in patients with hypertension. It has now been established that tissues affected by inflammation form increased amounts of superoxide radical and NO, which play a significant role in the development and pathogenesis of various pathologies. They mediate inflammation, modify proteins and damage nucleic acids. The aim of this work was to study the processes of oxidative modification of proteins (OMP) and the production of nitric oxide in hypertension. In the experimental work, the blood of 30 donors and 33 patients with hypertension was used. For the quantitative determination of OMP products, a method was used based on the reaction of oxidized amino acid residues of proteins with 2,4-dinitrophenylhydrazine (DNPH), forming 2,4-dinitrophenylhydrazones, the amount of which was determined spectrophotometrically. The optical density of the formed carbonyl derivatives of dinitrophenylhydrazones was recorded at different wavelengths: 356 nm for neutral aliphatic ketone dinitrophenylhydrazones (KDNPH); 370 nm for neutral aliphatic aldehyde dinitrophenylhydrazones (ADNPH); 430 nm for basic aliphatic KDNPH; and 530 nm for basic aliphatic ADNPH. Nitric oxide was determined by photometry using the Griess reagent. Absorbance was measured on a Thermo Scientific Evolution 201 spectrophotometer at a wavelength of 546 nm.
Thus, the results showed that patients with arterial hypertension have an increased level of nitric oxide in the blood serum, and there is also a tendency towards an increase in the intensity of oxidative modification of proteins at wavelengths of 270 nm and 363 nm, which indicates a statistically significant increase in aliphatic aldehyde and ketone dinitrophenylhydrazones. The increase in the intensity of oxidative modification of blood plasma proteins observed in these patients reflects the general direction of free radical processes and, in particular, the oxidation of proteins throughout the body. A decrease in the activity of the antioxidant system also leads to a violation of protein metabolism. The most important consequence of the oxidative modification of proteins is the inactivation of enzymes.
Keywords: hypertension (HD), oxidative modification of proteins (OMP), nitric oxide (NO), oxidative stress
Procedia PDF Downloads 111
1022 Cercarial Diversity in Freshwater Snails from Selected Freshwater Bodies and Its Implication for Veterinary and Public Health in Kaduna State, Nigeria
Authors: Fatima Muhammad Abdulkadir, D. B. Maikaje, Y. A. Umar
Abstract:
A study to determine cercarial diversity and the prevalence of trematode infection in freshwater snails from six freshwater bodies in Kaduna State, selected by systematic random sampling, was carried out from January 2013 to December 2013. The freshwater snails and the cercariae harvested from the study sites were identified morphologically. A total of 23,823 freshwater snails were collected from the six freshwater bodies: Bagoma dam, Gimbawa dam, Kangimi dam, Kubacha dam, Manchok water intake and Saminaka water intake. The observed freshwater snail species were: Melanoides tuberculata, Biomphalaria pfeifferi, Bulinus globosus, Lymnaea natalensis, Physa sp., Cleopatra bulimoides, Bellamya unicolor and Lanistes varicus. The freshwater snails were exposed to artificial bright light from a 100 W electric bulb in the laboratory to induce cercarial shedding. Of the total freshwater snails collected, 10.55% released one or more types of cercariae. Seven morphological types of cercariae were shed by six freshwater snail species, namely: Brevifurcate-apharyngeate distome, Amphistome, Gymnocephalus, Longifurcate-pharyngeate monostome, Longifurcate-pharyngeate distome, Echinostome and Xiphidio cercariae. Infection was monotypic in most of the freshwater snails collected; however, Physa species presented a mixed infection with Gymnocephalus and Longifurcate-pharyngeate distome cercariae. B. globosus and B. pfeifferi were the most preferred intermediate hosts, with prevalences of 13.48% and 13.46%, respectively. The diversity and prevalence of cercariae varied among the six freshwater bodies, with Manchok water intake having the highest infection rate (14.3%) and Kangimi dam the lowest (3.9%). There was a correlation trend between the number of freshwater snails and trematode infection, with Manchok exhibiting the highest and Bagoma none. The highest cercarial diversity was observed in B. pfeifferi and B. globosus, with four morphotypes each, and the lowest was in M.
tuberculata, with one morphotype. The general distribution of freshwater snails and the trematode cercariae they shed suggests a risk of trematodiasis to humans and animals in the Manchok community. Public health education to raise awareness of individual and communal actions that may control snail breeding sites, prevent transmission and provide access to treatment should be intensified.
Keywords: Cercariae, diversity, freshwater snails, prevalence, trematodiasis
Procedia PDF Downloads 237
1021 Risk Assessment on New Bio-Composite Materials Made from Water Resource Recovery
Authors: Arianna Nativio, Zoran Kapelan, Jan Peter van der Hoek
Abstract:
Bio-composite materials are becoming increasingly popular in various applications, such as the automotive industry. Usually, bio-composite materials are made from natural resources recovered from plants; now, a new type of bio-composite material has begun to be produced in the Netherlands. This material is made from resources recovered from drinking water treatment (calcite), wastewater treatment (cellulose), and surface water management (aquatic plants). Surface water, raw drinking water and wastewater can be contaminated with pathogens and chemical compounds. Therefore, it would be valuable to develop a framework to assess, monitor and control the potential risks. Indeed, the goal is to define the major risks to human health, material quality and the environment associated with the production and application of these new materials. This study describes the general risk assessment framework, starting with a qualitative risk assessment. The qualitative risk analysis was carried out using the HAZOP methodology for the hazard identification phase. The HAZOP methodology is logical and structured, and able to identify hazards in the first stage of design, when hazards and associated risks are not well known. The identified hazards were analyzed to define the potential associated risks, which were then evaluated using qualitative Event Tree Analysis (ETA). ETA is a logical methodology used to define the consequences of a specific hazardous incident, evaluating the failure modes of safety barriers and the dangerous intermediate events that lead to the final scenario (risk). This paper shows the effectiveness of combining the HAZOP and qualitative ETA methodologies for hazard identification and risk mapping. Key risks were then identified, and a quantitative framework was developed based on the types of risks identified, such as quantitative microbial risk assessment (QMRA) and quantitative chemical risk assessment (QCRA).
These two models were applied to assess human health risks due to the presence of pathogens and chemical compounds, such as heavy metals, in the bio-composite materials. Due to these contaminations, the bio-composite product might, during its application, release toxic substances into the environment, leading to a negative environmental impact. Therefore, leaching tests are planned to simulate the application of these materials in the environment and to evaluate the potential leaching of inorganic substances, assessing the environmental risk.
Keywords: bio-composite, risk assessment, water reuse, resource recovery
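A qualitative event tree of the kind described above can be sketched in a few lines. The initiating event and the two safety barriers below are illustrative assumptions, not the study's actual model; the point is only the mechanics of enumerating barrier success/failure branches into outcome scenarios:

```python
from itertools import product

# Hypothetical initiating event and safety barriers -- illustrative only.
initiating_event = "pathogen present in recovered cellulose"
barriers = ["heat treatment inactivates the pathogen",
            "final quality check detects contamination"]

def event_tree(barriers):
    """Enumerate every success/failure combination of the safety barriers;
    each branch terminates in a qualitative outcome scenario."""
    paths = []
    for outcomes in product([True, False], repeat=len(barriers)):
        if outcomes[0]:
            # First barrier succeeded: the hazard is neutralized.
            scenario = "safe product (pathogen inactivated)"
        elif outcomes[1]:
            # First barrier failed, but the check caught the contamination.
            scenario = "product rejected at quality check"
        else:
            # Both barriers failed: the end scenario is the risk of interest.
            scenario = "potential human-health exposure"
        paths.append((outcomes, scenario))
    return paths

print("Initiating event:", initiating_event)
for outcomes, scenario in event_tree(barriers):
    print(outcomes, "->", scenario)
```

In a quantitative ETA, each branch would additionally carry a success/failure probability, and multiplying along a path would give the frequency of its end scenario; the qualitative version used in the study stops at mapping the scenarios themselves.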
Procedia PDF Downloads 110
1020 Forecasting Regional Data Using Spatial Vars
Authors: Taisiia Gorshkova
Abstract:
Since the 1980s, spatial correlation models have been used increasingly often to model regional indicators. An increasingly popular method for studying regional indicators is modeling that takes into account spatial relationships between objects belonging to the same economic zone. In the 2000s, a new class of models, spatial vector autoregressions (SpVARs), was developed. The main difference between standard and spatial vector autoregressions is that in the SpVAR, the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, the VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the spatial VAR in the absence of time lags. Two specifications of the SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and regional CPI are used as endogenous variables. The lags of GRP, CPI and the unemployment rate were used as explanatory variables. For comparison purposes, a standard VAR without spatial correlation was used as a “naïve” model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same moment of time t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1 and the rest 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values at the previous moment t-1.
According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is affected by neither the inflation lag nor the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of the inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% significance level. For GRP, in turn, GRP lags in neighboring regions also become significant at the 5% significance level. For both the spatial and the “naïve” VARs, the RMSE was calculated. The minimum RMSE is obtained with the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
Keywords: forecasting, regional data, spatial econometrics, vector autoregression
Procedia PDF Downloads 143
1019 Evaluating Radiation Dose for Interventional Radiologists Performing Spine Procedures
Authors: Kholood A. Baron
Abstract:
While the number of radiologists specialized in spine interventional procedures in Kuwait is limited, the number of patients demanding these procedures is increasing rapidly. Due to this high demand, the workload of radiologists is increasing, which might represent a radiation exposure concern. During these procedures, the doctor’s hands are in very close proximity to the main radiation beam, if not within it. The aim of this study is to measure the radiation dose to radiologists during several interventional procedures on the spine. Methods: Two doctors with different workloads were included: DR1 performed procedures in the morning and afternoon shifts, while DR2 performed procedures in the morning shift only. Comparing the radiation exposures that each doctor’s hand receives will assess radiation safety and help to set up workload regulations for radiologists carrying a heavy schedule of such procedures. The Entrance Skin Dose (ESD) was measured via a TLD (thermoluminescent dosimeter) placed at the radiologist’s right wrist. DR1 covered the morning shift in one hospital (Mubarak Al-Kabeer Hospital) and the afternoon shift in another (Dar Alshifa Hospital); the TLD chip was placed in his gloves during the two shifts for a whole week. Since DR2 covered the morning shift only, in Al Razi Hospital, he wore the TLD during the morning shift for a week. It is worth mentioning that DR1 performed 4-5 spine procedures per day in the morning and the same number in the afternoon, while DR2 performed 5-7 procedures per day. This procedure was repeated for 4 consecutive weeks in order to calculate the ESD that a hand receives in a month. Results: In general, the radiation doses received by the hand in a week ranged from 0.12 to 1.12 mSv.
The ESD values for DR1 over the four consecutive weeks were 1.12, 0.32, 0.83 and 0.22 mSv; for a month (4 weeks) this equals 2.49 mSv, calculated to be 27.39 mSv per year (11 months, since each radiologist has 45 days of leave each year). For DR2, the weekly ESD values were 0.43, 0.74, 0.12 and 0.61 mSv; thus, for a month this equals 1.9 mSv, and for a year 20.9 mSv. These values are below the standard level and well below the maximum limit of 500 mSv per year set by the ICRP (International Commission on Radiological Protection). However, it is worth mentioning that DR1 was a senior consultant and hence needed less fluoroscopy time during each procedure. This is evident from the low ESD values of the second week (0.32 mSv) and the fourth week (0.22 mSv), even though he was performing nearly 10-12 procedures a day, 5 days a week. These values were lower than or in the same range as those for DR2 (who was a junior consultant). This highlights the importance of increasing radiologists' skills and their awareness of the effect of fluoroscopy time. In conclusion, the radiation dose that radiologists received during spine interventional radiology in our setting was below standard dose limits.
Keywords: radiation protection, interventional radiology dosimetry, ESD measurements, radiologist radiation exposure
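The dose arithmetic above can be reproduced directly from the reported weekly readings: four measured weeks are summed into a monthly dose, then scaled by 11 working months per year.

```python
# Weekly wrist ESD readings (mSv) as reported in the abstract.
dr1_weekly_msv = [1.12, 0.32, 0.83, 0.22]
dr2_weekly_msv = [0.43, 0.74, 0.12, 0.61]

def annual_dose(weekly, working_months=11):
    """Sum the 4 measured weeks into a monthly dose, then scale to a year
    (11 working months, allowing for 45 days of annual leave)."""
    monthly = sum(weekly)
    return monthly, monthly * working_months

for name, weekly in [("DR1", dr1_weekly_msv), ("DR2", dr2_weekly_msv)]:
    monthly, yearly = annual_dose(weekly)
    print(f"{name}: {monthly:.2f} mSv/month, {yearly:.2f} mSv/year")
```

This reproduces the abstract's figures of 2.49 mSv/month (27.39 mSv/year) for DR1 and 1.90 mSv/month (20.90 mSv/year) for DR2, both well under the 500 mSv/year extremity limit cited.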
Procedia PDF Downloads 59
1018 Recurrent Fevers with Weight Gain - Possible Rapid onset Obesity with Hypoventilation, Hypothalamic Dysfunction and Autonomic Dysregulation Syndrome
Authors: Lee Rui, Rajeev Ramachandran
Abstract:
The approach to recurrent fevers in the paediatric or adolescent age group is not a straightforward one. Causes range from infectious diseases to rheumatological conditions to endocrinopathies, and they are usually accompanied by weight loss rather than weight gain. We present an interesting case of a 16-year-old girl brought by her mother to the General Pediatrics Clinic with concerns of recurrent fever paired with significant weight gain over 1.5 years, with no identifiable cause found despite extensive work-up by specialists ranging from rheumatologists to oncologists. This case provides a learning opportunity on the approach to weight gain paired with persistent fevers in a paediatric population, a presentation not commonly encountered that prompts further evaluation and consideration of less common diagnoses. In a span of 2 years, the girl’s weight had increased from 55 kg at 13 years old (75th centile) to 73.9 kg at 16 years old (>97th centile). About 1 year into her rapid weight gain, she started developing recurrent fevers, with documented temperatures above 37.5 °C and up to 38.6 °C every 2-3 days, resulting in school absenteeism when temperature-taking in school found her to be febrile and she was sent home. The rapid onset of weight gain paired with unexplained fevers prompted the treating physician to consider the diagnosis of ROHHAD syndrome. Rapid-onset obesity with hypoventilation, hypothalamic dysfunction and autonomic dysregulation (ROHHAD) syndrome is a rare disorder first described in 2007. It is characterized by dysfunction of the autonomic and endocrine systems, with hyperphagia and rapid-onset weight gain. This rapid weight gain is classically followed by hypothalamic manifestations with neuroendocrine deficiencies, hypoventilatory breathing abnormalities, and autonomic dysregulation. ROHHAD is challenging to diagnose, and the diagnosis is made mostly on clinical judgement.
If truly present, however, the condition is characterized by high morbidity and mortality rates. Early recognition of sleep-disordered breathing and targeted therapeutic interventions help limit the morbidity and mortality associated with ROHHAD syndrome. This case poses an interesting diagnostic challenge, and a diagnosis of ROHHAD has to be considered, given the serious complications that can come with disease progression, while conditions such as Munchausen’s or drug fever remain diagnoses of exclusion until all other possible conditions have been exhausted.
Keywords: pediatrics, endocrine, weight gain, recurrent fever, adolescent
Procedia PDF Downloads 107
1017 Television Is Useful in Promoting Safe Sexual Practices to Student Populations: A Mixed-Methods Questionnaire Exploring the Impact of Channel Four’s ‘It’s a Sin (2021)’
Authors: Betsy H. Edwards
Abstract:
Background: Public Health England recognises unprotected sex and the consequent transmission of sexually transmitted infections (STIs) as significant problems within student populations. Government surveys show that 50% of sexually active young adults engage in unprotected sex with new partners, with 10% never using condoms. The recent Channel Four mini-series ‘It’s a Sin’ dramatises the 1980s AIDS epidemic and has been praised for its educational value and for promoting safe sexual practices to its viewers. This mixed-methods questionnaire study aims to investigate whether the series can change attitudes towards safe sex and promote condom use in student populations, and whether television in general is a useful tool for health education. Methods: A questionnaire, created on Microsoft Forms, was distributed to students at the University of Birmingham via Facebook groups between September 2021 and May 2022. To consent, participants had to be aged 18 or over, be a student at the university, have seen the entire series of ‘It’s a Sin’, and have read the study information. Data was stored confidentially within the University’s secured OneDrive in accordance with the study’s approved ethics application. Quantitative questions measured participants’ attitudes and behaviours using Likert scales; qualitative data was analysed using thematic analysis. Quantitative Results: 78 students completed the questionnaire. 43 participants (55%) felt that the series ‘It’s a Sin’ promoted safe sex. 74 participants (96%) and 31 participants (39%) said they were ‘very likely’ or ‘likely’ to use condoms with a casual partner during penetrative sex and oral sex, respectively. 27 participants (35%) felt that watching ‘It’s a Sin’ made them more likely to use condoms; of these 27 participants, all were ‘very likely’ or ‘likely’ to use condoms during penetrative sex, and 9 were ‘very likely’ or ‘likely’ to do so during oral sex.
49 participants (63%) and 53 participants (68%) felt that television is a good way to provide health education and to promote healthy behaviours, respectively. Qualitative Results: 56 participants (72%) gave reasons why the series had been associated with an increased uptake of HIV testing. Three themes emerged: increased education and attention, decreased stigmatisation, and the relatability of characters on screen. Conclusions: This study suggests that the series ‘It’s a Sin’ can influence attitudes towards, and the uptake of, safe sexual practices. It would be useful for further research - using larger, randomised samples - to explore impacts upon populations less educated about sexual health, who potentially have more to gain from watching series such as ‘It’s a Sin’.
Keywords: GUM, It's a sin, media, sexual health, students, television, tv
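Proportions estimated from a sample of 78, such as the 43/78 (55%) above, carry sizeable sampling uncertainty. As a minimal illustrative sketch (our addition, not part of the study's analysis), a Wilson 95% confidence interval for such a proportion can be computed as follows:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 43 of 78 students felt the series promoted safe sex.
lo, hi = wilson_interval(43, 78)
print(f"55% (95% CI {lo:.0%}-{hi:.0%})")  # prints 55% (95% CI 44%-66%)
```

An interval this wide (roughly 44% to 66%) supports the authors' call for larger samples in follow-up work.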
Procedia PDF Downloads 98
1016 Review of the Safety of Discharge on the First Postoperative Day Following Carotid Surgery: A Retrospective Analysis
Authors: John Yahng, Hansraj Riteesh Bookun
Abstract:
Objective: This was a retrospective cross-sectional study evaluating the safety of discharge on the first postoperative day following carotid surgery - principally carotid endarterectomy. Methods: Between January 2010 and October 2017, 252 patients, with a mean age of 72 years, underwent carotid surgery by seven surgeons. Their medical records were consulted, and their operative and complication timelines were databased. Descriptive statistics were used to analyse pooled responses and our indicator variables. The statistical package used was STATA 13. Results: There were 183 males (73%), and the comorbid burden was as follows: ischaemic heart disease (54%), diabetes (38%), hypertension (92%), stage 4 kidney impairment (5%) and current or ex-smoking (77%). The main indications were transient ischaemic attacks (42%), stroke (31%), asymptomatic carotid disease (16%) and amaurosis fugax (8%). 247 carotid endarterectomies (109 with patch arterioplasty, 88 with eversion and transection technique, 50 with endarterectomy only) were performed. 2 carotid bypasses, 1 embolectomy, 1 thrombectomy with patch arterioplasty and 1 excision of a carotid body tumour were also performed. 92% of the cases were performed under general anaesthesia. A shunt was used in 29% of cases. The mean length of stay was 5.1 ± 3.7 days, with a range of 2 to 22 days. No patient was discharged on day 1. The mean time from admission to surgery was 1.4 ± 2.8 days, ranging from 0 to 19 days. The mean time from surgery to discharge was 2.7 ± 2.0 days, with a range of 0 to 14 days. 36 complications were encountered over this period: 12 failed repairs (5 major strokes, 2 minor strokes, 3 transient ischaemic attacks, 1 cerebral bleed, 1 occluded graft), 11 bleeding episodes requiring a return to the operating theatre, 5 adverse cardiac events, 3 cranial nerve injuries, 2 respiratory complications, 2 wound complications and 1 acute kidney injury. There were no deaths.
17 complications occurred on postoperative day 0, 11 on postoperative day 1, 6 on postoperative day 2 and 2 on postoperative day 3. 78% of all complications happened before the second postoperative day. Of the complications which occurred on the second or third postoperative day, 4 (1.6%) were bleeding episodes, 1 (0.4%) was a failed repair, 1 (0.4%) was a respiratory complication and 1 (0.4%) was a wound complication. Conclusion: Although it has been common practice to discharge patients on the second postoperative day following carotid endarterectomy, we find here that discharge on the first postoperative day is safe. The overall complication rate is low, and most complications are captured before the second postoperative day. We suggest that patients having an uneventful first 24 hours post surgery be discharged on the first day. This should reduce hospital length of stay and the health economic burden.
Keywords: carotid, complication, discharge, surgery
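The day-by-day complication counts above translate directly into the 78% figure. A minimal sketch of the cumulative-capture calculation (the counts are taken from the abstract; the helper function is our illustrative addition):

```python
# Complication counts by postoperative day, as reported in the abstract.
complications_by_day = {0: 17, 1: 11, 2: 6, 3: 2}

total = sum(complications_by_day.values())  # 36 complications in all

def captured_before_day(day):
    """Fraction of all complications occurring before the given postoperative day."""
    return sum(n for d, n in complications_by_day.items() if d < day) / total

# Share of complications captured before the second postoperative day (days 0 and 1):
share = captured_before_day(2)
print(f"{share:.0%}")  # prints 78% (28 of 36)
```

This is the arithmetic underpinning the conclusion: a patient kept under observation through postoperative day 1 would have had 28 of the 36 complications occur before discharge.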
Procedia PDF Downloads 166
1015 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture
Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger
Abstract:
3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf) and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance or unexpected changes in the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness and thickness, as well as analysing the accuracy of available software to predict how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and compression simulated in WiseTex with varying architectures of binder style, pick density, thickness and tow size. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished and analysed using microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression and composite samples were then compared alongside the WiseTex models to determine the accuracy of the prediction and to identify architecture parameters that affect preform compressibility and stability.
Results indicate that binder style/pick density, tow size and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing for greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in binder fibre crimp. In general, simulations compared reasonably well with experimental results; however, deviation is evident due to assumptions present within the modelled results.
Keywords: 3D woven composites, compression, preforms, textile composites
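Dry compression behaviour of the kind measured here is often summarised by fitting an empirical power law of thickness against applied pressure. A minimal sketch using synthetic data (illustrative values only, not the study's measurements; the power-law form is one common empirical choice, not necessarily the model used by the authors):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(pressure, t1, b):
    """Empirical compression law: thickness = t1 * pressure**(-b)."""
    return t1 * pressure ** (-b)

# Synthetic thickness (mm) vs pressure (kPa) data, illustrative only.
pressure = np.array([10, 25, 50, 100, 200, 400], dtype=float)
thickness = np.array([6.1, 5.4, 4.9, 4.5, 4.1, 3.8], dtype=float)

(t1, b), _ = curve_fit(power_law, pressure, thickness, p0=[8.0, 0.1])
print(f"thickness ~ {t1:.2f} * P^-{b:.3f}  (mm, P in kPa)")
```

The fitted exponent b gives a single compressibility index per architecture, which makes it easy to compare, for example, a low pick density preform (larger b) against a stiffer orthogonal one.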
Procedia PDF Downloads 136
1014 Stems of Prunus avium: An Unexplored By-product with Great Bioactive Potential
Authors: Luís R. Silva, Fábio Jesus, Catarina Bento, Ana C. Gonçalves
Abstract:
Over the last few years, traditional medicine has gained ground at the nutritional and pharmacological level. Natural products and their derivatives are of great importance in several drugs used in modern therapeutics. Plant-based systems continue to play an essential role in primary healthcare. Additionally, the utilization of plant parts, such as leaves, stems and flowers, as nutraceutical and pharmaceutical products can add high value in the natural products market, not just through the nutritional value of their significant levels of phytochemicals, but also through the high benefit for the producers' and manufacturers' business. Stems of Prunus avium L. are a by-product resulting from the processing of cherry and have been consumed over the years as infusions and decoctions due to their bioactive properties, being used as a sedative, diuretic and draining agent, and for the relief of renal stones, edema and hypertension. In this work, we prepared hydroethanolic and infusion extracts from stems of P. avium collected in the Fundão region (Portugal), and evaluated the phenolic profile by LC-DAD, the antioxidant capacity, the α-glucosidase inhibitory activity and the protection of human erythrocytes against oxidative damage. The LC-DAD analysis allowed the identification of 19 phenolic compounds, with catechin and 3-O-caffeoylquinic acid being the main ones. In general, the hydroethanolic extract proved to be more active than the infusion. This extract had the best antioxidant activity against DPPH• (IC50 = 22.37 ± 0.28 µg/mL) and the superoxide radical (IC50 = 13.93 ± 0.30 µg/mL). Furthermore, it was the most active concerning inhibition of hemoglobin oxidation (IC50 = 13.73 ± 0.67 µg/mL), hemolysis (IC50 = 1.49 ± 0.18 µg/mL) and lipid peroxidation (IC50 = 26.20 ± 0.38 µg/mL) in human erythrocytes. On the other hand, the infusion proved to be more efficient towards α-glucosidase inhibition (IC50 = 3.18 ± 0.23 µg/mL) and against the nitric oxide radical (IC50 = 99.99 ± 1.89 µg/mL).
The sweet cherry sector is very important in the Fundão region (Portugal), and the opportunity to turn the great wastes produced during cherry processing into added-value products, such as food supplements, cannot be ignored. Our results demonstrate that P. avium stems possess remarkable antioxidant and free radical scavenging properties. It is therefore suggested that P. avium stems can be used as a natural antioxidant with high potential to prevent or slow the progress of human diseases mediated by oxidative stress.
Keywords: stems, Prunus avium, phenolic compounds, biological potential
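IC50 values such as those reported above are typically estimated by fitting a sigmoidal dose-response curve to inhibition measurements. A minimal sketch using a four-parameter logistic fit on synthetic data (illustrative only, not the study's measurements or its exact fitting procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (% inhibition vs concentration)."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Synthetic inhibition data (concentration in ug/mL vs % inhibition),
# illustrative only.
conc = np.array([1, 5, 10, 20, 40, 80, 160], dtype=float)
inhibition = np.array([5, 18, 35, 52, 70, 85, 93], dtype=float)

params, _ = curve_fit(four_pl, conc, inhibition, p0=[0, 100, 20, 1])
bottom, top, ic50, hill = params
print(f"Estimated IC50: {ic50:.1f} ug/mL")
```

The IC50 parameter is read directly from the fit as the concentration producing half-maximal inhibition, which is how values like 22.37 ± 0.28 µg/mL for DPPH• are conventionally obtained.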
Procedia PDF Downloads 298
1013 Review of the Model-Based Supply Chain Management Research in the Construction Industry
Authors: Aspasia Koutsokosta, Stefanos Katsavounis
Abstract:
This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation and adversarial relationships. It has been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. Over the last decade, however, there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a new, promising management tool for construction operations, improving the performance of construction projects in terms of cost, time and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling accommodates conceptual or process models which discuss general management frameworks and do not relate to acknowledged soft OR methods. We particularly focus on the model-based quantitative research and categorize the CSCM models depending on their scope, mathematical formulation, structure, objectives, solution approach, software used and decision level. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we find that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming and simulation-based optimization. Most applications are project-specific or study only parts of the supply system.
Thus, some complex interdependencies within construction are neglected, and the implementation of integrated supply chain management is hindered. We conclude this paper by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC. We stress that CSC modeling needs a multi-dimensional, system-wide and long-term perspective. Finally, prior applications of SCM to other industries have to be taken into account in order to model CSCs, but not without the consequential reform of generic concepts to match the unique characteristics of the construction industry.
Keywords: construction supply chain management, modeling, operations research, optimization, simulation
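As a minimal illustration of the mathematical-programming style of CSC model the review advocates, a toy material-allocation problem (hypothetical suppliers, sites, costs and capacities, not drawn from any reviewed study) can be stated and solved as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Toy CSC allocation: ship material from 2 suppliers to 2 construction
# sites at minimum transport cost. All figures are hypothetical.
# Decision variables x = [x11, x12, x21, x22], supplier i -> site j.
cost = np.array([4.0, 6.0, 5.0, 3.0])  # cost per unit shipped

# Supply capacities: x11 + x12 <= 80, x21 + x22 <= 70
A_ub = [[1, 1, 0, 0], [0, 0, 1, 1]]
b_ub = [80, 70]
# Site demands: x11 + x21 = 60, x12 + x22 = 50
A_eq = [[1, 0, 1, 0], [0, 1, 0, 1]]
b_eq = [60, 50]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x, res.fun)  # optimal plan and minimum total cost
```

Real CSC optimization models extend this kind of formulation with multiple periods, multiple materials, uncertainty and project-scheduling constraints, which is precisely where the review finds the literature lacking.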
Procedia PDF Downloads 503
1012 Analyzing the Contamination of Some Food Crops Due to Mineral Deposits in Ondo State, Nigeria
Authors: Alexander Chinyere Nwankpa, Nneka Ngozi Nwankpa
Abstract:
In Nigeria, the Federal government is trying to ensure that everyone has access to enough food that is nutritionally adequate and safe. However, the southwest of Nigeria, notably Ondo State, is rich in valuable minerals such as oil and gas, bitumen, kaolin, limestone, talc, columbite, tin, gold, coal, and phosphate. As a result of these mineral deposits, some regions of Ondo State are now associated with elevated levels of natural radioactivity. In this work, the baseline radioactivity levels in some of the most important food crops in Ondo State were analyzed, allowing for the prediction of probable radiological health impacts. To this effect, maize (Zea mays), yam (Dioscorea alata) and cassava (Manihot esculenta) tubers were collected from farmlands in the State, because these crops make up the majority of the local diet's nutritional intake. Ondo State was divided into eight zones in order to provide comprehensive coverage of the research region. The maize (Zea mays), yam (Dioscorea alata), and cassava (Manihot esculenta) samples were dried at room temperature until they reached a constant weight. They were pulverized and homogenized, and 250 g portions were packed in 1-liter Marinelli beakers and kept for 28 days to achieve secular equilibrium. The activity concentrations of Radium-226 (Ra-226), Thorium-232 (Th-232), and Potassium-40 (K-40) in the food samples were determined using gamma-ray spectrometry. The hyper-pure germanium (HPGe) detector was first calibrated using standard radioactive sources. The gamma counting, which lasted 36,000 s for each sample, was carried out at the Centre for Energy Research and Development, Obafemi Awolowo University, Ile-Ife, Nigeria. The mean activity concentrations of Ra-226, Th-232 and K-40 for yam were 1.91 ± 0.10 Bq/kg, 2.34 ± 0.21 Bq/kg and 48.84 ± 3.14 Bq/kg, respectively. The content of the radionuclides in maize gave mean values of 2.83 ± 0.21 Bq/kg for Ra-226, 2.19 ± 0.07 Bq/kg for Th-232 and 41.11 ± 2.16 Bq/kg for K-40.
The mean activity concentrations in cassava were 2.52 ± 0.31 Bq/kg for Ra-226, 1.94 ± 0.21 Bq/kg for Th-232 and 45.12 ± 3.31 Bq/kg for K-40. The average committed effective doses in zones 6-8 were 0.55 µSv/y for the consumption of yam, 0.39 µSv/y for maize, and 0.49 µSv/y for cassava. These values are higher than the annual dose guideline of 0.35 µSv/y for the general public. The values obtained in this work therefore show that there is radiological contamination of some foodstuffs consumed in some parts of Ondo State. We recommend that systematic and appropriate methods be established for the measurement of gamma-emitting radionuclides, since these constitute important contributors to the internal exposure of man through ingestion, inhalation, or wounds on the body.
Keywords: contamination, environment, radioactivity, radionuclides
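Committed effective doses like those above conventionally follow the ingestion-dose relation E = Σᵢ Cᵢ · I · eᵢ, the sum over radionuclides of activity concentration times annual intake times the ingestion dose coefficient. A minimal sketch (the yam concentrations are from the abstract; the annual-intake figure and dose coefficients are illustrative placeholders, not the values used in the study, so the result will differ from the per-crop doses reported above):

```python
# Committed effective dose from ingestion: E = sum_i C_i * I * e_i
# C_i: activity concentration (Bq/kg), I: annual intake (kg/y),
# e_i: ingestion dose coefficient (Sv/Bq).

# Mean activity concentrations in yam, from the abstract (Bq/kg).
concentration = {"Ra-226": 1.91, "Th-232": 2.34, "K-40": 48.84}

# ILLUSTRATIVE placeholders; substitute the intake and dose
# coefficient values actually used in the study.
annual_intake_kg = 100.0
dose_coefficient = {"Ra-226": 2.8e-7, "Th-232": 2.3e-7, "K-40": 6.2e-9}  # Sv/Bq

dose_sv = sum(concentration[n] * annual_intake_kg * dose_coefficient[n]
              for n in concentration)
print(f"Committed effective dose: {dose_sv * 1e6:.1f} uSv/y")
```

With the study's actual intake rates and coefficients, the same summation yields the zone-wise per-crop doses quoted in the abstract.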
Procedia PDF Downloads 105
1011 Screening for Non-hallucinogenic Neuroplastogens as Drug Candidates for the Treatment of Anxiety, Depression, and Posttraumatic Stress Disorder
Authors: Jillian M. Hagel, Joseph E. Tucker, Peter J. Facchini
Abstract:
With the aim of establishing a holistic approach for the treatment of central nervous system (CNS) disorders, we are pursuing a drug development program rapidly progressing through discovery and characterization phases. The drug candidates identified in this program are referred to as neuroplastogens owing to their ability to mediate neuroplasticity, which can be beneficial to patients suffering from anxiety, depression, or posttraumatic stress disorder. These and other related neuropsychiatric conditions are associated with the onset of neuronal atrophy, which is defined as a reduction in the number and/or productivity of neurons. The stimulation of neuroplasticity results in an increase in the connectivity between neurons and promotes the restoration of healthy brain function. We have synthesized a substantial catalogue of proprietary indolethylamine derivatives based on the general structures of serotonin (5-hydroxytryptamine) and psychedelic molecules such as N,N-dimethyltryptamine (DMT) and psilocin (4-hydroxy-DMT) that function as neuroplastogens. A primary objective in our screening protocol is the identification of derivatives associated with a significant reduction in hallucination, which will allow administration of the drug at a dose that induces neuroplasticity and triggers other efficacious outcomes in the treatment of targeted CNS disorders but which does not cause a psychedelic response in the patient. Both neuroplasticity and hallucination are associated with engagement of the 5HT2A receptor, requiring drug candidates differentially coupled to these two outcomes at a molecular level. We use novel and proprietary artificial intelligence algorithms to predict the mode of binding to the 5HT2A receptor, which has been shown to correlate with the hallucinogenic response. 
Hallucinogenic potential is tested using the mouse head-twitch response model, whereas mouse marble-burying and sucrose preference assays are used to evaluate anxiolytic and anti-depressive potential. Neuroplasticity is assessed using dendritic outgrowth assays and cell-based ELISA analysis. Pharmacokinetics and additional receptor-binding analyses also contribute to the selection of lead candidates. A summary of the program is presented.
Keywords: neuroplastogen, non-hallucinogenic, drug development, anxiety, depression, PTSD, indolethylamine derivatives, psychedelic-inspired, 5-HT2A receptor, computational chemistry, head-twitch response behavioural model, neurite outgrowth assay
Procedia PDF Downloads 140