Search results for: yield point
844 Governance Models of Higher Education Institutions
Authors: Zoran Barac, Maja Martinovic
Abstract:
Higher Education Institutions (HEIs) are a special kind of organization, with a unique purpose and combination of actors. From the societal point of view, they are central institutions involved in the activities of education, research, and innovation. At the same time, their societal function gives rise to complex relationships between the involved actors, ranging from students, faculty and administration, the business community and corporate partners, and government agencies, to the general public. HEIs are also particularly interesting as objects of governance research because of their unique public purpose and combination of stakeholders. Furthermore, they are a special type of institution from an organizational viewpoint: HEIs are often described as “loosely coupled systems” or “organized anarchies”, which implies the challenging nature of their governance models. Governance models of HEIs describe the roles, constellations, and modes of interaction of the involved actors in the process of strategic direction and holistic control of institutions, taking each particular context into account. Many HEI governance models are primarily based on the balance of power among the involved actors. Besides the actors’ power and influence, leadership style and environmental contingency can shape the governance model of an HEI. Analyzed through the frameworks of institutional and contingency theories, HEI governance models originate as outcomes of institutional and contingency adaptation. HEIs tend to fit the institutional context, comprised of formal and informal institutional rules. By fitting the institutional context, HEIs converge towards each other in terms of their structures, policies, and practices. The contingency framework, on the other hand, implies that there is no governance model that is suitable for all situations.
Consequently, the contingency approach begins with identifying the contingency variables that might impact a particular governance model. In order to be effective, a governance model should fit these contingency variables. While the institutional context creates converging forces on HEI governance actors and approaches, contingency variables cause divergence of actors’ behavior and governance models. Finally, an HEI governance model is a balanced adaptation of the HEI to the institutional context and the contingency variables. It encompasses the roles, constellations, and modes of interaction of the involved actors as influenced by institutional and contingency pressures. The actors’ adaptation to the institutional context brings the benefits of legitimacy and resources; their adaptation to the contingency variables brings high performance and effectiveness. The HEI governance models outlined and analyzed in this paper are the collegial, bureaucratic, entrepreneurial, network, professional, political, anarchical, cybernetic, trustee, stakeholder, and amalgam models.
Keywords: governance, governance models, higher education institutions, institutional context, situational context
Procedia PDF Downloads 336
843 Festival Gamification: Conceptualization and Scale Development
Authors: Liu Chyong-Ru, Wang Yao-Chin, Huang Wen-Shiung, Tang Wan-Ching
Abstract:
Although gamification has attracted attention and been applied in the tourism industry, limited literature can be found in tourism academia. Therefore, to contribute knowledge on festival gamification, it is essential to start by establishing a Festival Gamification Scale (FGS). This study defines festival gamification as the extent to which a festival involves game elements and game mechanisms. Based on self-determination theory, this study developed an FGS through a multi-study method. In study one, five FGS dimensions were sorted through a literature review, followed by twelve in-depth interviews. A total of 296 statements were extracted from the interviews and later narrowed down to 33 items under six dimensions. In study two, 226 survey responses were collected from a cycling festival for exploratory factor analysis, resulting in twenty items under five dimensions. In study three, 253 survey responses were obtained from a marathon festival for confirmatory factor analysis, resulting in the final sixteen items under five dimensions. Results of criterion-related validity then confirmed the positive effects of these five dimensions on flow experience. In study four, to examine the model extension of the developed five-dimensional, 16-item FGS, which comprises the dimensions of relatedness, mastery, competence, fun, and narratives, cross-validation analysis was performed using 219 survey responses from a religious festival. For the tourism academy, the FGS could further be applied in other sub-fields such as destinations, theme parks, cruise trips, or resorts. The FGS serves as a starting point for examining the mechanism by which festival gamification changes tourists’ attitudes and behaviors. Future studies could follow up on the FGS by testing outcomes of festival gamification or examining moderating effects that enhance those outcomes.
On the other hand, although the FGS has been tested in cycling, marathon, and religious festivals, the research settings are all in Taiwan. Cultural differences in the FGS are another direction for contributing knowledge on festival gamification. This study also offers several valuable practical implications. First, the FGS could be used in tourist surveys to evaluate the extent of gamification of a festival. Based on the results of a performance assessment by the FGS, festival management organizations and festival planners could learn the relative scores among the FGS dimensions and plan future improvements in gamifying the festival. Second, the FGS could be applied in positioning a gamified festival: festival management organizations and festival planners could first consider the features and type of their festival, and then gamify it by investing resources in key FGS dimensions.
Keywords: festival gamification, festival tourism, scale development, self-determination theory
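The factor-retention step behind the exploratory factor analysis described in study two can be sketched as follows. This is a minimal illustration using simulated item responses and the Kaiser eigenvalue-greater-than-one criterion; the item counts, loadings, and noise level are assumptions, not the authors' actual FGS data or procedure.

```python
import numpy as np

# Simulated illustration of the factor-retention step in exploratory
# factor analysis (Kaiser criterion: keep factors with eigenvalue > 1).
# All data are synthetic; this is not the study's FGS data.
rng = np.random.default_rng(42)
n_respondents, n_items = 226, 8          # 226 mirrors study two's sample size

# Two latent factors, each loading on half the items, plus noise.
latent = rng.normal(size=(n_respondents, 2))
loadings = np.zeros((2, n_items))
loadings[0, :4] = 0.8                    # factor 1 -> items 1-4
loadings[1, 4:] = 0.8                    # factor 2 -> items 5-8
responses = latent @ loadings + 0.5 * rng.normal(size=(n_respondents, n_items))

# Eigenvalues of the inter-item correlation matrix, largest first.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]
n_factors = int(np.sum(eigenvalues > 1.0))
print(n_factors)   # expect 2 for this two-factor simulation
```

In practice, item reduction would combine this criterion with loadings, cross-loadings, and interpretability, as scale-development studies typically do.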
Procedia PDF Downloads 147
842 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions
Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams
Abstract:
The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture, not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on the one hand, to develop and train machine learning algorithms to produce architectural information on small pavilions, and on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. Once the algorithms are trained, the procedure is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; using this as source material, an isometric view is created from it; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process.
This research also challenges the idea that algorithmic design is tied to efficiency or fitness, embracing instead the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the possibility of creatively understanding and manipulating historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question we propose is whether an AI could be used not just to create an original and innovative group of simple buildings, but also to explore the possibility of fostering a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made, or synthetic.
Keywords: architecture, central pavilions, classicism, machine learning
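The staged generation chain described above (line profile → front view → isometric view → top view) can be sketched schematically. The three "generators" below are untrained placeholder functions with fixed random weights standing in for the trained GANs; only the way the stages chain together reflects the text, and the image resolution is an assumption.

```python
import numpy as np

# Schematic of the staged pipeline: profile -> front -> isometric -> top.
# The "generators" are stand-in random linear maps, NOT trained GANs; they
# only illustrate how one stage's output feeds the next stage's input.
rng = np.random.default_rng(0)
H = W = 64                               # assumed image resolution

def make_generator(in_size, out_size):
    """Return a placeholder image-to-image map (a fixed random linear map)."""
    weights = rng.normal(scale=0.01, size=(out_size, in_size))
    return lambda x: np.tanh(weights @ x.ravel()).reshape(H, W)

g_front = make_generator(W, H * W)       # line profile (W,) -> front view
g_iso = make_generator(H * W, H * W)     # front view -> isometric view
g_top = make_generator(H * W, H * W)     # isometric view -> top view

profile = rng.normal(size=W)             # a synthetic input line profile
front = g_front(profile)
isometric = g_iso(front)
top = g_top(isometric)
print(front.shape, isometric.shape, top.shape)
```

In a real implementation, each stage would be an image-to-image translation network trained on paired drawings (profile/front, front/isometric, isometric/top), but the chained-inference structure is the same.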
Procedia PDF Downloads 140
841 Students with Severe Learning Disabilities in Mainstream Classes: A Study of Comprehensions amongst School Staff and Parents Built on Observations and Interviews in a Phenomenological Framework
Authors: Inger Eriksson, Lisbeth Ohlsson, Jeremias Rosenqvist
Abstract:
Ingress: The study focuses on the phenomena and concepts of segregation, integration, and inclusion of students attending a special school form in Sweden, namely compulsory school for pupils with learning disabilities (in Swedish, 'särskola'), as an alternative to mainstream compulsory school. Aim: The aim of the study is to examine the school situation of students attending särskola from a historical perspective focusing on the 1980s, the 1990s, and the 21st century, from an integration perspective, and from a perspective of power. Procedure: Five sub-studies are reported, in which integration and inclusion are examined through observation studies and interviews with school leaders, teachers, special and remedial teachers, psychologists, coordinators, and parents in the special schools/särskola. In brief, the study of special school students attending mainstream classes from 1998 takes as its point of departure the idea that all knowledge development takes place in a social context. A special interest is taken in the school’s role in integration generally, and the role of special education particularly, and in whose conditions the integration takes place in the class: the special school students’, the other students’, or perhaps both equally. Pedagogical and social conditions for so-called individually integrated special school students in elementary school classes were studied in eleven classes. Results: The findings are interpreted in a power perspective supported by Foucault and relationally by Vygotsky. The main part of the data consists of extensive descriptions of the eleven cases, here called integration situations. Conclusions: In summary, this study suggests that the possibilities for a special school student to get into the class community and fellowship, and thereby be integrated with the class, depend to a high degree on the extent to which the student can take part in the pedagogical processes.
The pedagogical situation of the special school student is affected not only by the class teacher and the support and measures undertaken, but also by the other students in the class as they, in turn, are affected by how the special school student acts. This mutual impact, which constitutes the integration process itself, may result in true integration if the special school student attains the status of being accepted on his or her own terms, not only being cared for or cherished by some classmates. A special school student who is not accepted even on the terms of the class will often experience severe problems in contacts with classmates, and the school situation may thus be a mere placement.
Keywords: integration/inclusion, mainstream school, power, special school students
Procedia PDF Downloads 248
840 Ways of Managing Foods Not Served to Consumers in the Food Service Sector
Authors: Marzena Tomaszewska, Beata Bilska, Danuta Kolozyn-Krajewska
Abstract:
Food loss and food waste are a global problem of the modern economy. The research aimed to analyze how food is handled in catering establishments with regard to food waste and to identify the main ways of managing foods/dishes not served to consumers. A survey study was conducted from January to June 2019. The selection of catering establishments participating in the study was deliberate, and the study included establishments located only in the Mazowieckie Voivodeship (Poland). 42 completed questionnaires were collected. In some questions, answers were given on a 5-point scale from 1 to 5 (from 'always'/'every day' to 'never'); the survey also included closed questions with a predefined set of answers. The respondents stated that in their workplaces, dishes served cold and hot ready meals are discarded every day or almost every day (23.7% and 20.5% of answers, respectively). The procedure most frequently used for dealing with dishes not served to consumers on a given day is storage at a cool temperature until the following day. One-fifth of respondents admitted that consumers 'always' or 'usually' leave uneaten meals on their plates, and over 41% 'sometimes' do so. It was additionally found that food not used in the food service sector is most often thrown into a public container for rubbish. Most often thrown into the public container (with communal trash) were expired products (80.0%), plate waste (80.0%), and inedible products such as fruit and vegetable peels and egg shells (77.5%); most frequently thrown into the container dedicated only to food waste was used deep-frying oil (62.5%). 10% of respondents indicated that inedible products in their workplaces are allocated to animal feed. Food waste in the food service sector still remains an insufficiently studied issue, as the owners of these establishments are often unwilling to disclose data on the subject. Incorrect ways of managing foods not served to consumers were observed.
There is a need to develop educational activities for employees and management in the context of food waste management in the food service sector. This publication has been developed under contract with the National Center for Research and Development No. Gospostrateg1/385753/1/NCBR/2018 for carrying out and funding a project implemented as part of the 'The social and economic development of Poland in the conditions of globalizing markets - GOSPOSTRATEG' program, entitled 'Developing a system for monitoring wasted food and an effective program to rationalize losses and reduce food wastage' (acronym PROM).
Keywords: food waste, inedible products, plate waste, used deep-frying oil
Procedia PDF Downloads 119
839 Seasonal Short-Term Effect of Air Pollution on Cardiovascular Mortality in Belgium
Authors: Natalia Bustos Sierra, Katrien Tersago
Abstract:
It is well established that both extremes of temperature are associated with increased mortality and that air pollution is associated with temperature. This relationship is complex, and in countries with important seasonal variations in weather, such as Belgium, some effects can appear non-significant when the analysis is done over the entire year. We therefore analyzed the effect of short-term outdoor air pollution exposure on cardiovascular (CV) mortality during the warmer and colder months separately. We used daily cardiovascular deaths from acute cardiovascular diagnoses according to the International Classification of Diseases, 10th Revision (ICD-10: I20-I24, I44-I49, I50, I60-I66) during the period 2008-2013. The environmental data were population-weighted concentrations of particulates with an aerodynamic diameter of less than 10 µm (PM₁₀) and less than 2.5 µm (PM₂.₅) (daily average), nitrogen dioxide (NO₂) (daily maximum of the hourly average), and ozone (O₃) (daily maximum of the 8-hour running mean). A generalized linear model was applied, adjusting for the confounding effects of season, temperature, dew point temperature, the day of the week, public holidays, and the incidence of influenza-like illness (ILI) per 100,000 inhabitants. The relative risks (RR) were calculated for an increase of one interquartile range (IQR) of the air pollutant (μg/m³). These were presented for the four hottest months (June, July, August, September) and the four coldest months (November, December, January, February) in Belgium. We applied both individual lag models and an unconstrained distributed lag model. The cumulative effect of a four-day exposure (the day of exposure and three consecutive days) was calculated from the unconstrained distributed lag model. The IQRs for PM₁₀, PM₂.₅, NO₂, and O₃ were, respectively, 8.2, 6.9, 12.9 and 25.5 µg/m³ during warm months and 18.8, 17.6, 18.4 and 27.8 µg/m³ during cold months.
The association with CV mortality was statistically significant for all four pollutants during warm months, and only for NO₂ during cold months. During the warm months, the cumulative effect of an IQR increase in ozone for the age groups 25-64, 65-84 and 85+ was 1.066 (95% CI: 1.002-1.135), 1.041 (1.008-1.075) and 1.036 (1.013-1.058), respectively. The cumulative effect of an IQR increase in NO₂ for the age group 65-84 was 1.066 (1.020-1.114) during warm months and 1.096 (1.030-1.166) during cold months. The cumulative effect of an IQR increase in PM₁₀ during warm months reached 1.046 (1.011-1.082) and 1.038 (1.015-1.063) for the age groups 65-84 and 85+, respectively. Similar results were observed for PM₂.₅. The short-term effect of air pollution on cardiovascular mortality is thus greater during warm months, despite the lower pollutant concentrations, than during cold months. Spending more time outside during warm months increases population exposure to air pollution and can therefore be a confounding factor in this association. Age can also affect the length of time spent outdoors and the type of physical activity exercised. This study supports the deleterious effect of air pollution on cardiovascular mortality, which varies according to season and age group in Belgium. Public health measures should therefore be adapted to seasonality.
Keywords: air pollution, cardiovascular, mortality, season
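The way a relative risk per IQR increase follows from the log-linear coefficients of a Poisson-family GLM, and how lag-specific effects accumulate in an unconstrained distributed lag model, can be sketched as follows. The lag coefficients are illustrative assumptions, chosen only so that the cumulative value lands near the reported 1.066 for ozone in warm months; they are not the study's estimates.

```python
import math

# Deriving a relative risk (RR) per interquartile-range (IQR) increase from
# log-linear Poisson GLM coefficients, and summing lag effects in an
# unconstrained distributed lag model. The betas are made-up values.
iqr_o3_warm = 25.5                      # IQR of O3 in warm months (ug/m3)

# Assumed log-RR per 1 ug/m3 at lags 0..3 (day of exposure + 3 days).
betas = [0.0008, 0.0006, 0.0005, 0.0006]

# Single-lag RR for an IQR increase: RR = exp(beta * IQR).
rr_lag0 = math.exp(betas[0] * iqr_o3_warm)

# Cumulative 4-day RR: sum the lag coefficients, then scale by the IQR.
rr_cumulative = math.exp(sum(betas) * iqr_o3_warm)
print(round(rr_lag0, 3), round(rr_cumulative, 3))   # 1.021 1.066
```

The same exp(beta × IQR) transformation applies to each pollutant and age group, which is why the RRs in the abstract are reported "per IQR increase" rather than per µg/m³.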
Procedia PDF Downloads 165
838 Social Skills as a Significant Aspect of a Successful Start of Compulsory Education
Authors: Eva Šmelová, Alena Berčíková
Abstract:
The issue of school maturity and children's readiness for a successful start of compulsory education is one of the long-monitored areas, especially in the context of education and psychology. In the context of the curricular reform in the Czech Republic, the issue has recently gained importance. Analyses of research in this area suggest a lack of a broader overview of indicators of the current level of children’s school maturity and school readiness; instead, various studies address partial issues. Between 2009 and 2013, a research study was performed at the Faculty of Education, Palacký University Olomouc (Czech Republic), focusing on children’s maturity and readiness for compulsory education. In that study, social skills were of marginal interest; the main focus was on the mental area. That previous research links directly to the present study, the objective of which is to identify the level of school maturity and school readiness in selected characteristics of social skills as part of the adaptation process after enrolment in compulsory education. In this context, the following research question has been formulated: during the process of adaptation to the school environment, which social skills are weakened? The method applied was observation, for the purposes of which the authors developed a research tool: a record sheet with 11 items, the social skills that a child should have by the end of preschool education. The items were assessed by first-grade teachers at the beginning of the school year, and the degree of achievement and intensity of the skills were assessed for each child using an assessment scale. The authors monitored a total of three independent variables (gender, postponement of school attendance, participation in inclusive education). The effect of these independent variables was monitored using 11 dependent variables, represented by the results achieved in the selected social skills.
Statistical data processing was assisted by the Computer Centre of Palacký University Olomouc. Statistical calculations were performed using SPSS v. 12.0 for Windows and STATISTICA (StatSoft STATISTICA CR; a software system for data analysis). The research sample comprised 115 children. In their paper, the authors present the results of the research and point to possible areas of further investigation. They also highlight possible risks associated with weakened social skills.
Keywords: compulsory education, curricular reform, educational diagnostics, pupil, school curriculum, school maturity, school readiness, social skills
Procedia PDF Downloads 251
837 A Review of Atomization Mechanisms Used for Spray Flash Evaporation: Their Effectiveness and Proposal of Rotary Bell Atomizer for Flashing Application
Authors: Murad A. Channa, Mehdi Khiadani, Yasir Al-Abdeli
Abstract:
Considering the severity of water scarcity around the world and its spread at an alarming rate, practical improvements in desalination techniques need to be engineered as soon as possible. Atomization is a major aspect of the flashing phenomenon, yet it has received comparatively little attention until now, and there is a need to test efficient ways of atomization for the flashing process. Alongside reverse osmosis, flash evaporation is a commercially mature desalination technique, commonly known as multi-stage flash (MSF). Even though reverse osmosis is widely used in practice, it is not as economical or sustainable as flash evaporation. Flash evaporation, however, has its drawbacks as well, such as a lower efficiency of water production per unit of power and time consumed. Flash evaporation is simply the instant boiling of a subcooled liquid introduced as droplets into a well-maintained low-pressure environment. The reduced pressure inside the vacuum chamber lowers the boiling point far below the temperature of the liquid droplets; this superheat supplies the latent heat of vaporization, and part of the liquid turns into vapor, which is collected and condensed back into an impurity-free liquid in a condenser. Atomization is the main difference between pool and spray flash evaporation, and it is the heart of the spray flash evaporation process, as it increases the evaporating surface area per drop atomized. Atomization can be categorized into many levels depending on drop size, which in turn becomes crucial for increasing the droplet density (drop count) at a given flow rate. This review comprehensively summarizes selected results relating to the methods of atomization and their effectiveness on the evaporation rate, from earlier works to date.
In addition, the reviewers propose using centrifugal atomization for the flashing application, which brings several advantages, namely ultra-fine droplets, uniform droplet density, and the swirling geometry of the spray, with kinetically more energetic sprays during their flight. Finally, several challenges of using a rotary bell atomizer (RBA) and RBA sprays inside the chamber have been identified, which will be explored in detail, and a schematic of the integration of a rotary bell atomizer with the chamber has been designed. This powerful centrifugal atomization has the potential to increase potable water production in commercial multi-stage flash evaporators, where it would be particularly advantageous.
Keywords: atomization, desalination, flash evaporation, rotary bell atomizer
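As a rough illustration of why centrifugal atomization yields ultra-fine droplets, a commonly cited order-of-magnitude relation for rotary atomizers in the direct-drop regime, d ≈ (k/ω)·√(σ/(ρR)), can be evaluated. The empirical constant k, the rotational speed, and the bell radius below are assumptions for illustration, not parameters taken from this review.

```python
import math

# Order-of-magnitude droplet size for a rotary (bell) atomizer in the
# direct-drop regime, using a relation of the form
#   d = (k / omega) * sqrt(sigma / (rho * R))
# All input values are illustrative assumptions, not measurements.
k = 3.8                              # empirical constant (assumed)
omega = 2 * math.pi * 30_000 / 60    # 30,000 rpm -> rad/s
sigma = 0.072                        # surface tension of water, N/m
rho = 1000.0                         # water density, kg/m3
R = 0.03                             # bell radius, m (assumed)

d = (k / omega) * math.sqrt(sigma / (rho * R))
print(f"{d * 1e6:.1f} um")           # droplet diameter in micrometres
```

With these assumed values the estimate is on the order of tens of micrometres, consistent with the "ultra-fine droplets" claim, and the 1/ω dependence shows why high rotational speeds are the key lever.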
Procedia PDF Downloads 84
836 The Connection between De Minimis Rule and the Effect on Trade
Authors: Pedro Mario Gonzalez Jimenez
Abstract:
The novelties introduced by the latest Notice on agreements of minor importance tighten the application of the 'de minimis' safe harbour in the European Union. At the same time, the undetermined legal concept of the effect on trade between the Member States gains importance. Therefore, the analysis that a jurist should currently carry out in the European Union to determine whether an agreement appreciably restricts competition under Article 101 of the Treaty on the Functioning of the European Union is twofold. Hence, it is necessary to know how to balance significance for competition against significance for the effect on trade between the Member States. This is a crucial issue, because the negative delimitation of a restriction of competition affects the positive one. The methodology of this research is rather simple: beginning with a historical approach to the de minimis rule, its main problems and uncertainties are identified, and after an analysis of normative documents and the case law of the Court of Justice of the European Union, some proposals 'de lege ferenda' are offered. These proposals try to overcome the contradictions and questions that currently exist in the European Union as a consequence of the current legal regime for agreements of minor importance. The main findings of this research are the following. Firstly, the effect on trade is a way of analyzing the importance of an agreement that is distinct from the de minimis rule. In point of fact, this concept is singularly adapted to assess agreements that have as their object the prevention, restriction or distortion of competition, as can be observed in the most famous European Union case law. Thanks to the effect on trade, as long as the proper requirements are met, there is no restriction of competition under Article 101 of the Treaty on the Functioning of the European Union, even if the agreement has an anti-competitive object.
These requirements are an aggregate market share not exceeding 5% on any of the relevant markets affected by the agreement and a turnover not exceeding 40 million euros. Secondly, as the Notice itself says, 'it is also intended to give guidance to the courts and competition authorities of the Member States in their application of Article 101 of the Treaty, but it has no binding force for them'. This reality makes divergent positions among the Member States possible, along with a confusing perception of what a restriction of competition is; ultimately, damage to trade between the Member States could result. The main conclusion is that a significant effect on trade between Member States is irrelevant for agreements that restrict competition by their effects, but crucial for agreements that restrict competition by their object. Thus, the Member States should incorporate a similar concept into their legal orders in order to apply the content of the Notice; otherwise, the significance of a restrictive agreement for competition would not be properly assessed.
Keywords: De minimis rule, effect on trade, minor importance agreements, safe harbour
Procedia PDF Downloads 182
835 Historical Development of Negative Emotive Intensifiers in Hungarian
Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges
Abstract:
In this study, an exhaustive analysis was carried out of the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements that modify or reinforce a variable character in the lexical unit they apply to. Therefore, intensifiers appear with other lexical items such as adverbs, adjectives, and verbs, and infrequently with nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), there are many lexical items that can operate as intensifiers, and the group of intensifiers is admittedly one of the most rapidly changing elements in the language. From a linguistic point of view, particularly interesting is a special group of intensifiers, the so-called negative emotive intensifiers, which, on their own, without context, have semantic content that can be associated with negative emotion, but in particular cases may function as intensifiers (e.g., borzasztóan jó ’awfully good’, which means ’excellent’). Despite their special semantic features, negative emotive intensifiers are scarcely examined in the literature on the basis of large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning the intensifiers, the authors exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles, produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame.
Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the ‘magyarlanc’ NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., how they change from lexical elements to grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). What is more, it was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process, the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time
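The frequency-over-time step described above can be sketched with standard-library tools only: counting occurrences of each intensifier per period in a dated corpus. The two intensifier lemmas and the (year, text) pairs below are toy examples, not entries from the Magyar Történeti Szövegtár or the authors' lexicon.

```python
import re
from collections import Counter

# Minimal sketch of per-period frequency counting for a list of
# intensifiers over a dated (year, text) corpus. Toy data only.
intensifiers = ["borzasztóan", "rettenetesen"]   # 'awfully', 'terribly'
corpus = [
    (1848, "a terem borzasztóan hideg volt"),
    (1901, "rettenetesen jó előadás volt"),
    (1955, "borzasztóan jó hír ez nekünk"),
    (2005, "borzasztóan jó és rettenetesen olcsó"),
]

def decade(year):
    """Bucket a year into its decade (e.g. 1848 -> 1840)."""
    return year - year % 10

counts = Counter()
for year, text in corpus:
    tokens = re.findall(r"\w+", text.lower())   # \w matches accented letters
    for word in intensifiers:
        counts[(decade(year), word)] += tokens.count(word)

print(counts[(2000, "borzasztóan")])   # 1 occurrence in the 2000s
```

In the actual study these raw counts would be computed over lemmatized tokens (hence the POS-tagging and lemmatization step) and normalized by the number of words per period before comparing decades.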
Procedia PDF Downloads 233
834 Historiography of European Urbanism in the 20th Century in Slavic Languages
Authors: Aliaksandr Shuba, Max Welch Guerra, Martin Pekar
Abstract:
The research is dedicated to the historiography of European urbanism in the 20th century, with a critical analysis of transnationally oriented sources in Slavic languages. The goal of this research was to give an overview of Slavic sources on this subject. The research analyses historians from Eastern, Central, and South-eastern Europe who wrote influential historiographies of architecture and urbanism in the 20th century in Slavic languages. The analysis of historiographies in Slavic languages includes diverse sources from around Europe by authors who examined European urbanism in the 20th century through a global prism or from their own perspectives. The main publications are from the second half of the 20th century and the early 21st century, with Soviet and post-Soviet discourses. The necessity of analysing Slavic sources results from the establishment of the historiography of urbanism as a discipline in the 20th century by Soviet, Czechoslovak, and Yugoslav academics, who in the early 1970s created strong historiographic bases for the development of their urban historiographic schools and for wide studies and analyses of architectural and urban ideas and projects and their history. These Slavic publications, analysed in this research, often have perspectives and discourses different from Anglo-Saxon ones, and these bibliographic sources can bring a diversity of new ideas into the contemporary academic discourse of European urban historiography. The publications in Slavic languages are analyzed according to the following aspects: where, when, in which types, by whom, and for whom the sources were written. Essential sources on the historiography of European urbanism in the 20th century are critically analysed through their comparison and interpretation.
The authors’ autonomy is analysed as a central point, along with the influence of the Communist Party and state control on the interpretation of the history of urbanism in Central, Eastern, and South-eastern Europe, with the main dominant topics and ideas from the second half of the 20th century. Cross-national Slavic historiographic sources and their perspectives are compared to the main transnational Anglo-Saxon historiographic topics. Some of the dominant subjects, topics, and subtopics are hypothetically similar, while others have a more local or national orientation because of the authors’ autonomy and the influence of the Communist Party and state control in Slavic socialist countries, as illustrated in this research.
Keywords: European urbanism, historiography, different perspectives, 20th century
Procedia PDF Downloads 174
833 Bed Evolution under One-Episode Flushing in a Trunk Sewer in Paris, France
Authors: Gashin Shahsavari, Gilles Arnaud-Fassetta, Alberto Campisano, Roberto Bertilotti, Fabien Riou
Abstract:
Sewer deposits have been identified as a major cause of dysfunction in combined sewer systems, inducing negative consequences such as poor hydraulic conveyance, environmental damage, and risks to workers' health. In order to overcome the problem of sedimentation, flushing has been considered the most operative and cost-effective way to minimize the impact of sediments and prevent such challenges. Flushing, by prompting turbulent wave effects, can modify the bed form depending on the hydraulic properties and geometrical characteristics of the conduit. So far, the dynamics of the bed load during high-flow events in combined sewer systems, a complex environment, are not well understood, mostly due to the lack of measuring devices capable of working correctly in the “hostile” conditions of combined sewer systems. In this regard, a one-episode flush issued from an opening gate valve with a weir function was carried out in a trunk sewer in Paris to understand its cleansing efficiency on the sediments (thickness: 0-30 cm). During more than 1 h of flushing, a maximum flow rate of 4.1 m³/s and a maximum water level of 2.1 m were recorded 5 m downstream of the gate. This paper aims to evaluate the efficiency of this type of gate over about 1.1 km (from -50 m to +1050 m downstream of the gate) by (i) determining the bed grain-size distribution and sediment evolution through the sewer channel, as well as their organic matter content, and (ii) identifying sections that exhibit more changes in their texture after the flush. For the first, two series of samples were taken along the sewer length and then analyzed in the laboratory, one before flushing and a second after, at the same points along the sewer channel. A non-intrusive sampling instrument was used to extract the sediments smaller than fine gravel.
The comparison between the sediment texture after the flush operation and the initial state revealed the zones most modified by the flush effect, regarding the sewer invert slope and hydraulic parameters, in the zone up to 400 m from the gate. At this distance, despite the increase in sediment grain-size ranges, D50 (median grain size) varies between 0.6 mm and 1.1 mm before flushing, against between 0.8 mm and 10 mm after. Overall, regarding the sewer channel invert slope, the results indicate that grains smaller than sand (< 2 mm) are mostly transported downstream along about 400 m from the gate: on average, 69% before against 38% after the flush, with more dispersion of the grain-size distributions. Furthermore, a strong effect of channel bed irregularities on the evolution of the bed material was observed after the flush.
Keywords: bed-load evolution, combined sewer systems, flushing efficiency, sediment transport
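The median grain size D50 reported above is conventionally read off a cumulative grain-size curve. The following minimal sketch interpolates D50 from percent-passing sieve data; the sieve values are hypothetical, not the Paris measurements.

```python
import numpy as np

def d50(sizes_mm, percent_finer):
    """Median grain size: the size at which 50% of the sample mass is finer.
    Log-linear interpolation on the cumulative grain-size curve."""
    logs = np.log10(sizes_mm)
    return 10 ** np.interp(50.0, percent_finer, logs)

# hypothetical sieve data: sieve openings (mm) and cumulative % passing
sizes = [0.063, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0]
passing = [5, 12, 30, 48, 62, 85, 100]
print(round(d50(sizes, passing), 2))  # -> 0.55
```

Note that `np.interp` requires the percent-passing values to be increasing, which a cumulative curve guarantees.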
Procedia PDF Downloads 403
832 Rapid Plasmonic Colorimetric Glucose Biosensor via Biocatalytic Enlargement of Gold Nanostars
Authors: Masauso Moses Phiri
Abstract:
Frequent glucose monitoring is essential to the management of diabetes. Plasmonic enzyme-based glucose biosensors have the advantages of greater specificity, simplicity, and rapidity. The aim of this study was to develop a rapid plasmonic colorimetric glucose biosensor based on the biocatalytic enlargement of gold nanostars (AuNS) guided by glucose oxidase (GOx). Gold nanoparticles of 18 nm in diameter were synthesized using the citrate method. Using these as seeds, a modified seeded method for the synthesis of monodispersed gold nanostars was followed. Both the spherical and star-shaped nanoparticles were characterized using ultraviolet-visible spectroscopy, agarose gel electrophoresis, dynamic light scattering, high-resolution transmission electron microscopy, and energy-dispersive X-ray spectroscopy. The feasibility of a plasmonic colorimetric assay through the growth of AuNS by silver coating in the presence of hydrogen peroxide was investigated through several control and optimization experiments. Conditions for excellent sensing, such as the concentration of the detection solution in the presence of 20 µL AuNS, 10 mM 2-(N-morpholino)ethanesulfonic acid (MES), ammonia, and hydrogen peroxide, were optimized. Using the optimized conditions, the glucose assay was developed by adding 5 mM GOx to the solution together with varying concentrations of glucose. Kinetic readings as well as color changes were observed. The results showed that the absorbance values of the AuNS increased and blue-shifted as the concentration of glucose was elevated. Control experiments indicated no growth of AuNS in the absence of GOx, glucose, or molecular O₂. Increased glucose concentration led to enhanced growth of AuNS. The detection of glucose was also done by naked eye. The color development was near complete in ±10 minutes. The kinetic readings, monitored at 450 and 560 nm, showed that the assay could discriminate between different concentrations of glucose within ±50 seconds and was near complete at ±120 seconds.
A calibration curve for the quantitative measurement of glucose was derived. The magnitude of the wavelength shifts and absorbance values increased concomitantly with glucose concentration up to 90 µg/mL; beyond that, the response leveled off. The range of glucose concentrations producing a blue shift in the localized surface plasmon resonance (LSPR) absorption maxima was found to be 10-90 µg/mL. The limit of detection was 0.12 µg/mL. This enabled the construction of a direct, sensitive plasmonic colorimetric detection of glucose using AuNS that is rapid and cost-effective, with naked-eye detection. It has great potential for technology transfer to point-of-care devices.
Keywords: colorimetric, gold nanostars, glucose, glucose oxidase, plasmonic
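A calibration curve and limit of detection of the kind described are commonly derived from a least-squares line through shift-versus-concentration data. The sketch below uses invented shift values, not the study's data, and the common 3.3·σ/slope LOD estimate as an assumption; it only illustrates the procedure.

```python
import numpy as np

# hypothetical calibration data: glucose concentration (ug/mL) vs LSPR peak shift (nm)
conc  = np.array([10, 30, 50, 70, 90], dtype=float)
shift = np.array([2.1, 6.0, 10.2, 13.9, 18.1])

slope, intercept = np.polyfit(conc, shift, 1)     # least-squares calibration line
residuals = shift - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                     # standard error of the fit
lod = 3.3 * sigma / slope                         # common LOD estimate

print(f"slope = {slope:.3f} nm per ug/mL, LOD = {lod:.2f} ug/mL")
```

An unknown sample's concentration is then `(measured_shift - intercept) / slope`, valid only inside the linear range.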
Procedia PDF Downloads 153
831 Decision-Making Process Based on Game Theory in the Process of Urban Transformation
Authors: Cemil Akcay, Goksun Yerlikaya
Abstract:
Buildings are the living spaces of people, with an active role in every aspect of life in today's world. While some structures have survived from the early ages, most buildings that completed their lifetime have not been carried over to the present day. Nowadays, buildings that do not meet the social, economic, and safety requirements of the age return to life through a transformation process. This transformation is called urban transformation. Urban transformation is the renewal of areas with a risk of disaster, together with the technological infrastructure required by the structures. The transformation aims to prevent damage from earthquakes and other disasters by rebuilding buildings that are not earthquake-resistant and have completed their economic life. It is essential to decide on the issues related to conversion and transformation in places such as Istanbul, where most of the building stock lies in a first-degree earthquake zone and should be transformed. In urban transformation, the property owners, the local authority, and the contractor must reach an agreement at a common point. Considering that there are sometimes hundreds of thousands of property owners in transformation areas, it is evident how difficult it is to reach such an agreement and decide. For the optimization of these decisions, the use of game theory is foreseen. The main question in this study is whether the urban transformation should be carried out in place or the building or buildings should be moved to a different location. The urban transformation planned for the Istanbul University Cerrahpaşa Medical Faculty Campus, which involves many stakeholders, was taken as a case for game theory applications. The decisions given on this real urban transformation project were analysed, and the logical suitability of decisions taken without the use of game theory was also examined using game theory.
In each step of this study, the many decision-makers are classified according to a specific logical sequence, and in the game trees that emerged from this classification, Nash equilibria were sought and optimum decisions were determined. All decisions taken for this project were subjected to two significantly differentiated comparisons, with and without the use of game theory, and according to the results, solutions for the decision phase of the urban transformation process are introduced. The game theory model was developed for the urban transformation process from beginning to end, particularly as a solution to the difficulty of making rational decisions in large-scale projects with many participants in the decision-making process. The use of such a decision-making mechanism can provide an optimum answer to the demands of the stakeholders. For today's construction sector, it is therefore unsurprising that game theory addresses planning and correct decision-making, which will remain among the most critical issues in the years to come.
Keywords: urban transformation, game theory, decision making, multi-actor project
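As a hedged illustration of how equilibria on such game trees can be found, the following toy sketch applies backward induction (which yields the subgame-perfect Nash outcome) to an invented two-player owner/contractor tree; the players, actions, and payoffs are illustrative only and are not taken from the Cerrahpaşa case.

```python
def backward_induction(node):
    """Return (payoff_tuple, action_path) of the subgame-perfect outcome.
    Leaves are payoff tuples (owner, contractor); internal nodes are
    (player_index, {action: subtree})."""
    if isinstance(node, tuple) and all(isinstance(x, (int, float)) for x in node):
        return node, []                      # leaf: payoffs, empty path
    player, branches = node
    best = None
    for action, child in branches.items():
        value, path = backward_induction(child)
        # the moving player picks the branch maximizing their own payoff
        if best is None or value[player] > best[0][player]:
            best = (value, [action] + path)
    return best

# contractor (index 1) chooses an offer; owners (index 0) accept or reject
game = (1, {
    "high offer": (0, {"accept": (8, 5), "reject": (0, 0)}),
    "low offer":  (0, {"accept": (3, 9), "reject": (0, 0)}),
})
value, path = backward_induction(game)
print(value, path)  # -> (3, 9) ['low offer', 'accept']
```

Anticipating that owners accept any positive payoff, the contractor rationally chooses the low offer; richer trees with hundreds of stakeholders are solved the same way, branch by branch from the leaves up.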
Procedia PDF Downloads 140
830 3D Non-Linear Analyses by Using Finite Element Method about the Prediction of the Cracking in Post-Tensioned Dapped-End Beams
Authors: Jatziri Y. Moreno-Martínez, Arturo Galván, Israel Enrique Herrera Díaz, José Ramón Gasca Tirado
Abstract:
In recent years, for the elevated viaducts in Mexico City, a construction system based on precast/pre-stressed concrete elements has been used, in which the bridge girders are divided in two parts by imposing a hinged support in sections where the bending moments originated by the gravity loads in a continuous beam are minimal. Precast concrete girders with dapped ends are a representative sample of a behavior with complex configurations of stresses that makes them more vulnerable to cracking due to flexure-shear interaction. The design procedures for the ends of dapped girders are well established and are based primarily on experimental tests performed for different configurations of reinforcement. The critical failure modes that can govern the design have been identified, and for each of them, methods for computing the reinforcing steel needed to achieve adequate safety against failure have been proposed. Nevertheless, the design recommendations do not include procedures for controlling diagonal cracking at the entrant corner under service loading. These cracks could cause water penetration and degradation because of the corrosion of the steel reinforcement. The lack of visual access to the area makes it difficult to detect this damage and take timely corrective actions. Three-dimensional non-linear numerical models based on the finite element method were built to study the cracking at the entrant corner of dapped-end beams, using the software package ANSYS v. 11.0. The cracking was numerically simulated using the smeared crack approach. The concrete structure was modeled using three-dimensional solid elements SOLID65, capable of cracking in tension and crushing in compression. The Drucker-Prager yield surface was used to include the plastic deformations. The longitudinal post-tension was modeled using LINK8 elements with multilinear isotropic hardening and von Mises plasticity.
The reinforcement was introduced with a smeared approach. The numerical models were calibrated using experimental tests carried out at the Instituto de Ingeniería, Universidad Nacional Autónoma de México. In these numerical models, the characteristics of the specimens were considered: a typical solution based on vertical stirrups (hangers) and on vertical and horizontal hoops, with a post-tensioned steel that contributed 74% of the flexural resistance. The post-tension is given by four steel wires with a 5/8’’ (16 mm) diameter. Each wire was tensioned to 147 kN and induced an average compressive stress of 4.90 MPa on the concrete section of the dapped end. The loading protocol consisted of applying symmetrical loading to reach the service load (180 kN). Given the good correlation between the experimental and numerical models, some additional numerical models were proposed considering different percentages of post-tension in order to find out how much this influences the appearance of cracking in the re-entrant corner of the dapped-end beams. It was concluded that increasing the percentage of post-tension decreases the displacements, and the cracking in the re-entrant corner takes longer to appear. The authors acknowledge the Universidad de Guanajuato, Campus Celaya-Salvatierra, and the financial support of PRODEP-SEP (UGTO-PTC-460) of the Mexican government. The first author acknowledges the Instituto de Ingeniería, Universidad Nacional Autónoma de México.
Keywords: concrete dapped-end beams, cracking control, finite element analysis, post-tension
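The post-tensioning figures above can be cross-checked arithmetically with σ = F/A. In the sketch below, the total force follows directly from the abstract's numbers, while the implied concrete section area is a derived quantity that the abstract does not state.

```python
# Quick arithmetic check of the reported post-tensioning (values from the abstract;
# the implied concrete area is derived here, not stated in the paper).
n_wires = 4
force_per_wire_kN = 147.0
stress_MPa = 4.90

total_force_kN = n_wires * force_per_wire_kN            # 4 x 147 = 588 kN
area_m2 = (total_force_kN * 1e3) / (stress_MPa * 1e6)   # sigma = F / A  ->  A = F / sigma
print(f"total force = {total_force_kN} kN, implied section area = {area_m2:.3f} m^2")
```

The reported 4.90 MPa average stress is thus consistent with a dapped-end section of roughly 0.12 m².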
Procedia PDF Downloads 226
829 Exploring the Potential of Mobile Learning in Distance Higher Education: A Case Study of the University of Jammu, Jammu, and Kashmir
Authors: Darshana Sharma
Abstract:
Distance education has emerged as a viable alternative to serve the higher educational needs of the socially and economically disadvantaged people of the remote, rural areas of the Jammu region. The University of Jammu is a National Assessment and Accreditation Council accredited A+ university and has been accorded graded autonomy by the University Grants Commission. It is a dual-mode university offering academic programmes through the regular departments and through the Directorate of Distance Education (DDE). The Directorate of Distance Education, University of Jammu, still uses printed study material as the mode of instructional delivery. The development of technologies has ensured increased interaction and communication for distance learners throughout the open and distance learning institutions. Though it is tempting and convenient to adopt technology already being used by others, it may not prove effective, for the simple reason that two institutions may be unalike in some respect. The use of technology must be conceived in view of the needs of the learners; the geographical, socio-economic, cultural, and technological contexts; and the financial, administrative, and academic resources of the institution. Mobile learning (m-learning) is a novel approach to knowledge acquisition and dissemination and is gaining global attention. It has evolved as one of the useful channels of distance learning, promoting interaction between learners and teachers. It is felt that the Directorate of Distance Education, University of Jammu, also needs to adopt new technologies to provide more effective academic and information support to distance learners, in order to keep them motivated and to develop their self-learning skills. The chief objective of the research on which this paper is based was to measure the opinion of the distance learners of the DDE, University of Jammu, about the merits of mobile learning. It also explores their preferences for implementing mobile learning.
The survey research design of descriptive research was used. The data were collected from 400 distance learners enrolled in undergraduate and postgraduate programmes, using a self-constructed questionnaire containing five-point Likert-scale items ranging from strongly agree, agree, indifferent, disagree, to strongly disagree. Percentages were used to analyze the data. The findings lead to the conclusion that mobile learning has great potential for the DDE for reaching out to the rural, remotely located distance learners of the Jammu region and for improving the teaching-learning environment. The paper also identifies the challenges in the implementation of mobile learning in the region and makes suggestions for the effective implementation of mobile learning in the DDE, University of Jammu.
Keywords: directorate of distance education, mobile learning, national accreditation and assessment council, university of Jammu
Procedia PDF Downloads 123
828 Characterization of Polymorphic Forms of Rifaximin
Authors: Ana Carolina Kogawa, Selma Gutierrez Antonio, Hérida Regina Nunes Salgado
Abstract:
Rifaximin is an oral antimicrobial, gut-selective and non-systemic, with adverse effects comparable to placebo. It is used for the treatment of hepatic encephalopathy, travelers' diarrhea, irritable bowel syndrome, Clostridium difficile infection, ulcerative colitis, and acute diarrhea. The crystalline form present in the rifaximin with minimal systemic absorption is α, with the amorphous form behaving significantly differently. Regulators are paying increasing attention to polymorphism. Polymorphs can alter drug characteristics, compromising the effectiveness and safety of the finished product. The International Conference on Harmonization issued the ICH Guidance Q6A, which aims to improve the control of polymorphism in new and existing pharmaceuticals. The objective of this study was to obtain polymorphic forms of rifaximin by recrystallization processes and to characterize them by thermal analysis (thermogravimetry, TG, and differential scanning calorimetry, DSC), X-ray diffraction, scanning electron microscopy, and solubility testing. Six polymorphic forms of rifaximin, designated I to VI, were obtained by crystallization through evaporation of the solvent. The profiles of the TG curves obtained from the polymorphic forms of rifaximin are similar to that of rifaximin and to each other; however, the DTG curves are different, indicating different thermal behaviors. The melting temperatures of all the polymorphic forms were higher than that shown by rifaximin, indicating the higher thermal stability of the obtained forms. The comparison of the diffractograms of the polymorphic forms of rifaximin with those of rifaximin α, β, and γ described in the patent indicates that forms V and VI are formed by a mixture of polymorphs β and α, and form III is formed by polymorph β. The polymorphic form I is also formed by polymorph β, but with a significant amount of amorphous material. The polymorphic form II, in turn, consists of the amorphous polymorph γ.
Under the scanning electron microscope, it is possible to observe the heterogeneity of the morphological characteristics of the crystals of the polymorphic forms, among themselves and compared with rifaximin. The solubility of forms I and II was greater than that of rifaximin, whereas forms III, IV, and V presented lower solubility than rifaximin. Since the bioavailability of the amorphous form of rifaximin is considered significantly higher than that of form α, the polymorphic forms obtained in this work cannot guarantee the excellent tolerability of the reference medicine. Therefore, studies like these are extremely important, and they point to the need for stricter requirements by the competent regulatory agencies regarding polymorph analysis of the raw materials used in the manufacture of medicines marketed globally. These analyses are not required in the majority of official compendia. Partnerships between industries, research centers, and universities would be a viable way to consolidate research in this area and contribute to improving the quality of solid drugs.
Keywords: electronic microscopy, polymorphism, rifaximin, solubility, X-ray diffraction
Procedia PDF Downloads 664
827 A Semi-Automated GIS-Based Implementation of Slope Angle Design Reconciliation Process at Debswana Jwaneng Mine, Botswana
Authors: K. Mokatse, O. M. Barei, K. Gabanakgosi, P. Matlhabaphiri
Abstract:
The mining of pit slopes is often associated with some level of deviation from design recommendations, and this may translate into changes in the stability of the excavated pit slopes. Therefore, slope angle design reconciliations are essential for assessing and monitoring the compliance of excavated pit slopes with accepted slope designs. These changes in slope stability may be reflected in changes in the calculated factors of safety and/or probabilities of failure. Reconciliations of as-mined and slope design profiles are conducted periodically to assess the implications of these deviations for pit slope stability. Currently, the slope design reconciliation process implemented at Jwaneng Mine involves the measurement of as-mined and design slope angles along vertical sections cut along the established geotechnical design section lines in the GEOVIA GEMS™ software. Bench retention is calculated as the percentage of the available catchment area, less over-mined and under-mined areas, relative to the designed catchment area. This process has proven tedious and requires a lot of manual effort and time to execute. Consequently, a new semi-automated mine-to-design reconciliation approach that utilizes laser scanning and GIS-based tools is being proposed at Jwaneng Mine. This method involves high-resolution scanning of targeted bench walls, the subsequent creation of 3D surfaces from point-cloud data, and the derivation of slope toe lines and crest lines in the Maptek I-Site Studio software. The toe lines and crest lines are then exported to the ArcGIS software, where distance offsets between the design and actual bench toe lines and crest lines are calculated. Retained bench catchment capacity is measured as the distance between the toe lines and crest lines at the same bench elevations.
The assessment of the performance of the inter-ramp and overall slopes entails the measurement of excavated and design slope angles along vertical sections in the ArcGIS software. Excavated and design toe-to-toe or crest-to-crest slope angles are measured for inter-ramp stack slope reconciliations. Crest-to-toe slope angles are also measured for overall slope angle design reconciliations. The proposed approach allows for a more automated, accurate, quick, and easy workflow for carrying out slope angle design reconciliations. This process has proved highly effective and timeous in the assessment of slope performance at Jwaneng Mine. This paper presents the newly proposed process for assessing compliance with slope angle designs at Jwaneng Mine.
Keywords: slope angle designs, slope design recommendations, slope performance, slope stability
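The crest-to-toe slope angle measurement described above reduces to simple trigonometry on points taken from the exported toe and crest lines. The following sketch is a hypothetical helper with illustrative coordinates, not the Jwaneng workflow or an ArcGIS tool.

```python
import math

def slope_angle_deg(toe, crest):
    """Slope angle from a toe point to a crest point, each given as (x, y, z)
    in metres: arctangent of vertical rise over horizontal distance."""
    dx, dy = crest[0] - toe[0], crest[1] - toe[1]
    horizontal = math.hypot(dx, dy)
    vertical = crest[2] - toe[2]
    return math.degrees(math.atan2(vertical, horizontal))

# toe at the pit bottom, crest on the rim (illustrative coordinates only)
print(round(slope_angle_deg((0.0, 0.0, 0.0), (120.0, 0.0, 100.0)), 1))  # -> 39.8
```

Comparing such angles computed from the as-mined toe/crest lines against the design lines, section by section, is the core of the reconciliation.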
Procedia PDF Downloads 237
826 Relations between the Internal Employment Conditions of International Organizations and the Characteristics of the National Civil Service
Authors: Renata Hrecska
Abstract:
This research seeks to fully examine the internal employment law of international organizations by comparing it with the characteristics of the national civil service. The aim of the research is to compare the legal system that has developed over many centuries with the relatively new internal staffing regulations, to find out which solution schemes can help each other through mutual legal development in order to respond effectively to the social challenges of everyday life. Generally, the civil service rules of any country or international entity have in common that they inherently serve public interests. Behind this common base, however, there are many differences: there is the clear fragmentation of state regulation and the unity of organizational regulation. On the other hand, this difference disappears to some extent: the public service regulation of international organizations can be considered uniform as long as we examine it within, but not across, organizations. As soon as we compare different organizations, we find many different solutions for staffing regulations. It is clear that the national civil service is a strong model for international organizations, but the question is whether the staffing policy of international organizations can serve as an example for the national civil service, too. In this respect, the easiest legislative environment to imagine would be a single comprehensive code whose general part is the Civil Service Act itself, with the specific part containing the specific, necessarily differentiating rules for each layer of the civil service. Would it be advantageous to follow in the footsteps of the leading international organizations, or are there specificities of the national civil service that cannot be avoided in the regulatory process?
In addition to the above, the personal competencies of officials working in international organizations and in public administrations show a high degree of similarity, regardless of the type of employment. Thus, the whole public service system is characterized by the fundamental and special values that a person capable of holding public office must be able to demonstrate, in some cases even without special qualifications. It is also interesting to compare the two spheres of employment in light of the theory of Louis Brandeis, a justice of the US Supreme Court, who formulated a complex theory of the professions as distinguished from other occupations. From this point of view, we can examine the continuous development of research and specialized knowledge at work; community recognition and social status; the extent to which a close-knit professional organization of altruistic philosophy can be seen; how stability in working conditions grows with the stability of the profession; and how the autonomy of the profession can prevail.
Keywords: civil service, comparative law, international organizations, regulatory systems
Procedia PDF Downloads 134
825 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method
Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry
Abstract:
The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and it thus plays an important role in numerous specifications such as durability, comfort, and crash. During the development of new vehicle projects at Renault, durability validation is always the main focus, while deployment of comfort comes later in the project. Therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. In addition, robustness is an important point of concern, as it is related to manufacturing costs as well as to performance after the ageing of components like shock absorbers. In this paper, an approach is proposed that aims to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series is applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces, which statistically represent a black-box system. Secondly, within several iterations, an optimum set is proposed and validated, which will form a Pareto front. At the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameter tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example, applying road excitations from actual road measurements for both endurance and comfort calculations.
One indicator based on Basquin's law is defined to compare the global chassis durability of different parameter settings. Another indicator, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and the reference tests prove a good robustness prediction by the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design
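As a hedged illustration of the response-surface step, the following minimal 1-D sketch fits a Chebyshev expansion to a stand-in response over a bounded parameter interval and reads off the resulting response bounds. It is not the authors' multi-dimensional adaptive-sparse implementation; the response function and interval are invented for the example.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def response(x):
    """Stand-in for an expensive black-box simulation response."""
    return np.exp(-x) * np.sin(3 * x)

lo, hi = 0.0, 2.0                            # uncertain-but-bounded parameter interval
x = np.linspace(lo, hi, 64)
coeffs = C.chebfit(x, response(x), deg=8)    # least-squares Chebyshev surrogate

xs = np.linspace(lo, hi, 1001)
ys = C.chebval(xs, coeffs)                   # cheap evaluation of the surrogate
print(f"response interval ~ [{ys.min():.3f}, {ys.max():.3f}]")
```

Once such a surrogate is cheap to evaluate, the uncertainty interval of each response over the bounded parameters, and hence the robustness objective, can be estimated without re-running the simulation.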
Procedia PDF Downloads 152
824 Collaboration with Governmental Stakeholders in Positioning Reputation on Value
Authors: Zeynep Genel
Abstract:
The concept of reputation in corporate development has come to the fore as one of the most frequently discussed topics in recent years. Many organizations that make worldwide investments make an effort to adapt themselves to the topics within the scope of this concept and to promote the name of the organization through the values that might become prominent. Stakeholder groups are considered the most important actors determining reputation. Even when the effect of stakeholders is not evaluated as a direct factor, the indirect effects of their perception on the ultimate reputation are very strong. It is foreseen that the parallelism between the projected reputation and the perceived reputation, which is established as a result of the communication experiences of the stakeholders, has an important effect on achieving these objectives. In assessing the efficiency of these efforts, the opinions of stakeholders are widely utilized. In other words, the projected reputation, in which the positive and/or negative reflections of corporate communication play an effective role, is measured through how the stakeholders perceptively position the organization. From this perspective, it is thought that the interaction and cooperation of corporate communication professionals with different stakeholder groups during reputation positioning efforts play a significant role in achieving the targeted reputation and in the sustainability of this value. Governmental stakeholders, which communicate intensely with mass stakeholder groups, are among the most influential stakeholder groups of an organization. The most important reason for this is that organizations of which governmental stakeholders have a positive perception inspire more confidence in the mass stakeholders.
At this point, organizations carrying out joint projects with governmental stakeholders in parallel with a sustainable communication approach come to the fore as organizations with a strong reputation, whereas the reputation of organizations that fall behind in this regard, or that cannot achieve efficiency in this aspect, is thought to be perceived as weak. Similarly, social responsibility campaigns in which governmental stakeholders are involved, and which play an efficient role in strengthening reputation, are thought to draw more attention. From this perspective, the role and effect of governmental stakeholders on reputation positioning are discussed in this study. In parallel with this objective, it is aimed to reveal the perspectives of seven governmental stakeholders towards cooperation in reputation positioning. The sample group representing the governmental stakeholders is examined in light of the results obtained from in-depth interviews with executives of different ministries. It is asserted that this study, which aims to express the importance of stakeholder participation in corporate reputation positioning, especially in Turkey, and the effective role of governmental stakeholders in strong reputation, might provide a new perspective on measuring corporate reputation, as well as establish an important source contributing to studies in both academic and practical domains.
Keywords: collaborative communications, reputation management, stakeholder engagement, ultimate reputation
Procedia PDF Downloads 225
823 Integration, a Tool to Develop Critical Thinking Skills of Undergraduate Veterinary Students
Authors: M. L. W. P. De Silva, R. A. C. Rabel, N. Smith, L. McIntyre, T. J. Parkinson, K. A. N. Wijayawardhane
Abstract:
Curricular integration is an important concept in medical education for developing students’ ability to create connections between different medical disciplines. Problem-Based Learning (PBL) is one of the vehicles through which such integration can be achieved. During the recent review of the veterinary curriculum at the University of Peradeniya, a series of courses in Integrative Veterinary Science (IVS) were introduced, in which PBL was the primary teaching methodology. The objectives of this study were to evaluate students’ opinions on PBL as a teaching method: it should be noted that, within the context of secondary and tertiary education in Sri Lanka, this would be an entirely novel learning experience for the students. Opinions were sought at the conclusion of IVS sessions where students of semesters 2, 4, 6, and 7 (of an 8-semester program) were exposed to two 2-hour PBL-based case scenarios. The PBL-based case scenarios in semesters 2, 4, and 7 were delivered using material previously developed by an experienced PBL practitioner, whilst material for semester 6 was prepared de novo by a less experienced practitioner. Each student (semesters 2: n=38, 4: n=37, 6: n=55, and 7: n=40) completed a questionnaire which asked whether: (i) the course had improved their critical thinking skills; (ii) the learning environment was sufficiently comfortable to express/share student’s opinion; (iii) there was sufficient facilitator guidance; (iv) the online study environment enhanced learning; and (v) the students were overall satisfied with the PBL approach and IVS concept. Responses were given on a 5-point Likert-scale (strongly agree (SA), agree (A), neutral (N), disagree (D), and strongly disagree (SD)). SA and A responses were summed to provide an overall ‘satisfactory’ response. Results were subjected to frequency-distribution statistical analysis. A total of 88.5% of students gave SA+A scores to their overall satisfaction.
The proportion of SA+A scores differed between semesters, such that 95% of semester 2, 4, and 7 students gave SA+A scores, whereas only 69% of semester 6 students did so for their respective sessions. Overall, 96% of the students gave SA+A scores to the question relating to the improvement of critical thinking skills: semester 6 students’ scores were marginally, but not significantly, lower (91% SA+A) than those in other semesters. The difference in scores between semester 6 and the other semesters may be attributed to the different PBL material used and/or the different experience levels of the practitioners who developed the study material. The use of PBL as a means of teaching IVS curriculum-integration courses was well-received by the students in terms of their overall satisfaction and their perceptions of improved critical thinking skills. Importantly, this was achieved in the face of a methodology that was entirely novel to the students. Finally, the delivery of the PBL medium was readily mastered by the practitioner to whom it was also a novel methodology. Keywords: critical thinking skills, integration, problem-based learning, veterinary education
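The satisfaction metric used above (summing "strongly agree" and "agree" responses into a single satisfactory score) can be sketched as a short calculation; the response counts below are hypothetical and are not the study's data.

```python
def satisfactory_rate(counts):
    # counts: responses on the 5-point Likert scale, keyed SA/A/N/D/SD
    # SA and A responses are summed into a single 'satisfactory' percentage
    total = sum(counts.values())
    return 100.0 * (counts["SA"] + counts["A"]) / total

# Hypothetical response counts for one semester group (not the study's data)
semester_counts = {"SA": 20, "A": 14, "N": 3, "D": 1, "SD": 0}
overall = satisfactory_rate(semester_counts)
```

Running the same calculation per semester and per question reproduces the frequency-distribution summaries reported in the abstract.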
Procedia PDF Downloads 133
822 Creep Analysis and Rupture Evaluation of High Temperature Materials
Authors: Yuexi Xiong, Jingwu He
Abstract:
The structural components in an energy facility, such as steam turbine machines, are operated under high stress and elevated temperature over extended periods of time, and thus creep deformation and creep rupture failure are important issues that need to be addressed in the design of such components. There are numerous creep models used for creep analysis, each with advantages and disadvantages in terms of accuracy and efficiency. The Isochronous Creep Analysis is one of the simplified approaches, in which a fully time-dependent creep analysis is avoided and instead an elastic-plastic analysis is conducted at each time point. This approach has been established based on the rupture-dependent creep equations using the well-known Larson-Miller parameter. In this paper, some fundamental aspects of creep deformation and the rupture-dependent creep models are reviewed, and the analysis procedures using isochronous creep curves are discussed. Four rupture failure criteria are examined from fundamental creep perspectives: the Stress Damage, Strain Damage, Strain Rate Damage, and Strain Capability criteria. The accuracy of these criteria in predicting creep life is discussed, and applications of the creep analysis procedures and failure predictions on simple models are presented. In addition, a new failure criterion is proposed to improve the accuracy and effectiveness of the existing criteria. Comparisons are made between the existing criteria and the new one using several example materials. Both strain increase and stress relaxation form a full picture of the creep behaviour of a material under high temperature over an extended period of time, and it is important to bear this in mind when dealing with creep problems. Accordingly, there are two sets of rupture-dependent creep equations.
While the rupture strength vs. LMP equation shows how the rupture time depends on the stress level under load-controlled conditions, the strain rate vs. rupture time equation reflects how the rupture time behaves under strain-controlled conditions. Among the four existing failure criteria for rupture life prediction, the Stress Damage and Strain Damage Criteria provide the most conservative and non-conservative predictions, respectively. The Strain Rate and Strain Capability Criteria provide predictions in between, which are believed to be more accurate because strain rate and strain capability are more directly determined quantities than stress for reflecting creep rupture behaviour. A modified Strain Capability Criterion is proposed that makes use of the two sets of creep equations and is therefore considered to be more accurate than the original Strain Capability Criterion. Keywords: creep analysis, high temperature materials, rupture evaluation, steam turbine machines
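The Larson-Miller parameter (LMP) underlying the isochronous approach relates absolute temperature T and rupture time t_r as LMP = T(C + log10 t_r), so a rupture test at one temperature can be extrapolated to another. A minimal sketch, assuming the commonly used material constant C of about 20 and temperature in kelvin (both the constant and the units vary by material and convention):

```python
import math

def larson_miller(temp_k, rupture_hours, c=20.0):
    # LMP = T * (C + log10(t_r)); C ~ 20 is a commonly assumed material constant
    return temp_k * (c + math.log10(rupture_hours))

def rupture_time(temp_k, lmp, c=20.0):
    # Invert the relation to estimate rupture life (hours) at temperature temp_k
    return 10.0 ** (lmp / temp_k - c)

# A given LMP maps a short high-temperature test onto a longer life at
# lower temperature, which is the basis of isochronous extrapolation
lmp = larson_miller(873.0, 1.0e4)  # e.g. 10,000 h at 600 degrees C (873 K)
```

At the same LMP, lowering the temperature lengthens the predicted rupture life, which is how accelerated test data are extrapolated to service conditions.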
Procedia PDF Downloads 290
821 Ultra-deformable Drug-free Sequessome™ Vesicles (TDT 064) for the Treatment of Joint Pain Following Exercise: A Case Report and Clinical Data
Authors: Joe Collins, Matthias Rother
Abstract:
Background: Oral non-steroidal anti-inflammatory drugs (NSAIDs) are widely used for the relief of joint pain during and post-exercise. However, oral NSAIDs increase the risk of systemic side effects, even in healthy individuals, and retard recovery from muscle soreness. TDT 064 (Flexiseq®), a topical formulation containing ultra-deformable drug-free Sequessome™ vesicles, has demonstrated equivalent efficacy to oral celecoxib in reducing osteoarthritis-associated joint pain and stiffness. TDT 064 does not cause NSAID-related adverse effects. We describe clinical study data and a case report on the effectiveness of TDT 064 in reducing joint pain after exercise. Methods: Participants with a pain score ≥3 (10-point scale) 12–16 hours post-exercise were randomized to receive TDT 064 plus oral placebo, TDT 064 plus oral ketoprofen, or ketoprofen in ultra-deformable phospholipid vesicles plus oral placebo. Results: In the 168 study participants, pain scores were significantly higher with oral ketoprofen plus TDT 064 than with TDT 064 plus placebo in the 7 days post-exercise (P = 0.0240) and recovery from muscle soreness was significantly longer (P = 0.0262). There was a low incidence of adverse events. These data are supported by clinical experience. A 24-year-old male professional rugby player suffered a traumatic Lisfranc fracture in March 2014 and underwent operative reconstruction. He had no relevant medical history and was not receiving concomitant medications. He had undergone anterior cruciate ligament reconstruction in 2008. The patient reported restricted training due to pain (score 7/10), stiffness (score 9/10) and poor function, as well as pain when changing direction and running on consecutive days. In July 2014 he started using TDT 064 twice daily at the recommended dose.
In November 2014 he noted reduced pain on running (score 2–3/10), decreased morning stiffness (score 4/10) and improved joint mobility, and was able to return to competitive rugby without restrictions. No side effects of TDT 064 were reported. Conclusions: TDT 064 shows efficacy against exercise- and injury-induced joint pain, as well as that associated with osteoarthritis. It does not retard muscle soreness recovery after exercise compared with an oral NSAID, making it an alternative approach for the treatment of joint pain during and post-exercise. Keywords: exercise, joint pain, TDT 064, phospholipid vesicles
Procedia PDF Downloads 480
820 Response of Caldeira De Tróia Saltmarsh to Sea Level Rise, Sado Estuary, Portugal
Authors: A. G. Cunha, M. Inácio, M. C. Freitas, C. Antunes, T. Silva, C. Andrade, V. Lopes
Abstract:
Saltmarshes are essential ecosystems from both an ecological and a biological point of view. Furthermore, they constitute an important social niche, providing valuable economic and protection functions. Thus, understanding their rates and patterns of sedimentation is critical for functional management and rehabilitation, especially in a sea level rise (SLR) scenario. The Sado estuary is located 40 km south of Lisbon. It is a bar-built estuary, separated from the sea by a large sand spit: the Tróia barrier. Caldeira de Tróia is located on the free edge of this barrier and encompasses a saltmarsh of ca. 21,000 m². Sediment cores were collected in the high and low marshes and in the mudflat area of the north bank of Caldeira de Tróia. From the low marsh core, fifteen samples were chosen for ²¹⁰Pb and ¹³⁷Cs determination at the University of Geneva. The cores from the high marsh and the mudflat are still being analyzed. A sedimentation rate of 2.96 mm/year was derived from ²¹⁰Pb using the Constant Flux Constant Sedimentation model. The ¹³⁷Cs profile shows a peak in activity (1963) between 15.50 and 18.50 cm, giving a 3.1 mm/year sedimentation rate for the past 53 years. The adopted sea level rise scenario was based on a model built with an initial SLR rate of 2.1 mm/year in 2000 and an acceleration of 0.08 mm/year². Based on the harmonic analysis of 2005 data from the Setubal-Tróia tide gauge, the tide model was estimated and used to build tidal tables for the period 2000–2016. With these tables, the average mean water levels were determined for the same time span. A digital terrain model was created from LIDAR scanning with 2 m horizontal resolution (APA-DGT, 2011) and validated with altimetric data obtained with DGPS-RTK. The response model calculates a new elevation for each pixel of the DTM for 2050 and 2100, based on the sedimentation rates specific to each environment.
At this stage, theoretical values were chosen for the high marsh and the mudflat (respectively, equal to and double the low marsh rate – 2.92 mm/year). These values will be rectified once sedimentation rates are determined for the other environments. For both projections, the total surface of the marsh decreases: by 2% in 2050 and by 61% in 2100. Additionally, the high marsh coverage diminishes significantly, indicating a regression in terms of maturity. Keywords: ¹³⁷Cs, ²¹⁰Pb, saltmarsh, sea level rise, response model
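The adopted SLR scenario (an initial rate of 2.1 mm/year in 2000 with an acceleration of 0.08 mm/year²) can be integrated and compared against marsh accretion. A minimal sketch, assuming the cumulative rise is the integral of a linearly accelerating rate and that the ²¹⁰Pb-derived low marsh rate stays constant over the projection period:

```python
def sea_level_rise_mm(year, r0=2.1, accel=0.08, t0=2000):
    # Cumulative rise (mm) since t0 for an initial rate r0 (mm/yr)
    # under constant acceleration accel (mm/yr^2):
    # integral of (r0 + accel * t) dt from 0 to (year - t0)
    dt = year - t0
    return r0 * dt + 0.5 * accel * dt ** 2

def accretion_mm(year, rate=2.96, t0=2000):
    # Vertical accretion (mm) at the 210Pb-derived low marsh rate
    return rate * (year - t0)

# Elevation deficit relative to sea level for the two projection years
deficits = {yr: sea_level_rise_mm(yr) - accretion_mm(yr) for yr in (2050, 2100)}
```

A positive deficit means the marsh loses elevation relative to sea level; the deficit grows sharply between 2050 and 2100, consistent with the projected loss of marsh surface.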
Procedia PDF Downloads 250
819 Urban Waste Water Governance in South Africa: A Case Study of Stellenbosch
Authors: R. Malisa, E. Schwella, K. I. Theletsane
Abstract:
Due to climate change, population growth and rapid urbanization, the demand for water in South Africa is inevitably surpassing supply. To address similar challenges globally, there has been a paradigm shift in urban waste water management from conventional “government” to a “governance” paradigm. From the governance paradigm, the Integrated Urban Water Management (IUWM) principle emerged. This principle emphasizes efficient urban waste water treatment and the production of high-quality recyclable effluent, thereby mimicking natural water systems in their efficient recycling of water and averting the depletion of natural water resources. The objective of this study was to investigate the drivers of shifting the current urban waste water management approach from a “government” paradigm towards “governance”. The study was conducted through the Interactive Management soft systems research methodology, which follows a qualitative research design. A case study methodology was employed, guided by a realism research philosophy. The qualitative data gathered were analyzed through interpretative structural modelling using the Concept Star for Professionals Decision-Making tools (CSPDM) version 3.64. The constructed model deduced that the main drivers in shifting the Stellenbosch municipal urban waste water management towards IUWM “governance” principles are mainly social elements, characterized by overambitious public expectations of municipal water service delivery, misinterpretation of the constitutional provision of access to adequate clean water and sanitation as a human right, and the perceptions of different communities on recycled water. Inadequate public participation also emerged as a strong driver. However, disruptive events such as drought may play a positive role in raising awareness of the value of water, resulting in a shift in perceptions of recycled water.
Once the social elements are addressed, the alignment of governance and administration elements towards IUWM is achievable. Hence, the point of departure for the desired paradigm shift is a change in the perceptions and behaviors of water service authorities and serviced communities towards shifting urban waste water management approaches from the “government” to the “governance” paradigm. Keywords: integrated urban water management, urban water system, wastewater governance, wastewater treatment works
Procedia PDF Downloads 157
818 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images
Authors: Eiman Kattan, Hong Wei
Abstract:
In using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters of a CNN architecture, i.e., AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to assess the effectiveness of the selected parameters using two implementation approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, the amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments; it has shown efficiency in both training and testing. The results show that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly dependent on the dataset. For the batch size evaluation, a larger batch size slightly decreases the classification accuracy compared to a small batch size.
For example, selecting a batch size of 32 on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 yields an accuracy rate of 86.5% at the 11th epoch, and 63% when only one epoch is used. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, a filter size of 20 produces an accuracy of 70.4286%. The final experiment, on image size, shows that accuracy improves with input image size, although this gain comes at considerable computational expense. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing. Keywords: CNNs, hyperparameters, remote sensing, land cover, land use
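The hyperparameter sweep described above (batch sizes, kernel sizes, and input image sizes) amounts to enumerating a grid of training configurations; a minimal sketch of that search space only, since the actual training ran on a GPU platform:

```python
from itertools import product

# Hyperparameter values evaluated in the study
batch_sizes = [32, 64, 128, 200]
kernel_sizes = [1, 3, 5, 7, 10, 15, 20, 25, 30]
image_sizes = [64, 96, 128, 180, 224]

# Enumerate every (batch, kernel, image) combination; in practice each
# entry would correspond to one training run whose per-epoch accuracy
# is recorded and compared
grid = [(b, k, s) for b, k, s in product(batch_sizes, kernel_sizes, image_sizes)]
```

Each tuple defines one experimental condition; the epoch dimension is then observed within each run rather than enumerated here.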
Procedia PDF Downloads 169
817 Heat Transfer Performance of a Small Cold Plate with Uni-Directional Porous Copper for Cooling Power Electronics
Authors: K. Yuki, R. Tsuji, K. Takai, S. Aramaki, R. Kibushi, N. Unno, K. Suzuki
Abstract:
A small cold plate with uni-directional porous copper is proposed for cooling power electronics such as an on-vehicle inverter with a heat generation of approximately 500 W/cm². The uni-directional porous copper, with its pores oriented perpendicular to the heat transfer surface, is soldered to a grooved heat transfer surface. This structure enables the cooling liquid to evaporate in the pores of the porous copper and the vapor to be discharged through the grooves. In order to minimize the size of the cold plate, a double flow channel concept is introduced for the design of the cold plate. The cold plate consists of a base plate, a spacer, and a vapor discharging plate, 12 mm in total thickness. The base plate has multiple nozzles of 1.0 mm in diameter for the liquid supply and 4 slits of 2.0 mm in width for vapor discharging, and is attached onto the top surface of the porous copper plate of 20 mm in diameter and 5.0 mm in thickness. The pore size is 0.36 mm and the porosity is 36%. The cooling liquid flows into the porous copper as an impinging jet flow from the multiple nozzles, and then the vapor, which is generated in the pores, is discharged through the grooves and the vapor slits outside the cold plate. The heated test section consists of the cold plate, explained above, and a heat transfer copper block with 6 cartridge heaters. The cross section of the heat transfer block is reduced in order to increase the heat flux. The top surface of the block is the grooved heat transfer surface of 10 mm in diameter, onto which the porous copper is soldered. The grooves are fabricated like latticework, and their width and depth are 1.0 mm and 0.5 mm, respectively. By embedding three thermocouples in the cylindrical part of the heat transfer block, the temperature of the heat transfer surface and the heat flux are extrapolated at steady state. In this experiment, the flow rate is 0.5 L/min and the flow velocity at each nozzle is 0.27 m/s. The liquid inlet temperature is 60 °C.
The experimental results prove that, in the single-phase heat transfer regime, the heat transfer performance of the cold plate with the uni-directional porous copper is 2.1 times higher than that without the porous copper, though the pressure loss with the porous copper also becomes higher than that without it. As for the two-phase heat transfer regime, the critical heat flux (CHF) increases by approximately 35% with the introduction of the uni-directional porous copper, compared with the CHF of the multiple impinging jet flow. In addition, we confirmed that these heat transfer data were much higher than those of an ordinary single impinging jet flow. These data prove the high potential of the cold plate with the uni-directional porous copper from the viewpoint of not only heat transfer performance but also energy saving. Keywords: cooling, cold plate, uni-porous media, heat transfer
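The thermocouple extrapolation described above amounts to a one-dimensional Fourier conduction fit: temperatures measured at known depths are fitted with a line whose intercept estimates the surface temperature and whose slope, multiplied by the conductivity, gives the heat flux. A minimal sketch with hypothetical readings (the depths, temperatures, and conductivity value are illustrative, not the paper's data):

```python
def fit_line(xs, ys):
    # Ordinary least-squares fit y = a + b*x to the thermocouple readings
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical thermocouple depths below the surface (m) and temperatures (deg C)
depths = [0.005, 0.010, 0.015]
temps = [120.0, 150.0, 180.0]

t_surface, grad = fit_line(depths, temps)  # intercept: surface temperature; slope: dT/dx
k_copper = 390.0                           # thermal conductivity of copper, W/(m K)
q_flux = k_copper * grad                   # heat flux conducted toward the surface, W/m^2
```

With three thermocouples the fit also gives a visual check on linearity, i.e., whether the steady-state 1-D conduction assumption holds in the block.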
Procedia PDF Downloads 295
816 Optimizing Oil Production through 30-Inch Pipeline in Abu-Attifel Field
Authors: Ahmed Belgasem, Walid Ben Hussin, Emad Krekshi, Jamal Hashad
Abstract:
Waxy crude oil, characterized by its high paraffin wax content, poses significant challenges in the oil and gas industry due to its increased viscosity and semi-solid state at reduced temperatures. The wax formation process, which includes precipitation, crystallization, and deposition, becomes problematic when crude oil temperatures fall below the wax appearance temperature (WAT) or cloud point. Addressing these issues, this paper introduces a technical solution designed to mitigate wax appearance and enhance the oil production process in the Abu-Attifel Field via a 30-inch crude oil pipeline. A comprehensive flow assurance study validates the feasibility and performance of this solution across various production rates, temperatures, and operational scenarios. The study's findings indicate that maintaining the crude oil's temperature above a minimum threshold of 63°C is achievable through the strategic placement of two heating stations along the pipeline route. This approach effectively prevents wax deposition, gelling, and subsequent mobility complications, thereby bolstering the overall efficiency, reliability, safety, and economic viability of the production process. Moreover, this solution significantly curtails the environmental repercussions traditionally associated with wax deposition, which can accumulate up to 7,500 kg. The research methodology involves a comprehensive flow assurance study to validate the feasibility and performance of the proposed solution. The study considers various production rates, temperatures, and operational scenarios. It includes crude oil analysis to determine the WAT, as well as the evaluation and comparison of operating options for the heating stations. The study's findings indicate that the proposed solution effectively prevents wax deposition, gelling, and subsequent mobility complications.
By maintaining the crude oil's temperature above the specified threshold, the solution improves the overall efficiency, reliability, safety, and economic viability of the oil production process. Additionally, the solution contributes to reducing the environmental repercussions associated with wax deposition. In conclusion, the research presents a technical solution that optimizes oil production in the Abu-Attifel Field by addressing wax formation problems through the strategic placement of two heating stations. The solution effectively prevents wax deposition, improves overall operational efficiency, and contributes to environmental sustainability. Further research is suggested to validate the solution against field data and to explore a cost-benefit analysis. Keywords: oil production, wax depositions, solar cells, heating stations
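The heating station placement can be reasoned about with a simple cooling model. The sketch below assumes a lumped exponential temperature decay along the pipeline, with illustrative inlet temperature, ambient temperature, and decay length (none of these parameters are from the study), and solves for the distance at which the oil reaches the 63°C threshold:

```python
import math

def pipeline_temp(x_km, t_in=90.0, t_amb=25.0, decay_km=60.0):
    # Lumped exponential cooling along the line:
    # T(x) = T_amb + (T_in - T_amb) * exp(-x / decay_km)
    return t_amb + (t_in - t_amb) * math.exp(-x_km / decay_km)

def reheat_distance(t_min=63.0, t_in=90.0, t_amb=25.0, decay_km=60.0):
    # Distance at which the oil cools to t_min: the farthest apart two
    # heating stations could be spaced under this illustrative model
    return decay_km * math.log((t_in - t_amb) / (t_min - t_amb))
```

In a real flow assurance study the decay length follows from the overall heat transfer coefficient, flow rate, and heat capacity, but the station-spacing logic is the same: reheat before T(x) crosses the WAT margin.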
Procedia PDF Downloads 73
815 Wave Powered Airlift Pump for Primarily Artificial Upwelling
Authors: Bruno Cossu, Elio Carlo
Abstract:
The invention (patent pending) relates to the field of devices aimed at harnessing wave energy (WEC), especially for artificial upwelling, forced downwelling, and the production of compressed air. In its basic form, the pump consists of a hydro-pneumatic machine, driven by wave energy, characterised by the fact that it has no moving mechanical parts and is made up of only two structural components: a hollow body, which is open at the bottom to the sea and partially immersed in sea water, and a tube, both joined together to form a single body. The shape of the hollow body is like a mushroom whose cap and stem are hollow; the stem is open at both ends and the lower part of its surface is crossed by holes; the tube is external and coaxial to the stem and is joined to it so as to form a single body. This shape of the hollow body and the type of connection to the tube allow the pump to operate simultaneously as an air compressor (OWC) on the cap side and as an airlift on the stem side. The pump can be implemented in four versions, each of which provides different variants and methods of implementation: 1) for the artificial upwelling of cold, deep ocean water; 2) for the lifting and transfer of these waters to the place of use (above all, fish farming plants), even if kilometres away; 3) for the forced downwelling of surface sea water; 4) for the forced downwelling of surface water, its oxygenation, and the simultaneous production of compressed air.
The transfer of the deep water or the downwelling of the raised surface water (pump versions 2 and 3 above) is obtained by making the water raised by the airlift flow into the upper inlet of another pipe, internal or adjoined to the airlift; the downwelling of raised surface water, its oxygenation, and the simultaneous production of compressed air (pump version 4) are obtained by installing a venturi tube on the upper end of the pipe, whose restricted section is connected to the external atmosphere, so that it also operates like a hydraulic air compressor (trompe). Furthermore, by combining one or more pumps for the upwelling of cold, deep water with one or more pumps for the downwelling of the warm surface water, the system can be used in an Ocean Thermal Energy Conversion plant to supply the cold and warm water required for its operation, thus allowing the plant to use, at no additional cost, not only the mechanical energy of the waves for the purposes indicated in points 1 to 4, but also the thermal energy of the marine water treated in the process. Keywords: air lifted upwelling, fish farming plant, hydraulic air compressor, wave energy converter
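The airlift action on the stem side depends, as in conventional airlift pumps, on how much of the riser is submerged; a minimal sketch of the submergence ratio that governs such pumps (the ~0.4 threshold is a general rule of thumb for conventional airlifts, not a parameter of the invention):

```python
def submergence_ratio(depth_m, lift_m):
    # Fraction of the riser below the water line; conventional airlift
    # pumps typically need a ratio of roughly 0.4 or more to lift water
    # effectively, with efficiency improving as the ratio rises
    return depth_m / (depth_m + lift_m)

ratio = submergence_ratio(6.0, 4.0)  # riser 6 m submerged, lifting 4 m above the surface
```

For the described device the "air injection" comes from the wave-driven OWC cap rather than an external compressor, but the same submergence geometry limits how high the aerated column can be lifted.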
Procedia PDF Downloads 148