Search results for: basic modal displacement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4244


494 Vibrational Spectra and Nonlinear Optical Investigations of a Chalcone Derivative (2E)-3-[4-(Methylsulfanyl) Phenyl]-1-(3-Bromophenyl) Prop-2-En-1-One

Authors: Amit Kumar, Archana Gupta, Poonam Tandon, E. D. D’Silva

Abstract:

Nonlinear optical (NLO) materials are the key materials for the fast processing of information and optical data storage applications. In the last decade, materials showing nonlinear optical properties have been the object of increasing attention from both experimental and computational points of view. Chalcones are one of the most important classes of cross-conjugated NLO chromophores; they are reported to exhibit good SHG efficiency and ultrafast optical nonlinearities, and they crystallize easily. The basic structure of chalcones is based on the π-conjugated system in which two aromatic rings are connected by a three-carbon α,β-unsaturated carbonyl system. Due to the overlap of π orbitals, delocalization of the electronic charge distribution leads to a high mobility of the electron density. On a molecular scale, the extent of charge transfer across the NLO chromophore determines the level of SHG output. Hence, functionalizing both ends of the π-bond system with appropriate electron donor and acceptor groups can enhance the asymmetric electronic distribution in either or both the ground and excited states, leading to an increased optical nonlinearity. In this research, an experimental and theoretical study on the structure and vibrations of (2E)-3-[4-(methylsulfanyl)phenyl]-1-(3-bromophenyl)prop-2-en-1-one (3Br4MSP) is presented. The FT-IR and FT-Raman spectra of the NLO material in the solid phase have been recorded. Density functional theory (DFT) calculations at the B3LYP level with the 6-311++G(d,p) basis set were carried out to study the equilibrium geometry, vibrational wavenumbers, infrared absorbance, and Raman scattering activities. The interpretation of vibrational features (normal mode assignments, for instance) is greatly aided by DFT calculations, which provide a quantum-mechanical description of the electronic energies and forces involved.
Perturbation theory allows one to obtain the vibrational normal modes by estimating the derivatives of the Kohn–Sham energy with respect to atomic displacements. The molecular hyperpolarizability β plays a central role in the NLO properties, and a systematic study of β has been carried out. Furthermore, the first-order hyperpolarizability (β) and the related properties, such as the dipole moment (μ) and polarizability (α), of the title molecule are evaluated by the Finite Field (FF) approach. The electronic α and β of the studied molecule are 41.907×10⁻²⁴ and 79.035×10⁻²⁴ e.s.u., respectively, indicating that 3Br4MSP can be used as a good nonlinear optical material.
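The Finite Field idea mentioned in the abstract can be sketched numerically: apply small static fields, record the dipole moment, and extract α and β from finite differences. The sketch below uses a synthetic one-dimensional dipole function, not data from the paper; in practice each μ(F) value would come from a separate DFT run.

```python
# Illustrative finite-field (FF) extraction of the polarizability (alpha) and
# first-order hyperpolarizability (beta) along one axis. The dipole response
# mu(F) is a synthetic toy function with known coefficients, NOT data from the
# paper; real FF calculations obtain mu(F) from DFT runs at several fields.

def dipole(F, mu0=1.0, alpha=2.5, beta=0.8):
    # Taylor expansion of the field-dependent dipole moment:
    # mu(F) = mu0 + alpha*F + (1/2)*beta*F**2 + ...
    return mu0 + alpha * F + 0.5 * beta * F**2

def finite_field(mu, F=0.001):
    """Central-difference estimates of alpha and beta from three field points."""
    alpha = (mu(F) - mu(-F)) / (2 * F)             # first derivative d(mu)/dF
    beta = (mu(F) - 2 * mu(0.0) + mu(-F)) / F**2   # second derivative
    return alpha, beta

alpha, beta = finite_field(dipole)
print(round(alpha, 6), round(beta, 6))  # recovers the toy alpha and beta
```

With a purely quadratic toy response the central differences are exact up to rounding; for a real molecule the field step must balance truncation against numerical noise.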

Keywords: DFT, MEP, NLO, vibrational spectra

Procedia PDF Downloads 221
493 Human Coronary Sinus Venous System as a Target for Clinical Procedures

Authors: Wiesława Klimek-Piotrowska, Mateusz K. Hołda, Mateusz Koziej, Katarzyna Piątek, Jakub Hołda

Abstract:

Introduction: The coronary sinus venous system (CSVS), which has always been overshadowed by the coronary arterial tree, has recently begun to attract more attention. Since it is a target for clinicians, knowledge of its anatomy is essential. Cardiac resynchronization therapy, catheter ablation of cardiac arrhythmias, defibrillation, perfusion therapy, mitral valve annuloplasty, targeted drug delivery, and retrograde cardioplegia administration are commonly used therapeutic methods involving the CSVS. The great variability in the course of the coronary veins and their tributaries makes the diagnostic and therapeutic processes difficult. Our aim was to investigate the detailed anatomy of the most common clinically used CSVS structures: the coronary sinus with its ostium, the great cardiac vein, the posterior vein of the left ventricle, the middle cardiac vein, and the oblique vein of the left atrium. Methodology: This is a prospective study of 70 randomly selected autopsied hearts dissected from adult humans (Caucasian) aged 50.1±17.6 years (24.3% females) with BMI=27.6±6.7 kg/m2. The morphology of the CSVS was assessed and precise measurements were performed. Results: The coronary sinus (CS) with its ostium was present in all hearts. The mean CS ostium diameter was 9.9±2.5 mm. The ostium was covered by its valve in 87.1% of cases, with a mean valve height of 5.1±3.1 mm. The mean percentage coverage of the CS ostium by the valve was 56%. The Vieussens valve was present in 71.4% of hearts and was unicuspid in 70%, bicuspid in 26%, and tricuspid in 4% of them. The great cardiac vein was present in all cases. The oblique vein of the left atrium was observed in 84.3% of hearts, with a mean length of 20.2±9.3 mm and a mean ostium diameter of 1.4±0.9 mm.
The average length of the CS was 31.1±9.5 mm (from the CS ostium to the Vieussens valve) or 28.9±10.1 mm (from the CS ostium to the ostium of the oblique vein of the left atrium), and both were correlated with the heart weight (r=0.47; p=0.00 and r=0.38; p=0.006, respectively). In 90.5% of cases the ostium of the oblique vein of the left atrium was located proximally to the Vieussens valve; in the remaining cases it was located distally. The middle cardiac vein was present in all hearts, and its valve was noticed in more than half of all cases (52.9%). The posterior vein of the left ventricle was observed in 91.4% of cases. Conclusions: The CSVS is vastly variable, and none of the basic heart parameters is a good predictor of its morphology. The Vieussens valve could be a significant obstacle during CS cannulation. Caution should be exercised in this area to avoid coronary sinus perforation. Because the oblique vein of the left atrium is present more often than the Vieussens valve, the vein orifice is more useful in determining the CS length.

Keywords: cardiac resynchronization therapy, coronary sinus, Thebesian valve, Vieussens valve

Procedia PDF Downloads 302
492 Impinging Acoustics Induced Combustion: An Alternative Technique to Prevent Thermoacoustic Instabilities

Authors: Sayantan Saha, Sambit Supriya Dash, Vinayak Malhotra

Abstract:

Efficient development of propulsive systems is an area of major interest and concern in the aerospace industry. Combustion forms the most reliable and basic form of propulsion for ground and space applications. The generation of a large amount of energy from a small volume relates mostly to flaming combustion. This study deals with instabilities associated with flaming combustion. Combustion is always accompanied by acoustics, be it external or internal. Chemical-propulsion-oriented rockets and space systems are well known to encounter acoustic instabilities. Acoustics brings in changes in inter-energy conversion and alters the reaction rates. Modified heat fluxes, owing to wall temperature, reaction rates, and non-linear heat transfer, are observed. Thermoacoustic instabilities significantly reduce combustion efficiency, leading to uncontrolled liquid rocket engine performance and serious hazards to systems and the associated testing facilities; enormous resources are lost, and every year a substantial amount of money is spent to prevent them. The present work attempts to fundamentally understand the mechanisms governing thermoacoustic combustion in a liquid rocket engine using a simplified experimental setup comprising a butane cylinder and an impinging acoustic source. A rocket engine produces sound pressure levels in excess of 153 dB; the RL-10 engine generates noise of 180 dB at its base. Systematic studies are carried out for varying fuel flow rates and acoustic levels, and observations are made on the flames. The work is expected to yield a good physical insight into the development of acoustic devices that, when coupled with present propulsive devices, could effectively enhance combustion efficiency, leading to better and safer missions.
The results would be utilized to develop impinging acoustic devices that direct sound onto the combustion chambers, leading to stable combustion and thus improving specific fuel consumption and specific impulse, reducing emissions, and enhancing performance and fire safety. The results can be effectively applied to terrestrial and space applications.

Keywords: combustion instability, fire safety, improved performance, liquid rocket engines, thermoacoustics

Procedia PDF Downloads 143
491 Economic Impact of Drought on Agricultural Society: Evidence Based on a Village Study in Maharashtra, India

Authors: Harshan Tee Pee

Abstract:

Climate elements include surface temperatures, rainfall patterns, humidity, type and amount of cloudiness, air pressure, and wind speed and direction. A change in one element can have an impact on the regional climate. Scientific predictions indicate that global climate change will increase the number of extreme events, leading to more frequent natural hazards. Global warming is likely to intensify the risk of drought in certain parts of the world while leading to increased rainfall in other parts. Drought is a slow-advancing disaster and a creeping phenomenon that accumulates slowly over a long period of time. Droughts are naturally linked with aridity, but they occur over most parts of the world (both wet and humid regions) and create severe impacts on agriculture, basic household welfare, and ecosystems. Drought conditions occur at least every three years in India, which is among the most drought-vulnerable countries in the world. The economic impacts resulting from extreme environmental events and disasters are huge, as a result of the disruption of many economic activities. The focus of this paper is to develop a comprehensive understanding of the distributional impacts of disaster, especially the impact of drought on agricultural production and income, through a panel study (covering the drought year and one year after the drought) in Raikhel village, Maharashtra, India. The major findings of the study indicate that the cultivated area as well as the number of cultivating households decreased after the drought, indicating a shift in livelihood: households moved from agriculture to non-agriculture. The decline in the gross cropped area and in the production of various crops was driven by the negative income from these crops in the previous agricultural season. All the landholding categories of households except landlords had negative income in the drought year, and income disparities between households were also higher in that year.
In the drought year, the cost of cultivation was higher for all the landholding categories due to increased irrigation and input costs. In the drought year, agricultural products (50 per cent of the total) were used for household consumption rather than being sold in the market. It is evident from the study that livelihoods based on natural resources became less attractive to people due to the risk involved, and people were moving to lower-risk livelihoods for their sustenance.

Keywords: climate change, drought, agriculture economics, disaster impact

Procedia PDF Downloads 118
490 The Ideal Memory Substitute for Computer Memory Hierarchy

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Computer system components such as the CPU, the controllers, and the operating system work together as a team, and storage or memory is the essential part of this team apart from the processor. The memory and storage system, including processor caches, main memory, and storage, forms the basic storage component of a computer system. The characteristics of the different types of storage are inherent in the design and the technology employed in manufacturing. These characteristics define the speed, compatibility, cost, volatility, and density of the various storage types. Most computers rely on a hierarchy of storage devices for performance. The effective and efficient use of the memory hierarchy of the computer system is therefore the single most important aspect of computer system design and use. The memory hierarchy is becoming a fundamental performance and energy bottleneck, due to the widening gap between the increasing demands of modern computer applications and the limited performance and energy efficiency provided by traditional memory technologies. With the dramatic developments in computer systems, computer storage has had a difficult time keeping up with processor speed. Computer architects therefore face constant challenges in developing high-speed, high-performance computer storage that is energy-efficient, cost-effective, and reliable enough to intercept processor requests. It is very clear that substantial advancements in redesigning the existing physical and logical memory structures to meet the latest processor potential are crucial. This research work investigates the importance of the computer memory (storage) hierarchy in the design of computer systems. The constituent storage types of today's hierarchy were investigated, looking at the design technologies and how these technologies affect the memory characteristics: speed, density, stability, and cost.
The investigation considered how these characteristics could best be harnessed for the overall efficiency of the computer system. The research revealed that the best single type of storage, which we refer to as the ideal memory, is a logical single physical memory that would combine the best attributes of each memory type making up the memory hierarchy. It is a single memory with an access speed as high as that of CPU registers, combined with the highest storage capacity and excellent stability in the presence or absence of power, as found in magnetic and optical disks (as against volatile DRAM), and yet with a cost far below that of expensive SRAM. The research work suggests that overcoming these barriers may mean that memory manufacturing takes a total deviation from the present technologies and adopts one that overcomes the challenges associated with the traditional memory technologies.
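Why a hierarchy of small-fast and large-slow stores approximates the ideal memory described above can be made concrete with the standard average-memory-access-time (AMAT) calculation. The latencies and hit rates below are illustrative assumptions, not measurements from the paper.

```python
# Average memory access time (AMAT) across a storage hierarchy. Each level's
# latency is paid only by the fraction of requests that miss all faster levels,
# so most accesses complete at near-register speed while capacity comes from
# the slow levels. Latencies (ns) and hit rates are illustrative assumptions.

levels = [  # (name, access_time_ns, hit_rate)
    ("L1 cache (SRAM)", 1.0, 0.95),
    ("L2 cache (SRAM)", 4.0, 0.90),
    ("Main memory (DRAM)", 60.0, 0.999),
    ("Disk", 5_000_000.0, 1.0),  # last level services everything that reaches it
]

def amat(levels):
    """Expected access time: each level's cost is paid by requests reaching it."""
    total, reach_prob = 0.0, 1.0
    for _, t, hit in levels:
        total += reach_prob * t      # every request reaching this level pays t
        reach_prob *= (1.0 - hit)    # misses fall through to the next level
    return total

print(f"AMAT = {amat(levels):.2f} ns")
```

Even with these generous hit rates, the disk term dominates the total, which is the performance gap the abstract's "ideal memory" would close.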

Keywords: cache, memory-hierarchy, memory, registers, storage

Procedia PDF Downloads 164
489 Introducing Principles of Land Surveying by Assigning a Practical Project

Authors:

Abstract:

A practical project is used in an engineering surveying course to expose sophomore and junior civil engineering students to several important issues related to the use of basic principles of land surveying. The project, which is the design of a two-lane rural highway connecting two arbitrary points, requires students to draw the profile of the proposed highway along with the existing ground level. The areas of all cross-sections are then computed to enable quantity computations between them. Lastly, the mass-haul diagram is drawn with all important parts and features shown on it for clarity. At the beginning, students faced challenges getting started on the project. They had to spend time and effort thinking of the best way to proceed and how the work would flow. It was even more challenging when they had to visualize images of cut, fill, and mixed cross-sections in three dimensions before they could draw them and complete the necessary computations. These difficulties were somewhat overcome with the help of the instructor and thorough discussions among team members and/or between different teams. The method of assessment used in this study was a well-prepared end-of-semester questionnaire distributed to students after the completion of the project and the final exam. The survey contained a wide spectrum of questions, from students' learning experience when this course development was implemented to students' satisfaction with the class instruction provided to them and the instructor's competency in presenting the material and helping with the project. It also covered the adequacy of the project as a sample of a real-life civil engineering application and whether implementing this idea added any excitement. At the end of the questionnaire, students had the chance to provide their constructive comments and suggestions for future improvements of the land surveying course. Outcomes will be presented graphically and in a tabular format.
Graphs provide a visual explanation of the results; tables, on the other hand, summarize numerical values for each student along with some descriptive statistics, such as the mean, standard deviation, and coefficient of variation for each student and each question. In addition to gaining experience in teamwork, communication, and customer relations, students felt the benefit of being assigned such a project. They noticed the beauty of the practical side of civil engineering work and how theories are utilized in real-life engineering applications. Students even recommended that such a project be assigned every time this course is offered, so future students can have the same learning opportunity they had.
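The earthwork computations the project requires can be sketched in a few lines: cross-section areas at stations along the alignment, volumes by the average-end-area method, and the cumulative totals that form the mass-haul diagram. The station spacing and areas below are made-up sample values, not data from the course.

```python
# Sketch of the project's quantity computations: cut (+) and fill (-) areas at
# stations, average-end-area volumes between stations, and the running total
# that gives the mass-haul diagram ordinates. All numbers are invented samples.

stations = [0, 100, 200, 300, 400]        # m along the proposed highway
areas = [12.0, 8.0, -5.0, -10.0, 4.0]     # m^2; cut positive, fill negative

def mass_haul(stations, areas):
    """Return cumulative earthwork volumes (m^3) at each station."""
    ordinates = [0.0]
    for i in range(1, len(stations)):
        dist = stations[i] - stations[i - 1]
        vol = 0.5 * (areas[i - 1] + areas[i]) * dist  # average-end-area volume
        ordinates.append(ordinates[-1] + vol)
    return ordinates

print(mass_haul(stations, areas))
# Where the curve returns to zero, cut and fill balance over that stretch.
```

Plotting these ordinates against station gives the mass-haul diagram the students draw by hand.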

Keywords: land surveying, highway project, assessment, evaluation, descriptive statistics

Procedia PDF Downloads 229
488 The Development of the Geological Structure of the Bengkulu Fore Arc Basin, Western Edge of Sundaland, Sumatra, and Its Relationship to Hydrocarbon Trapping Mechanism

Authors: Lauti Dwita Santy, Hermes Panggabean, Syahrir Andi Mangga

Abstract:

The Bengkulu Basin is part of the Sunda Arc system, a classic convergent-type margin that occurs around the southern rim of the Eurasian continental (Sundaland) plate. The basin is located between the deep-sea trench (Mentawai outer-arc high) and the volcanic/magmatic arc of the Barisan Mountains Range. To the northwest it is bounded by the Padang High, to the northeast by the Barisan Mountains (Sumatra Fault Zone), to the southwest by the Mentawai Fault Zone, and to the southeast by the Semangko High/Sunda Strait. The stratigraphic succession and tectonic development can be broadly divided into five stages/periods, i.e., Late Jurassic-Early Cretaceous, Late Eocene-Early Oligocene, Late Oligocene-Early Miocene, Middle Miocene-Late Miocene, and Pliocene-Pleistocene, which are mainly controlled by the development of subduction activity. The Pre-Tertiary basement consists of sedimentary rocks and shallow-water limestone, calcareous mudstone, cherts, and tholeiitic volcanic rocks, Late Jurassic to Early Cretaceous in age. Sedimentation in this basin depends on the relief of the Pre-Tertiary basement (Woyla Terrane) and occurred in two stages, i.e., a transgressive stage during the latest Oligocene-early Middle Miocene (Seblat Formation) and a regressive stage during the latest Middle Miocene-Pleistocene (Lemau, Simpangaur, and Bintunan Formations). Faulting was more intensive in the Pre-Tertiary basement than in the overlying cover of Tertiary rocks. Two main fault trends can be distinguished: Northwest-Southeast faults and Northeast-Southwest faults. The NW-SE faults (Ketaun), commonly laterally persistent, are interpreted to be part of the Sumatran Fault System. They commonly form the boundaries of the Pre-Tertiary basement highs and are therefore one of the fault elements controlling the geometry and development of the Tertiary sedimentary basins. The Northeast-Southwest faults formed a conjugate set to the Northwest-Southeast faults in the earliest Tertiary and were reactivated during the Plio-Pleistocene in a compressive mode with subsequent dextral displacement.
Block faulting across these two sets of faults is related to approximately North-South compression in Paleogene time and produced a series of elongate basins separated by basement highs in the back-arc and fore-arc regions. The Bengkulu Basin is interpreted as having evolved from a pull-apart feature in the area southwest of the main Sumatra Fault System, related to NW-SE-trending dextral shear. A Pyrolysis Yield (PY) vs. Total Organic Carbon (TOC) diagram shows that the Seblat and Lemau Formations are oil- and gas-prone, with source-rock quality ranging from excellent to good (Lemau Formation) and from fair to poor (Seblat Formation). The fine-grained carbonaceous sediments of the Seblat and Lemau Formations act as source rocks, the coarse-grained and carbonate sediments of the Seblat and Lemau Formations as reservoir rocks, and the claystone beds in the Seblat and Lemau Formations as cap rock. Source-rock maturity ranges from late immature to early mature, with kerogen types II and III (Seblat Formation), and from late immature to post-mature, with kerogen types I and III (Lemau Formation). The burial history shows depths of up to 2,500 m, with paleotemperatures reaching 80°C. Trapping mechanisms developed during the Oligo-Miocene and Middle Miocene, mainly in the block-faulting system.

Keywords: fore arc, Bengkulu, Sumatra, Sundaland, hydrocarbon, trapping mechanism

Procedia PDF Downloads 558
487 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis was carried out of the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements which modify or reinforce a variable character in the lexical unit they apply to. Therefore, intensifiers appear with other lexical items, such as adverbs, adjectives, and verbs, and infrequently with nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), there are many lexical items which can operate as intensifiers. The group of intensifiers is admittedly one of the most rapidly changing elements in the language. From a linguistic point of view, a special group of intensifiers is particularly interesting: the so-called negative emotive intensifiers, which, on their own, without context, have semantic content that can be associated with negative emotion, but in particular cases may function as intensifiers (e.g. borzasztóan jó ‘awfully good’, which means ‘excellent’). Despite their special semantic features, negative emotive intensifiers are scarcely examined in the literature on the basis of large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, the authors exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles, produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame.
Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the ‘magyarlanc’ NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., how they change from lexical elements to grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). What is more, it was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
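The frequency-and-collocation step described above can be sketched minimally: find occurrences of a negative emotive intensifier and count the words it modifies. The toy sentences and the one-entry lexicon are illustrative, not material from the Hungarian Historical Corpus; the real pipeline would run on lemmatized, POS-tagged text.

```python
# Minimal sketch of the frequency/collocation analysis: count occurrences of a
# negative emotive intensifier and tally its right-hand collocates (the words
# it modifies). Sentences and lexicon entry are invented illustrations.

from collections import Counter

intensifiers = {"borzasztóan"}  # 'awfully' (negative emotive intensifier)

sentences = [
    "a film borzasztóan jó volt",   # 'the film was awfully good'
    "az idő borzasztóan rossz",     # 'the weather is awfully bad'
    "borzasztóan jó ötlet",         # 'awfully good idea'
]

collocates = Counter()
freq = 0
for sent in sentences:
    tokens = sent.split()
    for i, tok in enumerate(tokens):
        if tok in intensifiers:
            freq += 1
            if i + 1 < len(tokens):
                collocates[tokens[i + 1]] += 1  # word the intensifier modifies

print(freq, collocates.most_common(1))
```

Tracking how such collocate distributions widen across time slices of the corpus is one way to observe the delexicalization the abstract describes.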

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 233
486 Inner and Outer School Contextual Factors Associated with Poor Performance of Grade 12 Students: A Case Study of an Underperforming High School in Mpumalanga, South Africa

Authors: Victoria L. Nkosi, Parvaneh Farhangpour

Abstract:

Often a Grade 12 certificate is perceived as a passport to tertiary education and the minimum requirement to enter the world of work. In spite of its importance, many students do not reach this milestone in South Africa. It is important to find out why so many students still fail despite the transformation of the education system in the post-apartheid era. Given the complexity of education and its context, this study adopted a case study design to examine one historically underperforming high school in Bushbuckridge, Mpumalanga Province, South Africa in 2013. The aim was to gain an understanding of the inner and outer school contextual factors associated with the high failure rate among Grade 12 students. Government documents and reports were consulted to identify factors in the district and the village surrounding the school, and a student survey was conducted to identify school, home, and student factors. A randomly sampled half of the population of Grade 12 students (53) participated in the survey, and the quantitative data were analyzed using descriptive statistical methods. The findings showed that a host of factors is at play. The school is located in a village within a municipality which has been one of the three poorest municipalities in South Africa and has had the lowest Grade 12 pass rate in Mpumalanga province. Moreover, over half of the students' families are headed by single parents, 43% are unemployed, and the majority have a low level of education. In addition, most families (83%) do not have basic study materials such as a dictionary, books, tables, and chairs. A significant number of students (70%) are over-aged (19+ years old); close to half of them (49%) are grade repeaters. The school itself lacks essential resources, namely computers, science laboratories, a library, and enough furniture and textbooks. Moreover, teaching and learning are negatively affected by the teachers' occasional absenteeism, inadequate lesson preparation, and poor communication skills.
Overall, the continuous low performance of students in this school mirrors the vicious circle of multiple negative conditions present within and outside of the school. The complexity of factors associated with the underperformance of Grade 12 students in this school calls for a multi-dimensional intervention from government and stakeholders. One important intervention should be the placement of over-aged students and grade-repeaters in suitable educational institutions for the benefit of other students.

Keywords: inner context, outer context, over-aged students, vicious cycle

Procedia PDF Downloads 201
485 Method for Controlling the Groundwater Polluted by the Surface Waters through Injection Wells

Authors: Victorita Radulescu

Abstract:

Introduction: The optimum exploitation of agricultural land in the presence of an aquifer polluted by surface sources requires close monitoring of the groundwater level, both in periods of intense irrigation and in the absence of irrigation, in times of drought. Currently in Romania, in the southern part of the country, the Baragan area, many agricultural lands are confronted with the risk of groundwater pollution in the absence of systematic irrigation, correlated with climate change. Basic Methods: The non-steady flow of groundwater in an aquifer can be described by Boussinesq's partial differential equation. The finite element method was used, applied to the porous medium through the water mass balance equation. By properly structuring the initial and boundary conditions, the flow in drainage or injection well systems may be modeled, according to the period of irrigation or prolonged drought. The boundary conditions consist of the groundwater levels required at the margins of the analyzed area, in conformity with the actual pollutant emissaries, following the double-step method. Major Findings/Results: The drainage condition is equivalent to operating regimes on the two or three rows of wells, with negative flow rates, so as to ensure the pollutant transport, modeled with variable flow in groups of two adjacent nodes. In order to keep the level of the water table in accordance with the real constraints, its top level must, for example, be restricted below an imposed value at each node. The objective function consists of a sum of the absolute values of the differences of the infiltration flow rates, increased by a large penalty factor when there are positive values of pollutant. Under these conditions, a balanced structure of the pollutant concentration is maintained in the groundwater. The parameters modified during the optimization process are the spatial coordinates and the drainage flows through the wells.
Conclusions: The presented calculation scheme was applied to an area having a cross-section of 50 km between two emissaries with various levels of altitude and different values of pollution. The input data were correlated with measurements made in situ, such as the level of the bedrock, the grain size of the field, the slope, etc. This method of calculation can also be extended to determine the variation of the groundwater in the aquifer following flood wave propagation in the emissaries.

Keywords: environmental protection, infiltrations, numerical modeling, pollutant transport through soils

Procedia PDF Downloads 155
484 Graduates Construction of Knowledge and Ability to Act on Employable Opportunities

Authors: Martabolette Stecher

Abstract:

Introduction: How are knowledge of and the ability to act on employable opportunities constructed among students and graduates of higher education? This question has drawn much attention from researchers, governments, and universities in Denmark, since there has been an increase in the rate of unemployment among graduates from higher education. The fact that more than ten thousand graduates from higher education are without the opportunity to get a job in these years has a tremendous impact upon the social economy in Denmark. Every time a student graduates from higher education and becomes unemployed, the effect on that person's chances of getting a job can be traced many years ahead. This means that the tremendous rate of graduate unemployment implies a decrease in employment and lost prosperity in Denmark on a scale of billions of Danish kroner. Basic methodology: The present study investigates the construction of knowledge of and the ability to act upon employable opportunities among students and graduates of higher education in Denmark through a literature review as well as a preliminary study of students from Aarhus University. Fifteen students from the candidate (Master's) programme in drama engaged in an introductory programme at the beginning of their candidate studies, which included three workshops focusing upon the more personal matters of their studies and life. They reflected upon this process during the intervention and afterwards in a semi-structured interview. Concurrently, a thorough literature review delivered key concepts for the exploration of the research question. Major findings of the study: It is difficult to find a single definition of what employability encompasses; hence it is difficult to form an overall picture of how to incorporate the concept. Present theories of employability have focused upon the competencies which students and graduates are to develop in order to become employable.
In recent years there has been an emphasis upon the mechanisms which support graduates in trusting themselves and developing their self-efficacy in terms of getting a sustainable job. However, there has been little or no focus in the literature upon how students and graduates from higher education construct knowledge about, and the ability to act upon, employable opportunities involving networks of actors, both material and immaterial, and meaningful relations, as they develop their enterprising behaviour to achieve employment. Actor-network theory combined with theories of entrepreneurship education suggests an alternative strategy to focus upon when explaining sustainable ways of creating employability among graduates. The preliminary study also supports this theory, suggesting that it is difficult to single out one or even several factors of importance; rather, the effect of a multitude of networked actors should be highlighted. Concluding statement: This study is the first step of a PhD study investigating this problem in Denmark and the USA in the period 2015-2019.

Keywords: employability, graduates, action, opportunities

Procedia PDF Downloads 198
483 A (Morpho) Phonological Typology of Demonstratives: A Case Study in Sound Symbolism

Authors: Seppo Kittilä, Sonja Dahlgren

Abstract:

In this paper, a (morpho)phonological typology of proximal and distal demonstratives is proposed. Only the most basic proximal (‘this’) and distal (‘that’) forms have been considered; potentially more fine-grained distinctions based on proximity are not relevant to our discussion, nor are the other functions the discussed demonstratives may have. The sample comprises 82 languages that represent the linguistic diversity of the world’s languages, although the study is not based on a systematic sample. Four major types are distinguished: (1) Vowel type: front vs. back, high vs. low vowel; (2) Consonant type: front vs. back consonants; (3) Additional-element type; (4) Varia. The proposed types can further be subdivided according to whether the attested difference concerns only, e.g., vowels, or whether there are also other changes. For example, the first type comprises both languages such as Betta Kurumba, where only the vowel changes (i ‘this’, a ‘that’), and languages like Alyawarra (nhinha vs. nhaka), where there are also other changes. In the second type, demonstratives are distinguished based on whether the consonants are front or back; typically, front consonants (e.g., labial and dental) appear on proximal demonstratives and back consonants (such as velar or uvular consonants) on distal demonstratives. An example is provided by Bunaq, where bari marks ‘this’ and baqi ‘that’. In the third type, the distal demonstrative typically has an additional element, making it longer in form than the proximal one (e.g., Òko òne ‘this’, ònébé ‘that’), but the type also comprises languages where the distal demonstrative is simply phonologically longer (e.g., Ngalakan nu-gaʔye vs. nu-gunʔbiri). Finally, the last type comprises cases that do not fit into the three other types; a number of strategies are used by the languages of this group.
The first two types can be explained by iconicity; front or high phonemes appear on the proximal demonstratives, while back/low phonemes are related to distal demonstratives. This means that proximal demonstratives are pronounced at the front and/or high part of the oral cavity, while distal demonstratives are pronounced lower and further back, which reflects the proximal/distal nature of their referents in the physical world. The first type is clearly the most common in our data (40/82 languages), which suggests a clear association with iconicity. Our findings support earlier findings that proximal and distal demonstratives have an iconic phonemic manifestation. For example, it has been argued that /i/ is related to smallness (small distance). Consonants, however, have not been considered before, or no systematic correspondences have been discovered. The third type, in turn, can be explained by markedness; the distal element is more marked than the proximal demonstrative. Moreover, iconicity is also relevant here: some languages clearly use less linguistic substance for referring to entities close to the speaker, which is manifested in the longer (morpho)phonological form of the distal demonstratives. The fourth type contains different kinds of cases, and systematic generalizations are hard to make.
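The iconic vowel pattern of type (1) can be illustrated with a toy check. This is not the authors' method; the vowel classes and the function name are deliberately simplified assumptions for illustration only.

```python
# Toy illustration (not from the paper): does a proximal/distal pair fit the
# iconic vowel pattern of type (1), i.e. a front/high vowel in the proximal
# form and a back/low vowel in the distal form? Vowel inventories are
# drastically simplified.
FRONT = set("ie")   # front/high vowels (simplified)
BACK = set("aou")   # back/low vowels (simplified)

def fits_vowel_type(proximal: str, distal: str) -> bool:
    """True if the first vowel of the proximal form is front/high
    and the first vowel of the distal form is back/low."""
    prox = [c for c in proximal if c in FRONT | BACK]
    dist = [c for c in distal if c in FRONT | BACK]
    return bool(prox and dist) and prox[0] in FRONT and dist[0] in BACK

print(fits_vowel_type("i", "a"))           # Betta Kurumba pair -> True
print(fits_vowel_type("nhinha", "nhaka"))  # Alyawarra pair -> True
```

A real typological coding would of course use phonological feature tables rather than orthographic letters, but the logic of the classification is the same.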

Keywords: demonstratives, iconicity, language typology, phonology

Procedia PDF Downloads 153
482 Restriction on the Freedom of Economic Activity in the Polish Energy Law

Authors: Zofia Romanowska

Abstract:

Due to the government's decision to strengthen energy security and to advance the implementation of the European Union's common energy policy, the Polish energy market has recently been undergoing significant changes. In the face of these, it is necessary to ask in which direction the Polish energy-rationing sector is heading, how wide the powers of the state are, and whether the real regulator of energy projects in Poland is not in fact the European Union itself. In order to determine the role of the state as a regulator of the energy market, the study analyses the basic instruments of regulation, i.e., the licenses, permits and permissions to conduct various activities related to the energy market, such as the production and sale of liquid fuels or concessions for trade in natural gas. Bearing in mind that Polish law is part of the broadly understood European Union energy policy, the legal solutions of neighbouring countries are also researched, including those of Germany, a country which plays a key role in shaping EU policies. A key role in the study is played by the correct interpretation of the new legislation modifying the current wording of the Energy Law Act, such as obliging entities engaged in the production of and trade in liquid fuels (including abroad) to meet a number of additional licensing requirements and to provide information to the state about the conducted business. Going beyond the legal framework for energy rationing, the study also includes a legal and economic analysis of public and private goods within the energy sector and delves into the subject of effective remedies. The research drew attention to the relationships between the progressive rationing introduced by the legislator and the rearrangement rules prevailing on the Polish energy market, which led to the introduction of greater transparency in the sector.
The studies lead to the initial conclusion that currently, despite the proclaimed idea of liberalizing the oil and gas market and opening it to a larger number of entities, the newly implemented changes will tighten the process of issuing concessions and controlling their execution, guaranteeing entities greater security of energy supply. In the long term, the effect of the introduced legislative solutions will be a reduction in the number of entities on the energy market. The companies that meet the requirements imposed on them by the new regulation will, in order to cope with the profitability of the business, in turn increase prices for their services, which will have an impact on consumers' budgets.

Keywords: license, energy law, energy market, public goods, regulator

Procedia PDF Downloads 246
481 MARISTEM: A COST Action Focused on Stem Cells of Aquatic Invertebrates

Authors: Arzu Karahan, Loriano Ballarin, Baruch Rinkevich

Abstract:

Marine invertebrates, a highly diverse group of multicellular organisms, exhibit phenomena that are either not found or highly restricted in vertebrates. These include budding, fission, fusion of ramets, and high regenerative power, such as the ability to create whole new organisms from tiny parental fragments, many of which are controlled by totipotent, pluripotent, and multipotent stem cells. Thus, there is much that can be learned from these organisms on both the practical and evolutionary levels, recalling the words often attributed to Darwin: “It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change”. The ‘stem cell’ notion highlights a cell that has the ability to divide continuously and differentiate into various progenitors and daughter cells. In vertebrates, adult stem cells are rare cells defined as lineage-restricted (multipotent at best), with tissue- or organ-specific activities, that are located in defined niches and regulate the machinery of homeostasis, repair, and regeneration. They are usually categorized by their morphology, tissue of origin, plasticity, and potency. The above description does not always hold when comparing vertebrate stem cells with those of marine invertebrates, which display wider ranges of plasticity and diversity at both the taxonomic and cellular levels. While marine/aquatic invertebrate stem cells (MISC) have recently attracted more scientific interest, the know-how still lags behind the attention they deserve. MISC are not only highly potent but, in many cases, abundant (e.g., up to one third of all cells in the animal); they are not confined to permanent niches and participate in delayed-aging and whole-body regeneration phenomena, knowledge of which can be clinically relevant. Moreover, they hold massive untapped potential for the discovery of new bioactive molecules that can be used for human health (antitumor, antimicrobial) and biotechnology.
The MARISTEM COST Action (Stem Cells of Marine/Aquatic Invertebrates: From Basic Research to Innovative Applications) aims to connect the fragmented European MISC community. Under this scientific umbrella, the Action develops the concept of adult stem cells that do not share many properties with vertebrate stem cells; it organizes meetings, summer schools, and workshops, stimulates young researchers, supplies technical and advisory support via short-term scientific studies, and builds new bridges between the MISC community and biomedical disciplines.

Keywords: aquatic/marine invertebrates, adult stem cell, regeneration, cell cultures, bioactive molecules

Procedia PDF Downloads 169
480 Transnationalism and the Temporality of Naturalized Citizenship

Authors: Edward Shizha

Abstract:

Citizenship is not only political; it is also a socio-cultural status that naturalized immigrants desire. However, the outcomes of citizenship desirability are determined by forces outside the individual’s control, based on legislation and laws that are designed at the macro and exosystemic levels by politicians and policy makers. These laws are then applied to determine the status (permanency or temporariness) of citizenship for immigrants and refugees, but the same laws do not apply to non-immigrant citizens who attain it by birth. While theoretically citizenship has generally been considered an irrevocable legal status, the highest and most secure legal status one can hold in a state, it is not inviolate for immigrants. While Article 8 of the United Nations Convention on the Reduction of Statelessness provides grounds for revocation of citizenship obtained by immigrants and refugees in host countries, nation-states have their own laws, tied to the convention, that provide grounds for revocation. Ever since the 9/11 attacks in the USA, there has been a rise in conditional citizenship and in the state’s withdrawal of citizenship through revocation laws that denaturalize citizens, who end up not merely losing their citizenship but also the right to reside in the country of immigration. Because immigrants can be perceived as a security threat, the securitization of citizenship and the accompanying legislative changes have been adopted specifically to allow greater discretionary power in stripping people of their citizenship. The paper ‘Do We Really Belong Here?’ Transnationalism and the Temporality of Naturalized Citizenship examines literature on the temporality of naturalized citizenship and questions whether citizenship, for newcomers (immigrants and refugees), is a protected human right or a privilege. The paper argues that citizenship in a host country is a well-sought-after status among newcomers.
The question is whether their citizenship, if granted, has a permanent or temporary status and whether it is treated in the same way as that of non-immigrant citizens. The paper further argues that, despite citizenship generally having been considered an irrevocable status in most Western countries, in practice, if not in law, citizenship for immigrants and refugees comes with strings attached because of the policies and laws that control naturalized citizenship. These laws can be used to denationalize naturalized citizens through revocations aimed at those stigmatized as ‘undesirables’, who are threatened with deportation. Whereas non-immigrant citizens (those who attain citizenship by birth) have an absolute right to their citizenship, this is seldom the case for immigrants. This paper takes a multidisciplinary approach, using the macrosystem and exosystem of Urie Bronfenbrenner’s ecological systems theory, to examine and review literature on the temporality of naturalized citizenship, and questions whether citizenship is a protected right or a privilege for immigrants. The paper challenges the human rights violation of citizenship revocation and argues for equality of treatment for all citizens regardless of how they acquired their citizenship. The fragility of naturalized citizenship undermines the basic rights and securities that citizenship status can provide to a person as an inclusive practice in a diverse society.

Keywords: citizenship, citizenship revocation, dual citizenship, human rights, naturalization, naturalized citizenship

Procedia PDF Downloads 75
479 Fake News and Conspiracy Narratives in the COVID-19 Crisis: An International Comparison

Authors: Caja Thimm

Abstract:

Already well before the Corona pandemic hit the world, ‘fake news’ was no longer regarded as a harmless twist of the truth but as intentionally composed disinformation, often with the goal of manipulative populist propaganda. During the Corona crisis, conspiracy narratives in particular became a worldwide phenomenon with dangerous consequences (e.g., anti-vaccination myths). The success of this manipulated news needs to be counteracted by trustworthy news, which in Europe particularly includes public broadcasting media and their social media channels. To understand better how the main public broadcasters in Germany, the UK, and France used Instagram strategically, a comparative study was carried out. In our empirical study, we compared the activities of selected formats during the Corona crisis in order to see how the public broadcasters reached their audiences and how this might, in the longer run, affect journalistic strategies on social media platforms. A first analysis showed that the increase in the use of social media overall was striking. Almost one in two adult online users (48%) obtained information about the virus on social media, and in total, 38% of the younger age group (18-24) looked for Covid-19 information on Instagram, so the platform can be regarded as one of the central digital spaces for Corona-related information searches. Quantitative measures showed that 47% of recent posts by the broadcasters were related to Corona, and 7% treated conspiracy myths. For the more detailed content analysis, the following categories were applied: digital storytelling and Instastories, textuality and semantic keys, links to information, stickers, video chat, fact checking, news ticker, service, and infographics and animated tables. In addition to these basic features, we particularly looked for new formats created during the crisis.
The journalistic use of social media platforms opens up immediate and creative ways of applying the media logics of the respective platforms, and particularly the BBC and ARD formats proved to be interactive, responsive, and entertaining. Among them were new formats such as a space for user questions and personal uploads, interviews, music, comedy, etc. The fact-checking channel in particular got a lot of attention, as many user questions focused on the conspiracy theories which dominated public discourse during many weeks in 2020. In the presentation, we will introduce eight particular strategies that show how public broadcasting journalism can adopt digital platforms, use them creatively, and hence help counteract conspiracy narratives and fake news.

Keywords: fake news, social media, digital journalism, digital methods

Procedia PDF Downloads 156
478 Transport Properties of Alkali Nitrites

Authors: Y. Mateyshina, A. Ulihin, N. Uvarov

Abstract:

Electrolytes with different types of charge carriers find wide application in devices such as sensors, electrochemical equipment, and batteries. One of the important components ensuring the stable functioning of such devices is the electrolyte, which has to be characterized by high conductivity, thermal stability, and a wide electrochemical window. In addition to many of the characteristics that make liquid electrolytes advantageous, solid-state electrolytes have good mechanical stability and a wide working temperature range. Thus the search for new systems of solid electrolytes with high conductivity is a topical task of solid-state chemistry. Families of alkali perchlorates and nitrates have been investigated by us earlier. Literature data on the transport properties of alkali nitrites are absent. Nevertheless, alkali nitrites MeNO2 (Me = Li+, Na+, K+, Rb+ and Cs+), except for the lithium salt, have high-temperature phases with a crystal structure of the NaCl type. The high-temperature phases of the nitrites are orientationally disordered, i.e., the non-spherical anions are reoriented over several equivalent directions in the crystal lattice. Pure lithium nitrite LiNO2 is characterized by an ionic conductivity near 10⁻⁴ S/cm at 180°C, is more stable than lithium nitrate, and can be used as a component for the synthesis of composite electrolytes. In this work, composite solid electrolytes in the binary system LiNO2 - A (A = MgO, -Al2O3, Fe2O3, CeO2, SnO2, SiO2) were synthesized and their structural, thermodynamic and electrical properties investigated. The alkali nitrite was obtained by an exchange reaction from water solutions of barium nitrite and alkali sulfate. The synthesized salt was characterized by the X-ray powder diffraction technique using a D8 Advance X-ray diffractometer with Cu Kα radiation. Using thermal analysis, the temperatures of dehydration and thermal decomposition of the salt were determined.
The conductivity was measured using a two-electrode scheme in a fore-vacuum (6.7 Pa) with an HP 4284A precision LCR meter in the frequency range 20 Hz < ν < 1 MHz. Solid composite electrolytes LiNO2 - A (A = MgO, -Al2O3, Fe2O3, CeO2, SnO2, SiO2) were synthesized by mixing the preliminarily dehydrated components followed by sintering at 250°C. In the series of alkali metal nitrites from Li+ to Cs+, the conductivity varies non-monotonically with increasing cation radius. The minimum conductivity is observed for KNO2; however, with a further increase in the cation radius along the series, the conductivity tends to increase. The work was supported by the Russian Foundation for Basic Research, grant #14-03-31442.
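For a two-electrode pellet measurement like the one described, the ionic conductivity follows from the measured bulk resistance and the sample geometry as σ = t / (R·A). A minimal sketch, with entirely hypothetical pellet dimensions and resistance (the abstract does not report them):

```python
import math

def conductivity_S_per_cm(R_ohm: float, thickness_cm: float, diameter_cm: float) -> float:
    """Ionic conductivity sigma = t / (R * A) for a cylindrical pellet
    measured between two electrodes. All inputs here are hypothetical."""
    area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
    return thickness_cm / (R_ohm * area_cm2)

# Hypothetical pellet: 0.1 cm thick, 1 cm diameter, bulk resistance 1.27 kOhm
sigma = conductivity_S_per_cm(1270.0, 0.1, 1.0)
print(f"{sigma:.1e} S/cm")  # on the order of 1e-4 S/cm, the magnitude reported for LiNO2 at 180 °C
```

In practice R would be extracted from the intercept of the impedance arc over the 20 Hz to 1 MHz sweep, not read off a single frequency point.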

Keywords: conductivity, alkali nitrites, composite electrolytes, transport properties

Procedia PDF Downloads 319
477 Barriers and Opportunities in Apprenticeship Training: How to Complete a Vocational Upper Secondary Qualification with Intermediate Finnish Language Skills

Authors: Inkeri Jaaskelainen

Abstract:

The aim of this study is to shed light on what it is like to study in apprenticeship training using intermediate (or even lower-level) Finnish. The aim is to find out and describe these students' experiences and feelings while acquiring a profession in Finnish, as it is important to understand how adult learners with an immigrant background learn and how their needs could be better taken into account. Many students choose apprenticeships and start vocational training while their Finnish language skills are still very weak. At work, students must simultaneously learn Finnish and complete vocational studies in a noisy, demanding, and stressful environment. Learning and understanding new things is very challenging under these circumstances, and sometimes students become exhausted and experience a lot of stress, which makes learning even more difficult. Students differ from each other, and so do their ways of learning. Both duties at work and school assignments require reasonably good general language skills, and, especially at work, language skills are also a safety issue. The empirical target of this study is a group of students with an immigrant background who studied in various fields with intensive L2 support in 2016-2018 and who by now have completed a vocational upper secondary qualification. The interview material for this narrative study was collected from those who completed apprenticeship training in 2019-2020. The data collection methods used are a structured thematic interview, a questionnaire, and observational data. The interviewees have varied cultural and educational backgrounds: some have completed an academic degree in their country of origin, while others have learned to read and write only in Finland. The analysis of the material utilizes thematic analysis, which is used to examine learning and related experiences. Learning a language at work is very different from traditional classroom teaching.
With evolving language skills, at an intermediate level at best, rushing and stress make it even more difficult to understand and increase the fear of failure. Constant noise, rapidly changing situations, and uncertainty undermine the learning and well-being of apprentices. According to preliminary results, apprenticeship training is well suited to the needs of an adult immigrant student. In apprenticeship training, students need a lot of support for learning and for understanding a new communication and working culture. Stress can result in, e.g., fatigue, frustration, and difficulties in remembering and understanding. Apprenticeship training can be seen as a good path to working life. However, L2 support is a very important part of apprenticeship training, and it indeed helps students to believe that one day they will graduate and even find employment in their new country.

Keywords: apprenticeship training, vocational basic degree, Finnish learning, well-being

Procedia PDF Downloads 133
476 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices, underscoring the necessity for their precise identification. The narrative then turns to the technological evolution in defect detection, depicting a shift from rudimentary methods such as optical microscopy and basic electronic tests to more sophisticated techniques, including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass-production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocities.
Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.
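The ML-based classification the review refers to can be illustrated in miniature. The sketch below is not from the review: it is a hedged, toy nearest-centroid classifier over made-up two-feature defect descriptors (the labels "scratch" and "particle" and the feature values are assumptions for illustration; production systems use deep vision models over full inspection images).

```python
# Toy nearest-centroid defect classifier over hypothetical feature vectors
# (e.g., normalized elongation and blob compactness from an inspection image).
from collections import defaultdict

def train_centroids(samples):
    """samples: list of (label, feature_vector) -> per-class mean vectors."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for label, vec in samples:
        sums[label] = list(vec) if sums[label] is None else [a + b for a, b in zip(sums[label], vec)]
        counts[label] += 1
    return {lbl: [x / counts[lbl] for x in s] for lbl, s in sums.items()}

def classify(centroids, vec):
    # assign to the class whose centroid is nearest in squared Euclidean distance
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], vec))

train = [("scratch", [0.9, 0.1]), ("scratch", [0.8, 0.2]),
         ("particle", [0.1, 0.9]), ("particle", [0.2, 0.8])]
model = train_centroids(train)
print(classify(model, [0.85, 0.15]))  # → scratch
```

The gap the review identifies (scalability, throughput) is precisely what separates a sketch like this from production-grade inline inspection.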

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 51
475 Wave-Powered Airlift Pump Primarily for Artificial Upwelling

Authors: Bruno Cossu, Elio Carlo

Abstract:

The invention (patent pending) relates to the field of devices aimed at harnessing wave energy (WEC), especially for artificial upwelling, forced downwelling, and the production of compressed air. In its basic form, the pump consists of a hydro-pneumatic machine, driven by wave energy, characterised by the fact that it has no moving mechanical parts and is made up of only two structural components: a hollow body, which is open at the bottom to the sea and partially immersed in seawater, and a tube, joined together to form a single body. The shape of the hollow body resembles a mushroom whose cap and stem are hollow; the stem is open at both ends, and the lower part of its surface is crossed by holes; the tube is external and coaxial to the stem and is joined to it so as to form a single body. This shape of the hollow body and this type of connection to the tube allow the pump to operate simultaneously as an air compressor (OWC) on the cap side and as an airlift on the stem side. The pump can be implemented in four versions, each of which provides different variants and methods of implementation: (1) for the artificial upwelling of cold, deep ocean water; (2) for the lifting and transfer of these waters to the place of use (above all, fish-farming plants), even if kilometres away; (3) for the forced downwelling of surface seawater; (4) for the forced downwelling of surface water, its oxygenation, and the simultaneous production of compressed air.
The transfer of the deep water or the downwelling of the raised surface water (pump versions 2 and 3 above) is obtained by making the water raised by the airlift flow into the upper inlet of another pipe, internal or adjoined to the airlift. The downwelling of raised surface water, its oxygenation, and the simultaneous production of compressed air (pump version 4) is obtained by installing a venturi tube on the upper end of the pipe, whose restricted section is connected to the external atmosphere, so that it also operates like a hydraulic air compressor (trompe). Furthermore, by combining one or more pumps for the upwelling of cold, deep water with one or more pumps for the downwelling of warm surface water, the system can be used in an Ocean Thermal Energy Conversion plant to supply the cold and warm water required for its operation, thus allowing the use, at no additional cost and for the purposes indicated in points 1 to 4, of the thermal energy of the treated seawater in addition to the mechanical energy of the waves.

Keywords: air lifted upwelling, fish farming plant, hydraulic air compressor, wave energy converter

Procedia PDF Downloads 148
474 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction

Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun

Abstract:

Usability has become a basic product requirement from the consumer's perspective, and failing that requirement ends with the customer not using the product. Identifying usability issues by analyzing quantitative and qualitative data collected from usability testing and evaluation activities aids the process of product design, yet the lack of studies and research on analysis methodologies for qualitative text data in the usability field inhibits the potential of these data for more useful applications. The possibility of analyzing qualitative text data has grown with the rapid development of data analysis studies, such as the natural language processing field, which enables computers to understand human language, and the machine learning field, which provides predictive models and clustering tools. Therefore, this research aims to study the applicability of text processing algorithms to the analysis of qualitative text data collected from usability activities. This research utilized datasets collected from an LG neckband headset usability experiment, in which the datasets consist of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with the text processing algorithm, includes projecting comments into a vector space, labeling them with the subject and product physical feature data, and clustering to validate the result of the comment vector clustering. The result shows the 'volume and music control button' as the usability feature that best matches the cluster of comment vectors, where the centroid comments of one cluster emphasized button positions, while the centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons were designed separately, the participants experienced less confusion, and thus the comments mentioned only the buttons' positions.
When the volume and music control buttons were instead designed as a single button, the participants experienced interface issues with the buttons, such as unclear operating methods and confusion between the buttons' functions. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms in analyzing qualitative text data from usability testing and evaluations.
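The vectorize-cluster-inspect-centroid pipeline described above can be sketched in miniature. This is not the authors' code: the comments, the bag-of-words vectors, the two seed centroids, and the single assignment step are all simplified assumptions standing in for a full TF-IDF plus k-means pipeline.

```python
# Hedged sketch: bag-of-words vectors for hypothetical usability comments,
# one k-means-style assignment step, and the comment nearest each centroid,
# mirroring the "centroid comment" idea in the abstract.
from collections import Counter
import math

def vec(text):
    return Counter(text.lower().split())

def cos(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

comments = [
    "volume button position is hard to reach",          # position-themed
    "button position feels awkward on the neckband",
    "volume and music share one confusing button",      # interface-themed
    "single button makes functions confusing",
]
seeds = [vec(comments[0]), vec(comments[2])]            # hypothetical seeds
clusters = {0: [], 1: []}
for c in comments:
    clusters[max((0, 1), key=lambda k: cos(vec(c), seeds[k]))].append(c)
for k, members in clusters.items():
    # the "centroid comment": the member most similar to its cluster seed
    print(max(members, key=lambda c: cos(vec(c), seeds[k])))
```

A real analysis would iterate the assignment and centroid-update steps to convergence and use TF-IDF weighting; the inspection of centroid comments, however, works exactly as shown.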

Keywords: usability, qualitative data, text-processing algorithm, natural language processing

Procedia PDF Downloads 285
473 Flow Field Optimization for Proton Exchange Membrane Fuel Cells

Authors: Xiao-Dong Wang, Wei-Mon Yan

Abstract:

The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to look for the optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate heat effects using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for gaseous species, liquid water transport equations in the channels, gas diffusion layers, and catalyst layers, a water transport equation in the membrane, and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case with all channel heights and widths set at 1 mm yields Pcell = 7260 W m⁻². The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W m⁻², an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase the sub-rib convection, effectively removing liquid water and enhancing oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured, without a large loss in cell performance, is also tested.
The use of a straight final channel of 0.1 mm height led to a 7.37% power loss, while a design with all channel widths set to 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. Under the simulation conditions studied herein, the presence of a final diverging channel has a greater impact on cell performance than the fine adjustment of channel width.

Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection

Procedia PDF Downloads 296
472 Colorful Ethnoreligious Map of Iraq and the Current Situation of Minorities in the Country

Authors: Meszár Tárik

Abstract:

The aim of the study is to introduce the minority groups living in Iraq and to shed light on their current situation. The Middle East is a rather heterogeneous region in ethnic terms. It includes many ethnic, national, religious, linguistic, or ethnoreligious groups. The relationship between the majority and minority is the main cause of various conflicts in the region. It seems that most of the post-Ottoman states have not yet developed a unified national identity capable of integrating their multi-ethnic societies. The issue of minorities living in the Middle East is highly politicized and controversial, as the various Arab states consider the treatment of minorities as their internal affair, do not recognize discrimination or even deny the existence of any kind of minorities on their territory. This attitude of the Middle Eastern states may also be due to the fact that the minority issue can be abused and can serve as a reference point for the intervention policies of Western countries at any time. Methodologically, the challenges of these groups are perceived through the manifestos of prominent individuals and organizations belonging to minorities. The basic aim is to present the minorities' own history in dealing with the issue. It also introduces the different ethnic and religious minorities in Iraq and analyzes their situation during the operation of the terrorist organization "Islamic State" and in the aftermath. It is clear that the situation of these communities deteriorated significantly with the advance of ISIS, but it is also clear that even after the expulsion of the militant group, we cannot necessarily report an improvement in this area, especially in terms of the ability of minorities to assert their interests and physical security. The emergence of armed militias involved in the expulsion of ISIS sometimes has extremely negative effects on them.
Until the interests of non-Muslims are adequately represented at the local level and in the legislature, most experts and advocates believe that little will change in their situation. When conflicts flare, many Iraqi citizens leave the country, but because of the poor public security situation (threats from terrorist organizations, interventions by other countries), this displacement causes serious problems not only outside the country's borders but also within the country. Another ominous implication for minorities is that their communities are very slow, if ever, to return to their homes after fleeing their own settlements. An important finding of the study is that this phenomenon is changing the face of traditional Iraqi settlements and threatens to plunge groups that have lived there for thousands of years into the abyss of history. Therefore, we not only present the current situation of minorities living in Iraq but also discuss their future possibilities.

Keywords: Middle East, Iraq, Islamic State, minorities

Procedia PDF Downloads 85
471 Recent Policy Changes in Israeli Early Childhood Frameworks: Hope for the Future

Authors: Yaara Shilo

Abstract:

Early childhood education and care (ECEC) in Israel has undergone extensive reform and now requires daycare centers to meet internationally recognized professional standards. Since 1948, one of the aims of childcare facilities has been to enable women's participation in the workforce. A 1965 law grouped daycare centers for young children with facilities for the elderly and for disabled persons under the same authority. In the 1970s, ECEC leaders sought to change childcare from proprietary to educational facilities. From 1976, deliberations in the Knesset regarding the appropriate attribution of ECEC frameworks resulted in their being moved to various authorities that supported women's employment: the Ministries of Finance, Industry, and Commerce, as well as the Welfare Department. Prior to 2018, 75% of infants and toddlers in institutional care were in unlicensed and unsupervised settings. Legislative processes accompanied the conceptual change to an eventual appropriate attribution of ECEC frameworks. Position papers over the past two decades resulted in recommendations for standards conforming to OECD regulations. Simultaneously, incidents of child abuse, some resulting in death, riveted public attention to the need for adequate government supervision, accelerating the legislative process. Appropriate care for very young children must center on quality interactions with caregivers, thus requiring adequate staff training. Finally, in 2018 a law was passed stipulating standards for staff training, proper facilities, child-adult ratios, and safety measures. The Ariav commission expanded training to caregivers for ages 0-3. Transfer of the ECEC to the Ministry of Education ensured the establishment of basic training. The groundwork created by new legislation initiated professional development of EC educators for ages 0-3. This process should raise salaries and bolster the system's ability to attract quality employees.
In 2022 responsibility for ECEC ages 0-3 was transferred from the Ministry of Finance to the Ministry of Education, shifting emphasis from proprietary care to professional considerations focusing on wellbeing and early childhood education. The recent revolutionary changes in ECEC point to a new age in the care and education of Israel’s youngest citizens. Implementation of international standards, adequate training, and professionalization of the workforce focus on the child’s needs.

Keywords: policy, early childhood, care and education, daycare, development

Procedia PDF Downloads 115
470 Energy Efficiency Measures in Canada’s Iron and Steel Industry

Authors: A. Talaei, M. Ahiduzzaman, A. Kumar

Abstract:

In Canada, an increase in the production of iron and steel is anticipated to satisfy the increasing demand for iron and steel in the oil sands and automobile industries. It is predicted that GHG emissions from the iron and steel sector will show a continuous increase till 2030 and, with emissions of 20 million tonnes of carbon dioxide equivalent, the sector will account for more than 2% of total national GHG emissions, or 12% of industrial emissions (i.e., a 25% increase from 2010 levels). Therefore, there is an urgent need to improve the energy intensity and to implement energy efficiency measures in the industry to reduce the GHG footprint. This paper analyzes the current energy consumption in the Canadian iron and steel industry and identifies energy efficiency opportunities to improve the energy intensity and mitigate greenhouse gas emissions from this industry. In order to do this, a demand tree is developed representing the different iron and steel production routes and the technologies within each route. The main energy consumer within the industry is found to be fired heaters, which account for 81% of overall energy consumption, followed by motor systems and steam generation, each accounting for 7% of total energy consumption. Eighteen different energy efficiency measures are identified which will help improve efficiency in various subsectors of the industry. In the sintering process, heat recovery from coolers provides a high potential for energy saving and can be integrated in both new and existing plants. Coke dry quenching (CDQ) has the same advantages. Within the blast furnace iron-making process, injection of large amounts of coal into the furnace appears to be more effective than any other option in this category. In addition, because coal-powered electricity is being phased out in Ontario (where the majority of iron and steel plants are located), there will be surplus coal that could be used in iron and steel plants.
In the steel-making processes, the recovery of Basic Oxygen Furnace (BOF) gas and scrap preheating provide considerable potential for energy savings in BOF and Electric Arc Furnace (EAF) steel-making processes, respectively. However, despite the energy savings potential, BOF gas recovery is not applicable in existing plants using steam recovery processes. Given that the share of EAF in steel production is expected to increase, the application potential of the technology will be limited. On the other hand, the long lifetime of the technology and the expected capacity increase of EAF make scrap preheating a justified energy saving option. This paper presents the results of the assessment of the above-mentioned options in terms of costs and GHG mitigation potential.

Keywords: iron and steel sectors, energy efficiency improvement, blast furnace iron-making process, GHG mitigation

Procedia PDF Downloads 396
469 Affects Associations Analysis in Emergency Situations

Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko

Abstract:

Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations, invisible at first glance, is a source of new knowledge which can subsequently be used for prediction. We used this data mining technique (an automatic and objective method) to learn about interesting affect associations in a corpus of emergency phone calls. We also made an attempt to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy); (2) valence (negative, neutral, or positive); (3) intensity (low, typical, alternating, high). Additional information that provides clues to the speaker's emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words) and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item is a single piece of previously annotated information). To generate the rules in the form of an implication X → Y (where X and Y are frequent k-itemsets), the Apriori algorithm was used, as it avoids performing needless computations. Then, two basic measures (support and confidence) and several additional symmetric and asymmetric objective measures (e.g. Laplace, Conviction, Interest Factor, Cosine, correlation coefficient) were calculated for each rule. Each applied interestingness measure revealed different rules; we selected the top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions. There are, however, strong rules that include neutral or even positive emotions.
Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our emergency phone call corpus. The acquired knowledge can be used to predict and fill in the emotional profile of a new caller. Furthermore, analysis of a rule's possible situational context may provide a clue to the situation the caller is in.
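The level-wise search and the support/confidence measures described above can be sketched in a few lines. The following is a minimal, self-contained Apriori implementation; the annotated call labels are invented for illustration and do not come from the corpus.

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.3, min_confidence=0.6):
    """Tiny Apriori: find frequent itemsets level by level, then emit
    rules X -> Y whose support and confidence clear the thresholds."""
    n = len(transactions)
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # Level-wise search: only frequent k-itemsets are extended to (k+1)-itemsets,
    # which is the pruning that "avoids performing needless computations".
    items = {i for t in transactions for i in t}
    frequent, level = {}, [frozenset([i]) for i in sorted(items)]
    while level:
        level = [s for s in level if support(s) >= min_support]
        for s in level:
            frequent[s] = support(s)
        level = list({a | b for a in level for b in level
                      if len(a | b) == len(a) + 1})

    rules = []
    for itemset, sup in frequent.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for lhs in combinations(sorted(itemset), r):
                lhs = frozenset(lhs)
                conf = sup / frequent[lhs]   # confidence = sup(X∪Y) / sup(X)
                if conf >= min_confidence:
                    rules.append((set(lhs), set(itemset - lhs), sup, conf))
    return rules

# Hypothetical annotated calls (emotion labels invented for illustration)
calls = [
    {"sadness", "anxiety"},
    {"sadness", "anxiety"},
    {"sadness", "anxiety", "stress"},
    {"sadness", "weariness"},
    {"anger", "stress"},
]
for lhs, rhs, sup, conf in apriori_rules(calls):
    print(f"{lhs} -> {rhs}  support={sup:.2f} confidence={conf:.2f}")
```

On this toy corpus the miner recovers rules such as {sadness} → {anxiety}, echoing the shape (though not the data) of the rules reported above.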

Keywords: data mining, emergency phone calls, emotional profiles, rules

Procedia PDF Downloads 408
468 Cognitive Control Moderates the Concurrent Effect of Autistic and Schizotypal Traits on Divergent Thinking

Authors: Julie Ramain, Christine Mohr, Ahmad Abu-Akel

Abstract:

Divergent thinking—a cognitive component of creativity—and particularly the ability to generate unique and novel ideas, has been linked to both autistic and schizotypal traits. However, to our knowledge, the concurrent effect of these trait dimensions on divergent thinking has not been investigated. Moreover, it has been suggested that creativity is associated with different types of attention and cognitive control, and consequently with how information is processed in a given context. Intriguingly, consistent with the diametric model, autistic and schizotypal traits have been associated with contrasting attentional and cognitive control styles. Positive schizotypal traits have been associated with reactive cognitive control and attentional flexibility, while autistic traits have been associated with proactive cognitive control and an increased focus of attention. The current study investigated the relationship between divergent thinking, autistic and schizotypal traits, and cognitive control in a non-clinical sample of 83 individuals (males = 42%; mean age = 22.37, SD = 2.93), sufficient to detect a medium effect size. Divergent thinking was evaluated with an adapted version of the Figural Torrance Test of Creative Thinking. Crucially, since we were interested in testing divergent thinking productivity across contexts, participants were asked to generate items from basic shapes in four different contexts. The variance of the proportion of unique to total responses across contexts represented a measure of context adaptability, with lower variance indicating increased context adaptability. Cognitive control was estimated with the Behavioral Proactive Index of the AX-CPT task, with higher scores representing the ability to actively maintain goal-relevant information in a sustained/anticipatory manner. Autistic and schizotypal traits were assessed with the Autism Quotient (AQ) and the Community Assessment of Psychic Experiences (CAPE-42).
Generalized linear models revealed a three-way interaction of autistic traits, positive schizotypal traits, and proactive cognitive control associated with increased context adaptability. Specifically, the concurrent effect of autistic and positive schizotypal traits on increased context adaptability was moderated by the level of proactive control and was only significant when proactive cognitive control was high. Our study reveals that autistic and positive schizotypal traits interactively facilitate the capacity to generate unique ideas across various contexts. However, this effect depends on cognitive control mechanisms indicative of the ability to proactively maintain attention when needed. The current results point to a unique profile of divergent thinkers who can tap both systematic and flexible processing modes within and across contexts. This is particularly intriguing, as such a combination of phenotypes has been proposed to explain the genius of Beethoven, Nash, and Newton.
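The context-adaptability measure defined above (variance of the unique-to-total response proportions across the four contexts, lower variance meaning higher adaptability) is straightforward to compute. A sketch with hypothetical per-participant counts; the numbers are invented, not taken from the study.

```python
import statistics

def context_adaptability(unique_counts, total_counts):
    """Variance of the proportion of unique to total responses across contexts.

    Lower variance = more even unique-idea production across contexts,
    i.e. higher context adaptability under the study's definition.
    """
    proportions = [u / t for u, t in zip(unique_counts, total_counts)]
    return statistics.pvariance(proportions)

# Hypothetical response counts for one participant in the four contexts
uniques = [5, 6, 5, 6]    # unique responses per context
totals = [10, 12, 10, 11]  # total responses per context
print(context_adaptability(uniques, totals))
```

A participant whose unique-idea rate swings wildly between contexts would score a much larger variance, i.e. lower context adaptability, than one who produces unique ideas at a steady rate.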

Keywords: autism, schizotypy, creativity, cognitive control

Procedia PDF Downloads 137
467 Frequency Interpretation of a Wave Function, and a Vertical Waveform Treated as A 'Quantum Leap'

Authors: Anthony Coogan

Abstract:

Born's probability interpretation of wave functions would have led to nearly identical results had he chosen a frequency interpretation instead. Logically, Born may have assumed that only one electron was under consideration, making it nonsensical to propose a frequency wave. The author's suggestion: the actual experimental results were not of a single electron; rather, they were groups of reflected x-ray photons. The vertical waveform used by Schrödinger in his 'Particle in the Box' theory makes sense if it was intended to represent a quantum leap. The author extended the single vertical panel to form a bar chart: separate panels would represent different energy levels. The proposed bar chart would be populated by reflected photons. Expansion of the basic ideas: part of Schrödinger's 'Particle in the Box' theory may be valid despite negative criticism. The waveform used in the diagram is vertical, which may seem absurd because real waves decay at a measurable rate, rather than instantaneously. However, there may be one notable exception. Supposedly, following from the theory, the Uncertainty Principle was derived – may a quantum leap not be represented as an instantaneous waveform? The great Schrödinger must have had some reason to suggest a vertical waveform if the prevalent belief was that such waveforms did not exist. Complex wave forms representing a particle are usually assumed to be continuous. The actual observations made were x-ray photons, some of which had struck an electron, been reflected, and then moved toward a detector. From Born's perspective, doing similar work in the years in question, 1926-27, he would also have considered a single electron – leading him to choose a probability distribution. Probability distributions appear very similar to frequency distributions, but the former are considered to represent the likelihood of future events.
Born's interpretation of the results of quantum experiments led (or perhaps misled) many researchers into claiming that humans can influence events just by looking at them, e.g. collapsing complex wave functions by 'looking at the electron to see which slit it emerged from', while in reality light reflected from the electron moved in the observer's direction after the electron had moved away. Astronomers may say that they 'look out into the universe', but this is logic opposed to the views of Newton, Hooke, and many observers such as Romer, in that light carries information from a source or reflector to an observer, rather than the reverse. Conclusion: due to the controversial nature of these ideas, especially their implications for the nature of complex numbers used in applications in science and engineering, some time may pass before any consensus is reached.

Keywords: complex wave functions not necessary, frequency distributions instead of wave functions, information carried by light, sketch graph of uncertainty principle

Procedia PDF Downloads 199
466 Re-interpreting Ruskin with Respect to the Wall

Authors: Anjali Sadanand, R. V. Nagarajan

Abstract:

Architecture morphs with advances in technology, and the roof, wall, and floor, as basic elements of a building, follow in redefining themselves over time. Their contribution is bound by time and held by design principles that deal with function, sturdiness, and beauty. Architecture engages with people to give joy through its form, material, design structure, and spatial qualities. This paper attempts to re-interpret John Ruskin's "The Seven Lamps of Architecture" in the context of the architecture of the modern and present periods. The paper focuses on the "wall" as an element of study in this context. Four of Ruskin's seven lamps will be discussed, namely beauty, truth, life, and memory, through examples of architecture ranging from modernism to the contemporary architecture of today. The study will focus on the relevance of Ruskin's principles to the "wall" specifically, in buildings of different materials and over a range of typologies from all parts of the world. Two examples will be analyzed for each lamp. It will be shown that in each case Ruskin's lamps remain relevant to modern and contemporary architecture. Nature, to which Ruskin alludes for his lamp of "beauty", is found in the different expressions of interpretation used by Corbusier in his Villa Stein façade, based on proportion found in nature, and in the direct expression of Toyo Ito in his translation of an understanding of the structure of trees into his façade design of the showroom for a Japanese bag boutique. "Truth" is shown in Mies van der Rohe's Crown Hall building, with its clarity of material and structure, and Studio Mumbai's Palmyra House, which celebrates the use of natural materials and local craftsmanship. "Life" is reviewed with a sustainable house in Kerala by Ashrams Ravi and Alvar Aalto's summer house, which illustrate walls as repositories of intellectual thought and craft.
"Memory" is discussed with Charles Correa's Jawahar Kala Kendra and Venturi's Vanna Venturi House, disclosing facades as text in the context of their materiality and iconography. Beauty is reviewed in Villa Stein and Toyo Ito's branded retail building in Tokyo. The paper thus concludes that Ruskin's lamps can be interpreted in today's context and add richness of meaning to the understanding of architecture.

Keywords: beauty, design, facade, modernism

Procedia PDF Downloads 118
465 Collaboration versus Cooperation: Grassroots Activism in Divided Cities and Communication Networks

Authors: R. Barbour

Abstract:

Peace-building organisations act as a network of information for communities. Through fieldwork, it was highlighted that grassroots organisations and activists may cooperate with each other in their actions of peace-building; however, they would not collaborate. Within two divided societies, Nicosia in Cyprus and Jerusalem in Israel, there is a distinction made by organisations and activists between activities that are more 'co-operative' than 'collaborative'. This theme became apparent in informal conversations and semi-structured interviews with various members of the activist communities. This idea needs further exploration, as these distinctions could impact the efficiency of peacebuilding activities within divided societies. Civil societies within landscapes that are divided both physically and socially play an important role in conflict resolution. How organisations and activists interact with each other can be very influential with regard to peacebuilding activities. Working together sets a positive example for divided communities. Cooperation may be considered a primary level of interaction between CSOs. Therefore, at the beginning of a working relationship, organisations cooperate over basic agendas, parallel power structures and focus, which lead to the same objective. Over time, in some instances, due to varying factors such as funding and greater trust and understanding within the relationship, processes progressed to more collaborative ways of working. It is evident that NGOs and activist groups are highly independent and focus on their own agendas before coming together over shared issues. At this time, there appears to be more collaboration among CSOs and activists in Nicosia than in Jerusalem. The aims and objectives of agendas also influence how organisations work together.
In recent years, Nicosia, and Cyprus in general, have perhaps shifted their focus from peace-building initiatives to environmental issues, which have become new-age reconciliation topics. Civil society does not automatically indicate like-minded organisations; however, solidarity within social groups can create ties that bring people and resources together. In unequal societies, such as those in Nicosia and Jerusalem, it is these ties that cut across groups and are essential for social cohesion. Societies are a collection of social groups: individuals who have come together over common beliefs. These groups in turn shape identities and determine the values and structures within societies. At many different levels and stages, social groups work together through cooperation and collaboration. These structures in turn have the capability to open up networks to less powerful or excluded groups, with the aim of producing social cohesion, which may contribute to social stability and economic welfare over an extended period.

Keywords: collaboration, cooperation, grassroots activism, networks of communication

Procedia PDF Downloads 158