Search results for: planned behavior theory

876 Managers' Awareness of Employees' Mental Health in Small- and Medium-Sized Enterprises in Underpopulated Mountainous Areas

Authors: Susumu Fukita, Hiromi Kawasaki, Satoko Yamasaki, Kotomi Yamashita, Tomoko Iki

Abstract:

The increase in the number of workers with mental health problems has become an issue. Many workers are employed by small- and medium-sized enterprises, which often support local employment and the economy, especially in underpopulated mountainous areas. It is important for managers to take mental health measures for employees, since small- and medium-sized enterprises rarely have the budget to hire health staff. It is therefore necessary to understand managers' attitudes toward the mental health of employees and to publicly support managers in promoting mental health measures. The purpose of this study was to examine the awareness of managers of small- and medium-sized enterprises regarding the mental health of employees and to consider how managers can be supported in taking mental health measures. Semi-structured interviews were conducted with six managers of small- and medium-sized enterprises in underpopulated mountainous areas in November 2019. Managers were asked about their awareness of the mental health of their employees. Qualitative descriptive analysis was used, and subcategories and categories were extracted. Four categories emerged. Regarding the mental health of employees, the managers acknowledged that if the appearance and behavior of employees do not interfere with their lives, the employees' mental health is judged to be normal. It was also found that the managers acknowledged a comfortable working environment due to the characteristics of the underpopulated mountainous area. On the other hand, the managers acknowledged that employees are dissatisfied with salaries and management systems. In addition, it was found that the managers acknowledged that some employees retire due to mental health problems. Although managers recognized that employees may be dissatisfied with salaries, they also recognized that there was a comfortable working environment, with good interpersonal relationships, due to the characteristics of the areas. Economic challenges are difficult to solve in underpopulated mountainous areas. It is useful to consider measures that take advantage of the characteristics of areas where it is easy to work because of good mutual relations, for example, creating a family-like workplace culture in which managers and employees can engage in daily conversation. The managers judged that employees were in good health if there was no interference with their lives. However, it is too late to take measures at the stage when mental health problems already interfere with daily life. Therefore, it is necessary to provide training for managers to learn observation techniques by which they can quickly notice changes in the situation of employees and respond appropriately, and to set up a contact point where managers can seek consultation. Local governments should actively provide public support, such as training for managers and establishing consultation desks, to maintain valuable employment and local economies in underpopulated mountainous areas.

Keywords: employer, mental health, small- and medium-sized enterprises, underpopulated areas

875 Investigation of Mangrove Area Effects on Hydrodynamic Conditions of a Tidal Dominant Strait Near the Strait of Hormuz

Authors: Maryam Hajibaba, Mohsen Soltanpour, Mehrnoosh Abbasian, S. Abbas Haghshenas

Abstract:

This paper aims to evaluate the key role of mangrove forests in the unique hydrodynamic characteristics of the Khuran Strait (KS) in the Persian Gulf. Investigation of the hydrodynamic conditions of the KS is vital to predict and estimate sedimentation and erosion all over the protected areas north of Qeshm Island. The KS (or Tang-e-Khuran) is located between Qeshm Island and the Iranian mainland and has a minimum width of approximately two kilometers. The hydrodynamics of the strait is dominated by strong tidal currents of up to 2 m/s. The bathymetry of the area is dynamic and complicated, as 1) strong currents exist in the area, which lead to apparent sand dune movements in the middle and southern parts of the strait, and 2) a vast area with mangrove coverage exists next to the narrowest part of the strait. This is why ordinary modeling schemes with normal mesh resolutions are not capable of high-accuracy estimation of current fields in the KS. A comprehensive set of measurements was carried out with several components to investigate the hydrodynamics and morphodynamics of the study area, including 1) vertical current profiling at six stations, 2) directional wave measurements at four stations, 3) water level measurements at six stations, 4) wind measurements at one station, and 5) sediment grab sampling at 100 locations. Additionally, a set of periodic hydrographic surveys was included in the program. The numerical simulation was carried out using the Delft3D Flow module. Model calibration was done by comparing water levels and depth-averaged current velocities against available observational data. The results clearly indicate that observed data and simulations only fit together if a realistic perspective of the mangrove area is well captured by the model bathymetry data. An unstructured grid was generated using RGFGRID and QUICKIN, and the flow model was driven with water level time-series at the open boundaries. Adopting the available field data, the key role of the mangrove area in the hydrodynamics of the study area can be studied. The results show that including the accurate geometry of the mangrove area and consideration of its sponge-like behavior are the key aspects through which a realistic current field can be simulated in the KS.

Keywords: Khuran Strait, Persian Gulf, tide, current, Delft3D

874 Impacts of Commercial Honeybees on Native Butterflies in High-Elevation Meadows in Utah, USA

Authors: Jacqueline Kunzelman, Val Anderson, Robert Johnson, Nicholas Anderson, Rebecca Bates

Abstract:

In an effort to protect honeybees from colony collapse disorder, beekeepers are filing for government permits to use natural lands as summer pasture for honeybees under the multiple-use management regime in the United States. Utilizing natural landscapes in high mountain ranges may help strengthen honeybee colonies, as this natural setting is generally devoid of the chemical pollutants and pesticides found in agricultural and urban settings. However, the introduction of a competitive species could greatly impact the native species occupying these natural landscapes. While honeybees and butterflies have different life histories, behavior, and foraging strategies, they compete for the same nectar resources. Few, if any, studies have focused on the potential population effects of commercial honeybees on native butterfly abundance and diversity. This study attempts to observe this impact using a paired before-after control-impact (BACI) design. Over the course of two years, malaise trap samples were collected every week during the months of the flowering season in two similar areas separated by 11 kilometers. Each area contained nine malaise trap sites for replication. In the first year, samples were taken to analyze and establish trends within the pollinating communities. In the second year, honeybees were introduced to only one of the two areas, and the change in trends between the two areas was assessed. Contrary to the original hypothesis, the resulting observation was an overall significant increase in the mean butterfly abundance in the impact area after honeybees were introduced, while the control area remained relatively stable. This overall increase in abundance over the season can be attributed to an increase in butterflies during the first and second periods of the data collection, when populations were near their peak. Several potential explanations are: 1) honeybees are deterring a natural predator or competitor of butterflies that previously limited population growth; 2) honeybees are consuming resources regularly used by butterflies, which may extend the foraging time and consequently the capture rates of butterflies; 3) environmental factors such as the number of rainy days were inconsistent between the control and impact areas, biasing capture rates. This ongoing research will help determine the suitability of high mountain ranges for the summer pasturing of honeybees and the population impacts on many different pollinators.
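
A BACI analysis of such counts reduces to testing the period (before/after) × area (control/impact) interaction. The sketch below illustrates that test with statsmodels on invented counts; the column names, data, and layout are hypothetical, not the study's.

```python
# Minimal BACI (before-after control-impact) sketch with invented counts.
# The period:area interaction term is the BACI effect of interest.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical weekly butterfly counts per trap site (not study data).
data = pd.DataFrame({
    "count": [12, 15, 11, 14, 13, 16, 22, 25, 24, 14, 12, 15],
    "period": ["before"] * 6 + ["after"] * 6,
    "area": ["impact", "impact", "impact", "control", "control", "control"] * 2,
})

model = smf.ols("count ~ period * area", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # the 'period:area' row tests the BACI effect
```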

Keywords: butterfly, competition, honeybee, pollinator

873 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era

Authors: Cagri Baris Kasap

Abstract:

In this study, how interface designers can be viewed as producers of culture in the current era is interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. 'The Author as Producer' is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and directly related to the base. Through it, he discusses what it could mean to see the author as producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production; he must do so in order to prepare his readers to become writers, and even make this possible for them by engineering an 'improved apparatus', and must work toward turning consumers into producers and collaborators. In today's world, it has become a leading business model within the Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple and Google to transform readers, spectators, consumers or users into collaborators and co-producers through platforms such as Facebook, YouTube and Amazon's CreateSpace Kindle Direct Publishing print-on-demand, e-book and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global market monopolies, it has become increasingly difficult to get insight into how one's writing and collaboration are used, captured, and capitalized as a user of Facebook or Google. Through the lens of this study, it could be argued that this criticism could very well be considered by digital producers, or even by the mass of collaborators, in contemporary social networking software. How do software and design incorporate users and their collaboration? Are users truly empowered? Are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? Thus, when using corporate systems like Google and Facebook, the iPhone and the Kindle without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the 'authors' and the 'prodUsers' in ways that secure their monopolistic business models by limiting the potential of the technology.

Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking

872 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover's algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equal superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including its special forms, which are Feynman (CNOT) and Pauli X. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the Grover iteration an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
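
The register sizing and iteration count described above can be illustrated numerically. This is a hedged sketch of the standard Grover arithmetic, not the authors' Q# code: it assumes the N log₂ K encoding from the abstract and the usual iteration count ⌊(π/4)√(search space/solutions)⌋ for a hypothetical instance.

```python
import math

def grover_register_and_iterations(n_nodes: int, k_degree: int, n_solutions: int = 1):
    """Register width and Grover iteration count for the bounded-degree TSP encoding.

    Each node stores the index of one chosen edge in log2(K) qubits, so the
    search register holds N*log2(K) qubits; the standard Grover iteration
    count is floor(pi/4 * sqrt(search_space / n_solutions)).
    """
    qubits = n_nodes * math.ceil(math.log2(k_degree))
    search_space = 2 ** qubits
    iterations = math.floor(math.pi / 4 * math.sqrt(search_space / n_solutions))
    return qubits, iterations

# Hypothetical instance: 6 nodes, degree bound K = 4, assuming one marked tour.
qubits, iters = grover_register_and_iterations(6, 4)
print(f"register: {qubits} qubits, ~{iters} Grover iterations")
```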

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

871 Examining Kokugaku as a Pattern of Defining Identity in Global Comparison

Authors: Mária Ildikó Farkas

Abstract:

Kokugaku of the Edo period can be seen as a key factor in defining cultural (and national) identity in the 18th and early 19th centuries on the basis of Japanese cultural heritage. Kokugaku focused on the Japanese classics: on exploring, studying and reviving (or even inventing) ancient Japanese language, literature, myths, history and also political ideology. 'Japanese culture' as such was distinguished from Chinese (and all other) cultures, and 'Japanese identity' was thus defined. Meiji scholars used kokugaku conceptions of Japan to construct a modern national identity based on premodern and culturalist conceptions of community. The Japanese cultural movement of the 18th-19th centuries (kokugaku) of defining cultural and national identity before modernization can be compared not with the development of Western Europe (where national identity was strongly attached to modern nation states) or other parts of Asia (where such identities emerged after Western colonization), but rather with the 'national awakening' movements of the peoples of East Central Europe, a comparison which has not yet been dealt with in the secondary literature. The role of a common language, culture, history and myths in the process of defining cultural identity, following mainly Miroslav Hroch's comparative and interdisciplinary theory of national development, can be examined in comparison with the identity-defining movements of the peoples of East Central Europe (18th-19th centuries). In the shadow of a cultural and/or political 'monolith' (China for Japan and Germany for Central Europe), before modernity, ethnic groups or communities started to evolve their own identities through cultural movements focusing on their own language and culture, thus creating their cultural identity and, in the end, a new sense of community, the nation. Comparing actual texts ('narratives') of the kokugaku scholars and Central European writers of the nation-building period (18th and early 19th centuries) can reveal the similarities of the discourses of deliberate searches for identity. Similar motives of argument can be identified in these narratives: 'language' as the primary bearer of collective identity, the role of language in culture, and 'culture' as the main common attribute of the community; and similar aspirations to explore, search for and develop native language, 'genuine' culture, and 'original' traditions. This comparative research, offering 'development patterns' for interpretation, can help us understand processes that may be ambiguously considered 'backward' or even 'deleterious' (e.g. cultural nationalism) or just 'unique'. 'Cultural identity' played a very important role in the formation of national identity during modernization, especially in the case of non-Western communities, who had to face the danger of losing their identities in the course of the 'Westernization' accompanying modernization.

Keywords: cultural identity, Japanese modernization, kokugaku, national awakening

870 Revealing Thermal Degradation Characteristics of Distinctive Oligo- and Polysaccharides of Prebiotic Relevance

Authors: Attila Kiss, Erzsébet Némedi, Zoltán Naár

Abstract:

As natural prebiotic (non-digestible) carbohydrates stimulate the growth of the colon microflora and contribute to maintaining the health of the host, analytical studies aiming to reveal the chemical behavior of these beneficial food components have come to the forefront of interest. Food processing (especially baking) may lead to a significant conversion of the parent compounds, hence it is of utmost importance to characterize the transformation patterns and the plausible decomposition products formed by thermal degradation. The relevance of this work is confirmed by the widespread use of these carbohydrates (fructo-oligosaccharides, cyclodextrins, raffinose and resistant starch) in the food industry. More and more functional foodstuffs are being developed based on prebiotics as bioactive components. Twelve different types of oligosaccharides were investigated in order to reveal their thermal degradation characteristics. Different carbohydrate derivatives (D-fructose and D-glucose oligomers and polymers) were exposed to elevated temperatures (150 °C, 170 °C, 190 °C, 210 °C, and 220 °C) for 10 min. An advanced HPLC method was developed and used to identify the decomposition products of carbohydrates formed as a consequence of the thermal treatment. Gradient elution was applied with a binary solvent system (acetonitrile, water) through an amine-based carbohydrate column. Evaporative light scattering (ELS) proved to be suitable for the reliable detection of the UV/VIS-inactive carbohydrate degradation products. These experimental conditions and advanced techniques made it possible to survey all the intermediates formed. A change in oligomer distribution was established for all studied prebiotics throughout the thermal treatments. The obtained results indicate an increased extent of chain degradation of the carbohydrate moiety at elevated temperatures. The prevalence of oligomers with shorter chain lengths, and even the formation of monomer sugars (D-glucose and D-fructose), could be observed at higher temperatures. Unique oligomer distributions, which have not been described previously, are revealed for each studied carbohydrate, which might result in various prebiotic activities. Resistant starches exhibited high stability under thermal treatment. The degradation process has been modeled by a plausible reaction mechanism in which proton-catalyzed degradation and chain cleavage take place.

Keywords: prebiotics, thermal degradation, fructo-oligosaccharide, HPLC, ELS detection

869 Development of Technologies for the Treatment of Nutritional Problems in Primary Care

Authors: Marta Fernández Batalla, José María Santamaría García, Maria Lourdes Jiménez Rodríguez, Roberto Barchino Plata, Adriana Cercas Duque, Enrique Monsalvo San Macario

Abstract:

Background: Primary care nursing is taking on more autonomy in clinical decisions. One of the most frequent care problems to be addressed relates to maintaining a sufficient intake of food. Nursing diagnoses related to food are addressed by the family and community nurse as the first responsible professional. Objectives and interventions are set according to each patient. To improve goal setting and the treatment of these care problems, a technological tool was developed to help nurses. Objective: To evaluate the computational tool developed to support clinical decisions in feeding problems. Material and methods: A cross-sectional descriptive study was carried out at the Meco Health Center, Madrid, Spain. The study population consisted of four specialist nurses in primary care. These nurses tested the tool on 30 people with a 'need for nutritional therapy'. Subsequently, the usability of the tool and the satisfaction of the professionals were assessed. Results: A simple and convenient computational tool was designed. It has three main input fields: age, height, and sex. The tool returns the following information: BMI (Body Mass Index) and the calories consumed by the person. The next step is the caloric calculation depending on activity. It is possible to propose a target BMI or weight to achieve; with this, the amount of calories to be consumed is proposed. After using the tool, it was determined that it calculated the BMI and calories correctly in 100% of clinical cases. Satisfaction with the nutritional assessment was 'satisfactory' or 'very satisfactory', linked to the speed of the calculations. As a point for improvement, users suggested 'stress factor' options linked to weekly physical activity. Conclusion: Based on the results, it is clear that computational decision-support tools are useful in the clinic. Nurses are not only consumers of computational tools but can develop their own tools. These technological solutions improve the effectiveness of nutrition assessment and intervention. We are currently working on improvements such as the calculation of protein percentages as a function of stress parameters.
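
As an illustration of the arithmetic such a tool performs, here is a minimal sketch: BMI = weight/height², and a daily caloric requirement obtained by scaling a resting energy estimate with an activity factor. The Mifflin-St Jeor equation and the activity factors are assumed choices for the sketch; the abstract does not state which formulas the tool uses.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight divided by the square of height."""
    return weight_kg / height_m ** 2

def daily_calories(weight_kg, height_cm, age, sex, activity_factor=1.2):
    """Estimated daily caloric requirement.

    Uses the Mifflin-St Jeor resting energy equation (an assumed choice;
    the tool in the abstract may use a different formula), scaled by an
    activity factor (1.2 sedentary .. 1.9 very active).
    """
    ree = 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if sex == "M" else -161)
    return ree * activity_factor

# Hypothetical patient, for illustration only.
w, h = 82.0, 1.70
print(f"BMI: {bmi(w, h):.1f}")                               # ~28.4
print(f"kcal/day: {daily_calories(w, 170, 45, 'M', 1.4):.0f}")
```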

Keywords: feeding behavior health, nutrition therapy, primary care nursing, technology assessment

868 Effect of Inoculum Ratio on Dark Fermentative Hydrogen Production

Authors: Zeynep Yilmazer Hitit, Patrick C. Hallenbeck

Abstract:

Fuel reserve requirements due to the depletion of fossil fuels have increased interest in biohydrogen since the 1990s. In fermentative hydrogen production, pure, mixed, and co-cultures can be used to produce hydrogen. Several previous studies have evaluated hydrogen production by pure cultures of Clostridium butyricum or Enterobacter aerogenes. Evaluating hydrogen production by a co-culture of these microorganisms is an interesting approach, since E. aerogenes is a facultative microorganism with resistance to oxygen, in contrast to the strict anaerobe C. butyricum, and therefore has the ability to maintain anaerobic conditions. It was found that using co-cultures of the facultative E. aerogenes (as a reducing agent and H2 producer) and the obligate anaerobe C. butyricum increases the hydrogen yield by about 50% compared to C. butyricum by itself. Also, using different types of microorganisms for hydrogen production eliminates the need for expensive reducing agents. The C. butyricum strain was pre-cultured anaerobically at 37 °C for 15 h by inoculating 100 mL of GP medium (pH 6.8) consisting of 1% glucose, 2% polypeptone, 0.2% KH2PO4, 0.05% yeast extract, and 0.05% MgSO4·7H2O, and the E. aerogenes strain was pre-cultured aerobically at 30 °C and 150 rpm for 9 h by inoculating 100 mL of TGY medium (pH 6.8) consisting of 0.1% glucose, 0.5% tryptone, 0.1% K2HPO4, and 0.5% yeast extract. All duplicate batch experiments were conducted in 100 mL bottles with different inoculum ratios of Clostridium butyricum to Enterobacter aerogenes (C:E), using 5x diluted rich medium (GP) consisting of 2 g/L glucose, 4 g/L polypeptone, 0.4 g/L KH2PO4, 0.1 g/L yeast extract, and 0.1 g/L MgSO4·7H2O. The inoculum ratios of C. butyricum to E. aerogenes were 2:1, 4:1, 8:1, 1:2, 1:4, 1:8, 1:0, and 0:1. Using glucose as the carbon source aided in the observation of microbial behavior as well as making the effect of the inoculum ratio more evident. Nearly all the glucose in the medium was used to produce hydrogen, except at the 1:0 inoculum ratio (i.e., containing only C. butyricum). Low glucose consumption leads to a higher hydrogen yield, since the yield relates cumulative hydrogen production to the glucose consumed, but not as high as at a C:E ratio of 8:1. The lowest hydrogen yield was achieved at a 1:8 C:E inoculum ratio (71.9 mL, 1.007±0.01 mol H2/mol glucose), and the highest cumulative hydrogen, hydrogen yield, and dry cell weight were achieved at an 8:1 C:E inoculum ratio (117.4 mL, 2.035±0.082 mol H2/mol glucose, and 0.4 g/L, respectively). In this study, the effect of the inoculum ratio on dark fermentative biohydrogen production using C. butyricum and E. aerogenes was investigated. The maximum hydrogen yield of 2.035 mol H2/mol glucose was obtained using 2 g/L glucose, an initial pH of 6, and a C. butyricum to E. aerogenes inoculum ratio of 8:1. The results showed that the inoculum ratio is an important parameter in hydrogen production, due to competition between the two microorganisms in using substrate for growth and the production of by-products. The results presented here could be of great significance for further waste management studies using co-culture hydrogen production.
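
The yield figures (mol H2/mol glucose) follow from converting the measured gas volume to moles with the ideal gas law and dividing by the moles of glucose consumed. A minimal sketch of that conversion is given below; the temperature, pressure, working volume, and inputs are assumed for illustration, since the abstract does not state the gas measurement conditions.

```python
R = 8.314            # gas constant, J/(mol*K)
M_GLUCOSE = 180.16   # molar mass of glucose, g/mol

def hydrogen_yield(v_h2_ml, glucose_g_per_l, working_volume_l,
                   temp_k=310.15, pressure_pa=101325.0):
    """mol H2 per mol glucose, assuming ideal gas behaviour and complete
    glucose consumption (both are simplifying assumptions for this sketch)."""
    mol_h2 = pressure_pa * (v_h2_ml * 1e-6) / (R * temp_k)
    mol_glucose = glucose_g_per_l * working_volume_l / M_GLUCOSE
    return mol_h2 / mol_glucose

# Purely illustrative inputs, not the study's exact conditions.
print(f"{hydrogen_yield(100.0, 2.0, 0.1):.2f} mol H2 / mol glucose")
```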

Keywords: biohydrogen, Clostridium butyricum, dark fermentation, Enterobacter aerogenes, inoculum ratio in biohydrogen production

867 Automatic and Highly Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems, this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Therefore, such models might not be applicable for the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the model must be adapted manually. Therefore, an approach is described that generates models overcoming the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general, since it generates models for any system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables and not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has been successfully tested in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute highly precise and real-time-adaptive data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods are illustrated with different examples.
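
A minimal sketch of the core idea, multivariate regression that also fits products of variables, can be written with scikit-learn's polynomial feature expansion. This is an illustration of the technique only; the authors' actual implementation and tooling are not described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Hypothetical sensor data: the target depends on a product term x1*x2,
# which a plain linear regression on single variables would miss.
X = rng.uniform(-1, 1, size=(500, 2))
y = 2.0 * X[:, 0] + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.01, 500)

# A degree-2 expansion adds x1^2, x1*x2, x2^2 alongside x1 and x2,
# mirroring a truncated series expansion of the unknown system law.
features = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(features.fit_transform(X), y)

print(dict(zip(features.get_feature_names_out(["x1", "x2"]),
               model.coef_.round(3))))  # recovers the x1 and x1*x2 terms
```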

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

866 Model-Based Diagnostics of Multiple Tooth Cracks in Spur Gears

Authors: Ahmed Saeed Mohamed, Sadok Sassi, Mohammad Roshun Paurobally

Abstract:

Gears are important machine components that are widely used to transmit power and change speed in many rotating machines. Any breakdown of these vital components may cause severe disturbance to production and incur heavy financial losses. One of the most common causes of gear failure is the tooth fatigue crack. Early detection of tooth cracks is still a challenging task for engineers and maintenance personnel. So far, to analyze the vibration behavior of gears, different approaches have been tried based on theoretical developments, numerical simulations, or experimental investigations. The objective of this study was to develop a numerical model that could be used to simulate the effect of tooth cracks on the resulting vibrations and hence to permit early fault detection for gear transmission systems. Unlike the majority of published papers, where only one single crack has been considered, this work is more realistic, since it incorporates the possibility of multiple simultaneous cracks with different lengths. As cracks significantly alter the gear mesh stiffness, we performed a finite element analysis using SolidWorks software to determine the stiffness variation with respect to the angular position for different combinations of crack lengths. A simplified six-degrees-of-freedom non-linear lumped-parameter model of a one-stage gear system is proposed to study the vibration of a pair of spur gears, with and without tooth cracks. The model takes several physical properties into account, including the variable gear mesh stiffness and the effect of friction, but ignores the lubrication effect. The vibration simulation results of the gearbox were obtained via Matlab and Simulink. The results were found to be consistent with those from previously published works. The effect of a single crack at different severity levels was studied, and the resulting changes in total mesh stiffness and vibration response were observed and compared with what has been found in previous studies. The effect of the crack length on various statistical time-domain parameters was considered, and the results show that these parameters were not equally sensitive to the crack percentage. Finally, multiple cracks were introduced at different locations, and the vibration response and the statistical parameters were obtained.
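
To make the modeling approach concrete, here is a heavily simplified single-degree-of-freedom sketch along the line of action, with a sinusoidal mesh stiffness and a once-per-revolution crack-induced stiffness drop. All parameter values are invented, and the authors' six-degrees-of-freedom Matlab/Simulink model is far more detailed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Invented parameters for illustration (not the paper's values).
m = 0.5          # equivalent mass along the line of action, kg
c = 80.0         # mesh damping, N*s/m
k_mean = 2e8     # mean gear mesh stiffness, N/m
k_var = 0.2      # relative stiffness fluctuation over a mesh cycle
f_mesh = 600.0   # gear mesh frequency, Hz
f_rot = 25.0     # pinion rotation frequency, Hz
drop = 0.3       # stiffness drop while the cracked tooth is in mesh
F = 500.0        # static transmitted force, N

def mesh_stiffness(t):
    """Time-varying mesh stiffness with a periodic crack-induced drop."""
    k = k_mean * (1 + k_var * np.sin(2 * np.pi * f_mesh * t))
    # once per revolution the cracked tooth engages for ~one mesh period
    if (t * f_rot) % 1.0 < f_rot / f_mesh:
        k *= (1 - drop)
    return k

def rhs(t, y):
    x, v = y
    return [v, (F - c * v - mesh_stiffness(t) * x) / m]

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.0], max_step=1e-5)
x = sol.y[0]
# Statistical time-domain indicators of the kind compared in the study.
print("RMS:", np.sqrt(np.mean(x ** 2)))
print("kurtosis:", np.mean((x - x.mean()) ** 4) / np.var(x) ** 2)
```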

Keywords: dynamic simulation, gear mesh stiffness, simultaneous tooth cracks, spur gear, vibration-based fault detection

865 Mapping the Early History of Common Law Education in England, 1292-1500

Authors: Malcolm Richardson, Gabriele Richardson

Abstract:

This paper illustrates how historical problems can be studied successfully using GIS, even in cases in which data, in the modern sense, is fragmentary. The overall problem under investigation is how early (1300-1500) English schools of Common Law moved from apprenticeship training in random individual London inns, run in part by clerks of the royal chancery, to become what is widely called 'the Third University of England', a recognized system of independent but connected legal inns. This paper focuses on the preparatory legal inns, called the Inns of Chancery, rather than the senior (and still existing) Inns of Court. The immediate problem studied in this paper is how the junior legal inns were organized, staffed, and located from 1292 to about 1500, and what maps tell us about the role of the chancery clerks as managers of legal inns. The authors first uncovered the names of all chancery clerks of the period, most of them unrecorded in histories, from archival sources in the National Archives, Kew. They then matched the names with London property leases. Using ArcGIS, the legal inns and their owners were plotted on a series of maps covering the period 1292 to 1500. The results show a distinct pattern of ownership of the legal inns and suggest a narrative that would help explain why the Inns of Chancery became serious centers of learning during the fifteenth century. In brief, lower-ranking chancery clerks, always looking for sources of income, discovered by 1370 that legal inns could provide one. Since chancery clerks were intimately involved with writs and other legal forms, and since the chancery itself had a long-standing training system, these clerks opened their own legal inns to train fledgling lawyers, estate managers, and scriveners. The maps clearly show growth patterns of ownership by the chancery clerks of both legal inns and other London properties in the areas of Holborn and The Strand in the decades up to 1417. However, the maps also show that a royal ordinance of 1417 forbidding chancery clerks to live with lawyers, law students, and other non-chancery personnel had an immediate effect, and leases of properties in that area of London by chancery clerks simply stop after 1417. The long-term importance of the patterns shown in the maps is that while the presence of chancery clerks in the legal inns likely created a more coherent education system, their removal forced the legal profession, suddenly without a hostelry managerial class, to professionalize the inns and legal education themselves. Given the number and social status of members of the legal inns, the effect on English education was to free legal education from the limits of chancery clerk education (the clerks were not practicing common lawyers) and to enable it to become broader in theory and practice, in fact, a kind of 'finishing school' for the governing (if not noble) class.

Keywords: GIS, law, London, education

864 Preliminary Seismic Vulnerability Assessment of Existing Historic Masonry Buildings in Pristina, Kosovo

Authors: Florim Grajcevci, Flamur Grajcevci, Fatos Tahiri, Hamdi Kurteshi

Abstract:

The territory of Kosovo lies in one of the most seismically active regions in Europe. Earthquakes are therefore not rare in Kosovo, and when they have occurred, the consequences have been rather destructive. The importance of assessing the seismic resistance of existing masonry structures has drawn strong and growing interest in recent years. Engineering assessments, including those of vulnerability, building loss, and risk, are also of particular interest. This is due to the fact that this rapidly developing field is related to the great impact of earthquakes on socioeconomic life in seismic-prone areas, as Kosovo and Pristina are. Such work for the city of Pristina may serve as a real basis for possible interventions in historic buildings, such as museums, mosques, and old residential buildings, in order to adequately strengthen and/or repair them, reducing the seismic risk to within acceptable limits. The procedures for the vulnerability assessment of building structures concentrate on the structural system, capacity, layout shape, and response parameters. These parameters indicate the expected performance of the very important existing building structures in terms of vulnerability and overall behavior during earthquake excitations. The structural systems of existing historical buildings in Pristina, Kosovo, are predominantly unreinforced brick or stone masonry, with very high risk potential from the earthquakes expected in the region. Therefore, statistical analysis based on the observed damage (deformation, cracks, deflections) and critical building elements would provide more reliable and accurate results for regional assessments. An analytical technique was used to develop a preliminary evaluation methodology for assessing the seismic vulnerability of the respective structures. One of the main objectives is also to identify the buildings that are highly vulnerable to damage caused by inadequate seismic performance. Hence, the damage scores obtained from the derived vulnerability functions will be used to categorize the evaluated buildings as 'stable', 'intermediate', or 'unstable'. The vulnerability functions are generated based on the basic damage-inducing parameters, namely the number of stories (S), lateral stiffness (LS), capacity curve of the total building structure (CCBS), interstory drift (IS), and overhang ratio (OR).
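
As an illustration of the final categorization step only, the sketch below combines some of the named parameters into a damage score and bins it into the three categories. The weights, normalization, and thresholds are entirely hypothetical, since the abstract does not give the derived vulnerability functions.

```python
def damage_score(stories, lateral_stiffness, interstory_drift, overhang_ratio,
                 weights=(0.3, 0.3, 0.3, 0.1)):
    """Weighted damage score from inputs normalized to [0, 1].

    Higher = more vulnerable. The weights are placeholders, not the
    paper's fitted vulnerability-function coefficients.
    """
    w_s, w_ls, w_d, w_o = weights
    return (w_s * stories + w_ls * (1 - lateral_stiffness)
            + w_d * interstory_drift + w_o * overhang_ratio)

def categorize(score):
    """Bin a damage score into the three categories named in the abstract."""
    if score < 0.33:
        return "stable"
    if score < 0.66:
        return "intermediate"
    return "unstable"

print(categorize(damage_score(0.6, 0.4, 0.7, 0.5)))  # hypothetical building
```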

Keywords: vulnerability, ductility, seismic microzone, energy efficiency

863 A Study of Emotional Intelligence and Adjustment of Senior Secondary School Students in District Karnal, Haryana, India

Authors: Rooma Rani

Abstract:

Education is really important for the improvement of the physical and mental well-being of school students. It is used to express inner potential, acquire knowledge, develop skills, and shape habits, attitudes, values, and beliefs, along with providing people with strength and resilience in changing situations and allowing them to develop the capacities that enable individuals to manage their surrounding environment. Education has a significant effect on the behavior of individuals, which helps us in the new situations of everyday life. Educating the child means directing the child's capacities, attitudes, interests, urges, and needs into the most desirable channels. In the 21st century, emotional intelligence is now considered more important than general intelligence for the success of a person. Success depends on several intelligences and on the control of emotions too. Emotional intelligence, like general intelligence, is the product of one's heredity and its interaction with environmental forces. Certain methods have evolved in modern research; keeping in view the nature and purpose of this study, the descriptive survey method was preferred. This method is one of the important methods in education research because it describes the current position of the phenomenon under study. The term descriptive survey is generally used for the type of research that reports on the conditions and practices of the present time. In the present study, a systematic random sampling method was used to select a representative sample. Fifty students were selected from two schools; of these, 25 were boys and 25 were girls. In the study, a) a significant difference was found in the level of adjustment between male and female students; b) a non-significant difference was found in the level of emotional intelligence between male and female students; c) a non-significant relationship was found between adjustment and emotional intelligence among male students; and d) a significant relationship was found between adjustment and emotional intelligence among female students. The results of the study indicated that students who obtain high scores on emotional intelligence tests also show a high level of adjustment. Measures should be adopted to improve and sustain the emotional intelligence level of students throughout their studies. Adolescent students are prone to many physical, social, and psychological problems. They need a congenial home atmosphere so that they grow into full-fledged citizens of our country. Understanding these issues helps in the development of personality, which leads to a better learning situation and better thinking capacities and, in turn, enhances adjustment and achievement along with a better perception of self.

Keywords: adjustment, education, emotional intelligence, students

862 Bi-Component Particle Segregation Studies in a Spiral Concentrator Using Experimental and CFD Techniques

Authors: Prudhvinath Reddy Ankireddy, Narasimha Mangadoddy

Abstract:

Spiral concentrators are commonly used in various industries, including mineral and coal processing, to efficiently separate materials based on their density and size. In these concentrators, a mixture of solid particles and fluid (usually water) is introduced as feed at the top of a spiral channel. As the mixture flows down the spiral, centrifugal and gravitational forces act on the particles, causing them to stratify based on their density and size. Spiral flows exhibit complex fluid dynamics, and the interactions involve multiple phases and components. Understanding the behavior of these phases within the spiral concentrator is crucial for achieving efficient separation. In this work, an experimental bi-component particle interaction study is conducted utilizing magnetite (higher density) and silica (lower density) in different proportions processed in the spiral concentrator. Observation of the separation reveals that denser particles accumulate towards the inner region of the spiral trough, while a significant concentration of lighter particles is found close to the outer edge. The 5th turn of the spiral trough is partitioned into five zones to achieve a comprehensive distribution analysis of bi-component particle segregation. Samples are then gathered from these individual streams using an in-house sample collector, and subsequent analysis is conducted to assess component segregation. Along the trough, there was a decline in the concentration of coarser particles, accompanied by an increase in the concentration of lighter particles. The segregation pattern indicates that the heavier coarse component accumulates in the inner zone, whereas the lighter fine component collects in the outer zone. The middle zone primarily consists of heavier fine particles and lighter coarse particles. The zone-wise results reveal that a significant fraction of the segregation occurs in the inner and middle zones. Finer magnetite and silica particles predominantly accumulate in the outer zones, with the smallest fraction of segregation. Additionally, numerical simulations are carried out using a computational fluid dynamics (CFD) model based on the volume of fluid (VOF) approach, incorporating the RSM turbulence model. The discrete phase model (DPM) is employed for particle tracking, thereby capturing the segregation of magnetite and silica along the spiral trough.

Keywords: spiral concentrator, bi-component particle segregation, computational fluid dynamics, discrete phase model

861 Using Fractal Architectures for Enhancing the Thermal-Fluid Transport

Authors: Surupa Shaw, Debjyoti Banerjee

Abstract:

Enhancing heat transfer in compact volumes is a challenge when constrained by cost issues, especially those associated with requirements for minimizing pumping power consumption. This is particularly acute for electronic chip cooling applications. Technological advancements in microelectronics have led to the development of chip architectures that involve increased power consumption. As a consequence, packaging technologies are saddled with the need for higher rates of power dissipation in smaller form factors. The increasing circuit density, higher heat flux values for dissipation, and the significant decrease in the size of electronic devices are posing thermal management challenges that need to be addressed with a better design of the cooling system. Maximizing the surface area of heat-exchanging surfaces (e.g., extended surfaces or 'fins') can enable the dissipation of higher levels of heat flux. Fractal structures have been shown to maximize surface area in compact volumes. Self-replicating structures at multiple length scales are called 'fractals', i.e., objects with fractional dimensions, unlike regular geometric objects such as spheres or cubes, whose volume and surface area scale as integer powers of the length scale. Fractal structures are expected to provide an appropriate technology solution to meet these challenges for enhanced heat transfer in microelectronic devices by maximizing the surface area available to heat-exchanging fluids within compact volumes. In this study, the effect of different fractal micro-channel architectures and flow structures on the enhancement of transport phenomena in heat exchangers is explored by parametric variation of the fractal dimension. This study proposes a model that would enable cost-effective solutions for thermal-fluid transport in energy applications. The objective of this study is to ascertain the sensitivity of various parameters (such as heat flux and pressure gradient, as well as pumping power) to variation in the fractal dimension. The role of the fractal parameters will be instrumental in establishing the most effective design for the optimum cooling of microelectronic devices. This can help establish the minimum pumping power required for a given enhancement of heat transfer during cooling. Results obtained in this study show that the proposed fractal microchannel architectures significantly enhanced heat transfer due to the augmentation of surface area in the branching networks of varying length scales.
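
The surface-area argument can be illustrated with a simple self-similar branching network: if each channel splits into b children whose radius and length shrink by a factor r, the network's fractal dimension is D = ln b / ln(1/r), and the wetted area can be summed over generations. A minimal sketch under these assumptions (not the paper's specific geometry):

```python
import math

def branching_surface_area(r0, l0, b, scale, generations):
    """Total wetted lateral area of a self-similar branching channel network.

    Generation g has b**g channels of radius r0*scale**g and length
    l0*scale**g; the fractal dimension of the network is ln(b)/ln(1/scale).
    """
    return sum(
        (b ** g) * 2 * math.pi * (r0 * scale ** g) * (l0 * scale ** g)
        for g in range(generations + 1)
    )

r0, l0 = 1e-3, 1e-2   # hypothetical 1 mm radius, 10 mm long root channel
scale = 0.5
for b in (2, 3, 4):   # more branching per node -> higher fractal dimension
    D = math.log(b) / math.log(1 / scale)
    area = branching_surface_area(r0, l0, b, scale, generations=6)
    print(f"b={b}, D={D:.2f}, wetted area={area * 1e6:.0f} mm^2")
```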

Keywords: fractals, microelectronics, constructal theory, heat transfer enhancement, pumping power enhancement

860 Study of Bis(Trifluoromethylsulfonyl)Imide Based Ionic Liquids by Gas Chromatography

Authors: F. Mutelet, L. Cesari

Abstract:

The development of safer and more environmentally friendly processes and products is needed to achieve sustainable production and consumption patterns, and ionic liquids, which are of great interest to the chemical and related industries because of their attractive properties as solvents, should be considered. Ionic liquids are composed of an asymmetric, bulky organic cation and a weakly coordinating organic or inorganic anion. The large number of possible combinations makes it possible to 'fine-tune' the solvent properties for a specific purpose. The physical and chemical properties of ionic liquids are influenced not only by the nature of the cation and of the cation substituents but also by the polarity and the size of the anion. These features give ionic liquids numerous applications in organic synthesis, separation processes, and electrochemistry. Separation processes require a good knowledge of the behavior of organic compounds in ionic liquids. Gas chromatography is a useful tool for estimating the interactions between organic compounds and ionic liquids. Indeed, retention data may be used to determine infinite-dilution thermodynamic properties of volatile organic compounds in ionic liquids. Among others, the activity coefficient at infinite dilution is a direct measure of the solute-ionic liquid interaction. In this work, infinite-dilution thermodynamic properties of volatile organic compounds in specific bis(trifluoromethylsulfonyl)imide-based ionic liquids, measured by gas chromatography, are presented. It was found that apolar compounds are not miscible in this family of ionic liquids. As expected, the solubility of organic compounds is related to their polarity and hydrogen-bonding ability. Using the activity coefficient data, the performance of these ionic liquids was evaluated for different separation processes (benzene/heptane, thiophene/heptane, and pyridine/heptane). The results indicate that ionic liquids may be used for the extraction of polar compounds (aromatics, alcohols, pyridine, thiophene, tetrahydrofuran) from aliphatic media. For example, 1-benzylpyridinium bis(trifluoromethylsulfonyl)imide and 1-cyclohexylmethyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide are more efficient for the extraction of aromatics or pyridine from aliphatics than classical solvents. Ionic liquids with long alkyl chains present high capacity values, but their selectivity values are low. In conclusion, we have demonstrated that specific bis(trifluoromethylsulfonyl)imide-based ILs containing a polar chain grafted on the cation (for example, benzyl or cyclohexyl) perform considerably better in separation processes.
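
Judging separation performance from such measurements is a standard calculation: for separating solute 2 (e.g., benzene) from solute 1 (e.g., heptane), the infinite-dilution selectivity is S12 = γ1/γ2 and the capacity is k2 = 1/γ2. A minimal sketch with hypothetical activity coefficients (not the measured values of this work):

```python
def selectivity(gamma_inf_1: float, gamma_inf_2: float) -> float:
    """Infinite-dilution selectivity S12 = gamma1 / gamma2 for separating
    solute 2 (e.g. benzene) from solute 1 (e.g. heptane) with an ionic liquid."""
    return gamma_inf_1 / gamma_inf_2

def capacity(gamma_inf_2: float) -> float:
    """Infinite-dilution capacity k2 = 1 / gamma2 of the solvent for solute 2."""
    return 1.0 / gamma_inf_2

# Hypothetical activity coefficients at infinite dilution: heptane is poorly
# soluble in the ionic liquid (large gamma), benzene much less so.
gamma_heptane, gamma_benzene = 25.0, 1.2
print(f"S12 = {selectivity(gamma_heptane, gamma_benzene):.1f}, "
      f"k2 = {capacity(gamma_benzene):.2f}")
```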

Keywords: organic solvent-ionic liquid interactions, gas chromatography, solvation model, COSMO-RS

859 Inter-Personal and Inter-Organizational Relationships in Supply Chain Integration: A Resource Orchestration Perspective

Authors: Bill Wang, Paul Childerhouse, Yuanfei Kang

Abstract:

Purpose: This research extends resource orchestration theory (ROT) into the supply chain management (SCM) area to investigate dyadic relationships at both the individual and organizational levels in supply chain integration (SCI). We also explore the interaction mechanism between inter-personal relationships (IPRs) and inter-organizational relationships (IORs) during the whole SCI process. Methodology/approach: The research employed an exploratory multiple case study approach covering four New Zealand companies. The data were collected via semi-structured interviews with top, middle, and lower-level managers and operators from different departments of both suppliers and customers, triangulated with company archival data. Findings: The research highlights the important role of both IPRs and IORs in the whole SCI process. Both IPRs and IORs are valuable, inimitable resources, but IORs are formal and exterior while IPRs are informal and subordinate. In the initial stage of the SCI process, IPRs are seen as key resource antecedents to IOR building, and the three IPR dimensions work differently: personal credibility acts as an icebreaker to strengthen the confidence forming IORs, personal affection acts as a gatekeeper, whilst personal communication expedites the IOR process. In the maintenance and development stage, IORs and IPRs interact with each other continuously: good interaction between IPRs and IORs can facilitate the SCI process, while bad interaction between them can damage it. On the other hand, during the life-cycle of the SCI process, IPRs can facilitate the formation and development of IORs, while IOR development can cultivate the ties of IPRs. Of the three dimensions of IPRs, personal communication plays a more important role in developing IORs than personal credibility and personal affection. Originality/value: This research contributes to ROT in the supply chain management literature by highlighting the interaction of IPRs and IORs in SCI. The intangible resources and capabilities of the three dimensions of IPRs need to be orchestrated and nurtured to achieve efficient and effective IORs in SCI. Also, IPRs and IORs need to be orchestrated in terms of the breadth, depth, and life-cycle of the whole SCI process. Our study provides further insight into the rarely explored inter-personal level of SCI. Managerial implications: Our research provides top management with further evidence of the significant roles of IPRs at different levels when working with trading partners. This highlights the need to actively manage and develop these soft IPR skills as an intangible competitive resource. Further, the research identifies when staff with specific skills and connections should be utilized during the different stages of building and maintaining inter-organizational ties. More importantly, top management needs to orchestrate and balance the resources of IPRs and IORs.

Keywords: case study, inter-organizational relationships, inter-personal relationships, resource orchestration, supply chain integration

858 Investigating the Impact of Task Demand and Duration on Passage of Time Judgements and Duration Estimates

Authors: Jesika A. Walker, Mohammed Aswad, Guy Lacroix, Denis Cousineau

Abstract:

There is a fundamental disconnect between the experience of time passing and the chronometric units by which time is quantified. Specifically, there appears to be no relationship between passage of time judgments (PoTJs) and verbal duration estimates at short durations (e.g., < 2000 milliseconds). When a duration is longer than several minutes, however, evidence suggests that a slower feeling of time passing is predictive of overestimation. Might the length of a task moderate the relation between PoTJs and duration estimates? Similarly, the estimation paradigm (prospective vs. retrospective) and the mental effort demanded by a task (task demand) have both been found to influence duration estimates. However, only a handful of experiments have investigated these effects for tasks of long durations, and the results have been mixed. Thus, might the length of a task also moderate the effects of the estimation paradigm and task demand on duration estimates? To investigate these questions, 273 participants performed either an easy or a difficult visual and memory search task for either eight or 58 minutes, under prospective or retrospective instructions. Afterward, participants provided a duration estimate in minutes, followed by a PoTJ on a Likert scale (1 = very slow, 7 = very fast). A 2 (prospective vs. retrospective) × 2 (eight minutes vs. 58 minutes) × 2 (high vs. low difficulty) between-subjects ANOVA revealed a two-way interaction between task demand and task duration on PoTJs, p = .02. Specifically, time felt faster in the more challenging task, but only in the eight-minute condition, p < .01. Duration estimates were transformed into RATIOs (estimate/actual duration) to standardize estimates across durations. An ANOVA revealed a two-way interaction between estimation paradigm and task duration, p = .03. Specifically, participants overestimated the task more if they were given prospective instructions, but only in the eight-minute task. Surprisingly, there was no effect of task difficulty on duration estimates. Thus, the demands of a task may influence the feeling of time and the estimation of time differently, supporting the existing theory that these two forms of time judgement rely on separate underlying cognitive mechanisms. Finally, a significant main effect of task duration was found for both PoTJs and duration estimates (ps < .001). Participants underestimated the 58-minute task (m = 42.5 minutes) and overestimated the eight-minute task (m = 10.7 minutes). Yet they reported the 58-minute task as passing significantly more slowly on the Likert scale (m = 2.5) compared to the eight-minute task (m = 4.1). In fact, a significant correlation was found between PoTJs and duration estimates (r = .27, p < .001). This experiment thus provides evidence for a compensatory effect at longer durations, in which people underestimate a 'slow-feeling' condition and overestimate a 'fast-feeling' condition. The results are discussed in relation to heuristics that might alter the relationship between these two variables when conditions range from several minutes up to almost an hour.
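
A sketch of the analysis pipeline described above: estimates are converted to RATIO = estimate/actual duration and submitted to a between-subjects ANOVA. The data are invented, and statsmodels is used here as an assumed tool; the original analysis software is not stated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 40  # hypothetical participants, randomly assigned to the 2x2x2 cells

df = pd.DataFrame({
    "paradigm": rng.choice(["prospective", "retrospective"], n),
    "duration": rng.choice([8, 58], n),
    "demand": rng.choice(["high", "low"], n),
})
# Invented estimates in minutes; RATIO = 1.0 means a perfectly accurate estimate.
df["estimate"] = df["duration"] * rng.normal(1.1, 0.25, n)
df["ratio"] = df["estimate"] / df["duration"]

model = smf.ols("ratio ~ paradigm * C(duration) * demand", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # interaction rows mirror the reported tests
```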

Keywords: duration estimates, long durations, passage of time judgements, task demands

857 Data Analytics in Hospitality Industry

Authors: Tammy Wee, Detlev Remy, Arif Perdana

Abstract:

In recent years, data analytics has become the buzzword in the hospitality industry. The hospitality industry is another example of a data-rich industry that has yet to fully benefit from the insights of data analytics. Effective use of data analytics can change how hotels operate, market, and position themselves competitively in the hospitality industry. At the moment, however, the data obtained by individual hotels remain under-utilized. This is a preliminary study of data analytics in the hospitality industry, using an in-depth face-to-face interview at one hotel as the start of a multi-level research project. The main case study of this research, hotel A, is an international chain hotel that has been systematically gathering and collecting data on its own customers for the past five years. The data collection points begin from the moment a guest books a room until the guest leaves the hotel premises, and include room reservation, spa booking, and catering. Although hotel A has been gathering data intelligence on its customers for some time, it has yet to utilize the data to their fullest potential, and it is aware of this limitation as well as of the potential of data analytics. Currently, the utilization of data analytics in hotel A is limited to the area of customer service improvement, namely to enhance the personalization of service for each individual customer. Hotel A is able to utilize the data to improve and enhance its service, which in turn encourages repeat customers. According to hotel A, 50% of its guests returned to the hotel, and 70% extended their stays, because of the personalized service. Apart from using data analytics to enhance customer service, hotel A also uses the data in marketing. Hotel A uses data analytics to predict or forecast changes in consumer behavior and demand by tracking its guests' booking preferences, payment preferences, and demand shifts between properties. However, hotel A admitted that the data it has been collecting are not fully utilized, due to two challenges. The first challenge of using data analytics in hotel A is that the data are not clean. At the moment, the data collected in one guest profile are meaningful for one department in the hotel but meaningless for another. Cleaning up the data and getting the standards right for usage by different departments are among the main concerns of hotel A. The second challenge of using data analytics in hotel A is the non-integrated internal systems. At the moment, the internal systems used by hotel A do not integrate with each other well, limiting the ability to collect data systematically. Hotel A is considering another system to replace the current one for more comprehensive data collection. Hotel proprietors recognize the potential of data analytics, as reported in this research; however, the current challenges of implementing a system to collect data come with a cost. This research has identified the current utilization of data analytics and the challenges faced when it comes to implementing data analytics.

Keywords: data analytics, hospitality industry, customer relationship management, hotel marketing

Procedia PDF Downloads 174
856 Heating Demand Reduction in Single Family Houses Community through Home Energy Management: Putting Users in Charge

Authors: Omar Shafqat, Jaime Arias, Cristian Bogdan, Björn Palm

Abstract:

Heating constitutes a major part of the overall energy consumption in Sweden. In 2013, heating and hot water accounted for about 55% of the total energy use in the housing sector. Historically, end users have not been able to make a significant impact on their consumption, on account of traditional control systems that do not facilitate interaction with and control of the heating systems. In recent years, however, internet-connected home energy management systems that allow users to visualize indoor temperatures and control the heating system have become increasingly available, although their adoption is still in its nascent stages. This paper presents the outcome of a study carried out in a community of single-family houses in Stockholm. Heating in the area is provided through district heating, and the neighbourhood is connected through a local micro thermal grid, which is owned and operated by the local community. Heating in the houses is accomplished through a hydronic system equipped with radiators. The installed system allows households to control the indoor temperature through a mobile application as well as through a physical thermostat. It was also possible to program the system to, for instance, lower the temperatures during night time and when the users were away. Users could monitor the indoor temperatures through the application and could create different zones in the house, each with its own individual programming. Historical heating data (in the form of billing data) were available for several previous years and were used for the quantitative analysis in the study, after the necessary normalization for weather variations. The experiment involved 30 households out of a community of 178 houses. The area was selected due to its uniform construction profile. It was observed that, despite similar design and construction period, there was a large variation in heating energy consumption in the area, which can for a large part be attributed to user behaviour. The paper also presents a qualitative analysis based on survey questions as well as a focus group carried out with the participants. Overall, considerable energy savings were achieved during the trial; however, there was substantial variation between the participating households. The paper additionally presents recommendations to improve the impact of home energy management systems for heating, in terms of improving user engagement and hence the energy impact.
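The abstract only states that the billing data were normalized for weather variations. One common way to do this is degree-day normalization, sketched below; the method choice and the 17 °C base temperature are assumptions for illustration, not details given by the authors.

```python
# A minimal sketch of degree-day weather normalization for heating billing
# data. The degree-day method and the 17 C base are assumptions; the abstract
# only says data were "normalized for weather variations".
import numpy as np

def heating_degree_days(daily_mean_temp_c, base_c=17.0):
    """Sum of (base - T) over days colder than the base temperature."""
    t = np.asarray(daily_mean_temp_c, dtype=float)
    return float(np.maximum(base_c - t, 0.0).sum())

def weather_normalized_use(measured_kwh, hdd_actual, hdd_normal_year):
    """Scale one season's measured heating use to a normal climate year."""
    return measured_kwh * hdd_normal_year / hdd_actual

# Illustrative numbers: a mild winter compared with a long-term normal year.
hdd_mild = heating_degree_days(np.full(365, 8.0))  # 365 days at 8 C -> 3285
print(weather_normalized_use(measured_kwh=12000, hdd_actual=hdd_mild,
                             hdd_normal_year=3800))  # ~13881 kWh
```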

Keywords: energy efficiency in buildings, energy behavior, heating control system, home energy management system

Procedia PDF Downloads 169
855 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People

Authors: James Boylan, Denny Meyer, Won Sun Chen

Abstract:

Background: Globally, there is an increasing trend of road traffic deaths, reaching 1.35 million in 2016 in comparison to 1.3 million a decade earlier, and overall, road traffic injuries are ranked as the eighth leading cause of death across all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice the rate reported for older drivers aged 25 and above, at 3.5 road traffic fatalities per annum for every 10,000 licenses held. Telematics refers to a system with the ability to capture real-time data about vehicle usage. The data collected from telematics can be used to better assess a driver's risk; it is typically used to measure acceleration, turning, braking, and speed, as well as to provide locational information. With the creation of the National Telematics Framework, the Australian government has increased its focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour relate to improvements in road safety attitudes as measured by the Driving Behaviour Questionnaire (DBQ). Methodology: 28 participants were recruited and given a telematics device to insert into their vehicles for the duration of the study. Each participant's driving behaviour over the first month will be compared to their driving behaviour in the second month, to determine whether feedback from telematics devices improves driving behaviour. Participants completed the DBQ, evaluated on a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning of the study, after the first month, and after the second month. This is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with changes in the DBQ using regression models in SAS. Results: The DBQ has provided a reliable measure (alpha = .823) of driving behaviour based on a sample of 23 participants, with an average of 50.5, a standard deviation of 11.36, and a range of 29 to 76, with higher scores indicating worse driving behaviours. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that in the next six weeks, a larger sample of around 40 will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing a comparison with baseline levels. Trends in the telematics data over the first 30 days will be compared with the changes observed in the DBQ. Conclusions: It is expected that there will be a significant relationship between improvements in the DBQ and trends of reduced telematics-measured aggressive driving behaviour, supporting the hypothesis.
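The reliability figure cited (alpha = .823) is Cronbach's alpha over the questionnaire items. The sketch below shows the standard computation; it is not the study's SAS code, and the response matrix is randomly generated for illustration only.

```python
# A minimal sketch (not the study's SAS code) of the DBQ reliability check:
# Cronbach's alpha over item responses.
import numpy as np

def cronbach_alpha(items):
    """items: (n_participants, n_items) array of 0-5 Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative data: 23 participants x 20 items (the item count is assumed).
rng = np.random.default_rng(0)
responses = rng.integers(0, 6, size=(23, 20))
print(f"alpha = {cronbach_alpha(responses):.3f}")  # the study reports .823
```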

Keywords: telematics, driving behavior, young drivers, driving behaviour questionnaire

Procedia PDF Downloads 102
854 Applying the View of Cognitive Linguistics to Teaching and Learning English at UFLS - UDN

Authors: Tran Thi Thuy Oanh, Nguyen Ngoc Bao Tran

Abstract:

In the view of Cognitive Linguistics (CL), human beings draw on knowledge and experience of things and events when expressing concepts, especially in daily life. The human conceptual system is considered to be fundamentally metaphorical in nature. It is also said that the way we think, what we experience, and what we do every day is very much a matter of language. In fact, language is an integral factor of cognition, and CL is a family of broadly compatible theoretical approaches sharing this fundamental assumption. The relationship between language and thought has, of course, been addressed by many scholars; CL, however, strongly emphasizes specific features of this relation. Through experience we receive knowledge of life: familiar, concrete domains serve as source domains, and we make use of aspects of these domains to understand abstract targets metaphorically. This paper reports on applying this theory to pragmatics lessons for English-major students at the University of Foreign Language Studies - The University of Da Nang (UFLS - UDN), Viet Nam. We conducted the study with two groups of third-year students taking English pragmatics lessons. To clarify the study, data from these two classes were collected and analyzed from the linguistic perspectives of CL and of traditional concepts. Descriptive, analytic, synthetic, comparative, and contrastive methods were employed to analyze data from 50 students undergoing English pragmatics lessons. One group was taught how to transfer the meanings of expressions in daily life in the view of CL, while the other group used the traditional view. The research indicated that both approaches had a significant influence on students' English translating and interpreting abilities. However, the traditional approach had little effect on students' understanding, whereas the CL view had a considerable impact. The study compared the CL and traditional teaching approaches to identify benefits and challenges associated with incorporating CL into the curriculum. It seeks to extend CL concepts by analyzing metaphorical expressions in daily conversations, offering insights into how CL can enhance language learning. The findings shed light on the effectiveness of applying CL in teaching and learning English pragmatics. They highlight the advantages of using metaphorical expressions from daily life to facilitate understanding, and explore how CL can enhance cognitive processes in language learning in general, and in teaching English pragmatics to third-year students at UFLS - UDN, Vietnam, in particular. The study contributes to the theoretical understanding of the relationship between language, cognition, and learning. By emphasizing the metaphorical nature of human conceptual systems, it offers insights into how CL can enrich language teaching practices and enhance students' comprehension of abstract concepts.

Keywords: cognitive linguistics, Lakoff and Johnson, pragmatics, UFLS

Procedia PDF Downloads 31
853 Generalized Up-downlink Transmission using Black-White Hole Entanglement Generated by Two-level System Circuit

Authors: Muhammad Arif Jalil, Xaythavay Luangvilay, Montree Bunruangses, Somchat Sonasang, Preecha Yupapin

Abstract:

Black and white holes form the entangled pair ⟨BH│WH⟩, where a white hole occurs when the particle moves at the same speed as light. The entangled black-white hole pair is at the center, with a radian between them across the gap. When the particle moves more slowly than light, the black hole is gravitational (positive gravity) and the white hole is smaller than the black hole. On the downstream side, the black hole outside the gap appears to grow until the white hole disappears, which is the emptiness paradox. On the upstream side, when the particle moves faster than light, the white hole forms a time tunnel and the black hole becomes smaller. As the particle continues to move faster and further, the black hole disappears and becomes a wormhole (singularity), leaving only a white hole in emptiness. This research studies the use of black and white holes generated by a two-level system circuit as carriers for communication transmission, from which high capability and capacity of data transmission can be obtained. The black-white hole pair can be generated by the two-level system circuit when the speed of a particle in the circuit equals the speed of light. The black hole forms when the particle speed increases from slower than to equal to the speed of light, while the white hole is established when the particle comes back down from faster than light. They are bound as the entangled pair of signal and idler, ⟨Signal│Idler⟩, with the virtual one for the white hole, which has an angular displacement of half of π radian. A two-level system is made from an electronic circuit to create black and white holes bound by entangled bits that are cloning-free, i.e., immune to theft. The process starts by creating wave-particle behavior at a speed equal to that of light; the black hole is in the middle of the entangled pair, which forms the two-bit gate. The required information can be input into the system and wrapped by the black hole carrier. A time tunnel occurs when the wave-particle speed is faster than light, at which point the entangled pair collapses and the transmitted information sits safely in the time tunnel. The required time and space can be modulated via the input for the downlink operation. The downlink is established when the particle speed, given in frequency (energy) form, is brought down and enters the entangled gap, where this time the white hole is established. The information, wrapped by the white hole with the required destination, is retrieved by the clients at that destination. The black and white holes then disappear, and the information can be recovered and used.

Keywords: cloning free, time machine, teleportation, two-level system

Procedia PDF Downloads 71
852 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser

Authors: Edoardo Schinco

Abstract:

Studying the Marxist intellectual tradition, one can verify numerous cases of philosophical regression, in which the important achievements of detailed studies have been replaced by naïve ideas and earlier misunderstandings; one of the most important examples of this tendency concerns the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, due only to the ignorance or inability of subjects to understand the truth. From this point of view, the consequent and immediate practice against every form of ideology is rational dialogue, reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it usually becomes a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx's original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology (in the negative sense) is surely an error, a misleading knowledge, which aims to defend the current state of things and to conceal social, political, or moral contradictions; but that is precisely why the ideological error is not accidental: every ideology is mediately rooted in a particular material context, from which it takes its reason for being. Gramsci, however, avoids any mechanistic interpretation of Marx, and for this reason he underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts on the base itself and modifies it. There is therefore a considerable revaluation of ideology's role in the maintenance of the status quo, and the consequent thematization both of ideology as an objective force, active in history, and of ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributes to this crucial question; as a follower of Gramsci's thought, he develops the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISAs). In addition, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.

Keywords: Althusser, enlightenment, Gramsci, ideology

Procedia PDF Downloads 195
851 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction

Authors: Mohammad Ghahramani, Fahimeh Saei Manesh

Abstract:

Winning a soccer game is based on thorough and deep analysis of the ongoing match. On the other hand, giant gambling companies are in vital need of such analysis to reduce their losses to their customers. In this research work, we perform deep, real-time analysis of soccer matches around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called "Analyst Masters." First, we introduce various sources of information available for soccer analysis for teams around the world, which helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of soccer matches and their odds, both pre-match and in-play, are represented in image format versus time, including the halftime. Local Binary Patterns (LBP) are then employed to extract features from the image. Our analyses reveal remarkably interesting features and rules once a soccer match has reached sufficient stability. For example, our "8-minute rule" implies that if 'Team A' scores a goal and can maintain the result for at least 8 minutes, then the match will end in their favor in a stable match. We could also make accurate pre-match predictions of scoring less or more than 2.5 goals. We use Gradient Boosted Trees (GBT) to extract highly relevant features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks its properties, such as bettors' and punters' behavior and its statistical data, before issuing the prediction. The proposed method was trained using 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can obtain over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market: top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
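For readers unfamiliar with the feature extractor named here, the sketch below implements basic 8-neighbor Local Binary Patterns on an arbitrary 2D array. How the authors build their match-statistics "images" is not disclosed, so the input here is a random array standing in for one such image.

```python
# A minimal sketch of basic (8-neighbor) Local Binary Patterns, the feature
# extractor the abstract applies to match-statistics images over time. The
# image construction itself is the authors'; this only illustrates LBP.
import numpy as np

def lbp_8(image):
    """Return the LBP code map for the interior pixels of a 2D array."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    # Offsets of the 8 neighbors, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set this bit wherever the neighbor is at least the center value.
        code |= (neighbor >= center).astype(np.uint8) << bit
    return code

# Feature vector: the normalized histogram of LBP codes over the image.
stats_image = np.random.default_rng(7).random((32, 32))
hist, _ = np.histogram(lbp_8(stats_image), bins=256, range=(0, 256),
                       density=True)
print(hist.shape)  # (256,) feature vector per image
```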

Keywords: soccer, analytics, machine learning, database

Procedia PDF Downloads 234
850 The French Ekang Ethnographic Dictionary: The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western model [for languages with a tonic accent] are not suitable for tonal languages and do not account for them phonologically, which is why this [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language, and allows a non-speaker of the language to pronounce words as if they were a native speaker; it is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its corresponding word in ekaη, and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, machine translation, and artificial intelligence. When this theory is applied to any text of a folk song in a tonal language, one pieces together not only the exact melody, rhythm, and harmonies of that song, as if they were known in advance, but also the exact speech of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has thereby been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. The experimentation confirming the theory led to the design of a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect the data extracted from his mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
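Purely as an illustration of the core idea (writing a tonal word on a musical staff by mapping tone categories to pitches), here is a toy sketch. The ekaη tone inventory and any actual tone-to-pitch mapping are not given in the abstract, so everything below is hypothetical.

```python
# A purely illustrative sketch of mapping syllable tones to staff pitches.
# The tone categories, the pitch mapping, and the example word are all
# hypothetical; the abstract does not specify the ekaη system.
TONE_TO_PITCH = {
    "H": ("G4",),        # high tone -> single high pitch
    "M": ("E4",),        # mid tone
    "L": ("C4",),        # low tone
    "HL": ("G4", "C4"),  # falling contour -> two pitches
    "LH": ("C4", "G4"),  # rising contour
}

def syllables_to_pitches(syllables):
    """syllables: list of (text, tone) pairs -> flat list of pitch names."""
    out = []
    for _text, tone in syllables:
        out.extend(TONE_TO_PITCH[tone])
    return out

# A hypothetical word with a high-low-high tone melody.
print(syllables_to_pitches([("é", "H"), ("kà", "L"), ("ŋ", "H")]))
```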

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 66
849 Investigation of a New Approach "AGM" to Solve Complicated Nonlinear Partial Differential Equations in All Engineering Fields and Basic Science

Authors: Mohammadreza Akbari, Pooya Soleimani Besheli, Reza Khalili, Davood Domiri Danji

Abstract:

In this conference contribution, our aims are accuracy, capability, and power in solving complicated nonlinear partial differential equations. Our purpose is to enhance the ability to solve such nonlinear differential equations in basic science and engineering fields, and similar problems, with a simple and innovative approach. As we know, most engineering systems behave nonlinearly in practice (especially in basic science and engineering fields), and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; fluid and gas waves, for example, cannot be solved with numerical methods when no boundary conditions are available. Accordingly, in this symposium we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs, PDEs) with high accuracy and a simple solution; this will be demonstrated by comparing the achieved solutions with a numerical method (fourth-order Runge-Kutta). Ultimately, we argue that AGM could bring a major advance for researchers, professors, and students worldwide, because with the AGM coding system one can analytically solve complicated linear and nonlinear partial differential equations, so that there is no difficulty in solving nonlinear differential equations. The advantages and abilities of this method (AGM) are as follows: (a) Nonlinear differential equations (ODEs, PDEs) are directly solvable by this method. (b) With this method, equations can most of the time be solved without any nondimensionalization procedure, for any number of boundary or initial conditions. (c) AGM always converges for the given boundary or initial conditions. (d) Exponential, trigonometric, and logarithmic terms in the nonlinear differential equation need no Taylor expansion with AGM, which yields high solution precision. (e) AGM is very flexible in its coding system and can easily solve a variety of nonlinear differential equations with high, acceptable accuracy. (f) One of the important advantages of this method is the highly accurate analytical solution of problems such as partial differential equations for vibration in solids and waves in water and gas, with the minimum initial and boundary conditions needed to solve the problem. (g) It is very important to present a general and simple approach for solving most problems of differential equations with high nonlinearity in the engineering sciences, especially in civil engineering, and to compare the output with a numerical method (fourth-order Runge-Kutta) and with exact solutions.
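The abstract does not specify AGM in enough detail to reproduce it here, but it names fourth-order Runge-Kutta as the numerical benchmark. The sketch below shows that standard RK4 baseline on a simple nonlinear ODE with a known exact solution; the test equation y' = -y² is an assumed example, not one from the paper.

```python
# The standard RK4 benchmark named in the abstract (not AGM itself), shown on
# an assumed test problem: y' = -y^2, y(0) = 1, exact solution y = 1/(1+t).
def rk4(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n fixed RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, float(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

approx = rk4(lambda t, y: -y ** 2, y0=1.0, t0=0.0, t1=2.0, n=100)
print(approx, 1 / (1 + 2.0))  # numerical result vs. exact value 1/3
```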

Keywords: new approach, AGM, sets of coupled nonlinear differential equation, exact solutions, numerical

Procedia PDF Downloads 456
848 Winkler Springs for Embedded Beams Subjected to S-Waves

Authors: Franco Primo Soffietti, Diego Fernando Turello, Federico Pinto

Abstract:

Shear waves that propagate through the ground impose deformations that must be taken into account in the design and assessment of buried longitudinal structures such as tunnels, pipelines, and piles. Conventional engineering approaches for seismic evaluation often rely on an Euler-Bernoulli beam model supported by a Winkler foundation. This approach, however, falls short of capturing the distortions induced when the structure is subjected to shear waves. To overcome these limitations, the present work proposes an analytical solution considering a Timoshenko beam and including transverse and rotational springs. The ground springs are derived as closed-form analytical solutions of the equations of elasticity, including the seismic wavelength; these proposed springs extend the applicability of previous plane-strain models. By considering variations in displacements along the longitudinal direction, the presented approach ensures that the springs do not approach zero at low frequencies. This characteristic makes them suitable for assessing pseudo-static cases, which typically govern structural forces in kinematic interaction analyses. The results obtained, validated against the existing literature and a 3D finite element model, reveal several key insights: i) the cutoff frequency significantly influences transverse and rotational springs; ii) neglecting displacement variations along the structure axis (i.e., assuming plane-strain deformation) results in unrealistically low transverse springs, particularly for wavelengths shorter than the structure length; iii) disregarding lateral displacement components in rotational springs and neglecting variations along the structure axis leads to inaccurately low spring values, misrepresenting interaction phenomena; iv) transverse springs exhibit a notable drop at the resonance frequency, followed by increasing damping as frequency rises; v) rotational springs show minor frequency-dependent variations, with radiation damping occurring beyond resonance frequencies, starting from negative values. This comprehensive analysis sheds light on the complex behavior of embedded longitudinal structures subjected to shear waves and provides valuable insights for their seismic assessment.
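For orientation, the conventional kinematic-interaction model that the abstract says this work improves upon can be written as below. This is the textbook Euler-Bernoulli-on-Winkler formulation, not the paper's Timoshenko solution; the symbols are standard assumptions rather than the authors' notation.

```latex
% Conventional kinematic-interaction model referenced in the abstract:
% an Euler-Bernoulli beam on a Winkler foundation, driven by the
% free-field ground displacement u_g(x) imposed by the shear wave.
\begin{equation}
  E I \, \frac{\mathrm{d}^4 u(x)}{\mathrm{d}x^4}
  + k_t \left[ u(x) - u_g(x) \right] = 0
\end{equation}
% EI  : bending stiffness of the embedded beam
% k_t : transverse Winkler spring -- the kind of quantity the paper derives
%       in closed form (together with a rotational spring) for a Timoshenko
%       beam, including the effect of the seismic wavelength
```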

Keywords: shear waves, Timoshenko beams, Winkler springs, soil-structure interaction

Procedia PDF Downloads 59
847 A Markov Model for the Elderly Disability Transition and Related Factors in China

Authors: Huimin Liu, Li Xiang, Yue Liu, Jing Wang

Abstract:

Background: As a typical case among the developing countries that are entering an era of population aging, China faces a growing number of older people who cannot maintain a normal life due to functional disability. While the government makes efforts to build a long-term care system and to carry out related policies around this core concept, there is still a lack of strong evidence for evaluating the profile of disability states in the elderly population and their transition rates. It has been shown that disability is a dynamic condition rather than an irreversible one, which means it is possible to intervene in a timely manner for those at risk of severe disability. Objective: The aim of this study was to depict the disability transition status of older people in China, and then to identify individual characteristics that change the state of disability, providing a theoretical basis for disability prevention and early intervention among elderly people. Methods: Data for this study came from the 2011 baseline survey and the 2013 follow-up survey of the China Health and Retirement Longitudinal Study (CHARLS). Normal ADL function, disability in 1-2 ADLs, disability in 3 or more ADLs, and death were defined as states 1 to 4. A multi-state Markov model was applied, and a four-state homogeneous model with discrete states and discrete times was constructed from the two-wave follow-up data to explore factors for the various progressive stages. We modeled the effect of explanatory variables on the transition rates using a proportional intensities model with covariates such as gender. Result: In the total sample, state 2 accounts for nearly 17.0%, while the proportion in state 3 is lower, at 8.5%. Moreover, the difference in ADL disability statistics between the two years is not large. About half of those in state 2 in 2011 had improved to normal function by 2013, even as they grew older. However, the proportion of state 3 transitioning to death increased markedly, close to the proportion returning to state 2 or to normal function. From the estimated intensities, we see that older people are eleven times as likely to develop disability in 1-2 ADLs as to die. After disability onset (state 2), progression to state 3 is 30% more likely than recovery. Once in state 3, a mean of 0.76 years is spent before death or recovery. In this model, a typical person in state 2 has a probability of 0.5 of being disability-free one year from now, while a person with moderate disability or worse has a probability of 0.14 of being dead. Conclusion: With long-term care costs in mind, preventive programs to delay the disability progression of the elderly could be adopted based on the current disability state and the main factors of each stage. In general terms, programs focusing on elderly individuals who are moderately or more severely disabled should come first.
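To make the multi-state machinery concrete, the sketch below builds a four-state transition intensity matrix and converts it to transition probabilities. Only the state structure and three cited quantities are taken from the abstract (the 11:1 onset-to-death ratio from state 1, the 30% excess of progression over recovery from state 2, and the roughly 0.76-year mean sojourn in state 3); the remaining intensity values are illustrative, not the CHARLS estimates.

```python
# A minimal sketch of the four-state homogeneous Markov model described in
# the abstract. Values not cited in the text are illustrative placeholders.
import numpy as np
from scipy.linalg import expm

# Q[i, j]: instantaneous transition rate from state i+1 to state j+1.
# Rows sum to zero; state 4 (death) is absorbing.
Q = np.array([
    [-0.24,  0.22,  0.00,  0.02],  # state 1: onset (0.22) is 11x death (0.02)
    [ 0.30, -0.74,  0.39,  0.05],  # state 2: progression 30% above recovery
    [ 0.00,  0.60, -1.32,  0.72],  # state 3: mean sojourn = 1/1.32 ~ 0.76 yr
    [ 0.00,  0.00,  0.00,  0.00],  # state 4: death, absorbing
])

# For a time-homogeneous model, transition probabilities over t years are
# P(t) = expm(Q * t); row 2 of P(1) gives one-year outcomes from state 2.
P1 = expm(Q * 1.0)
print(P1.round(3))
```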

Keywords: Markov model, elderly people, disability, transition intensity

Procedia PDF Downloads 287