Search results for: Michelle D. Hand
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3821

521 Status of Sensory Profile Score among Children with Autism in Selected Centers of Dhaka City

Authors: Nupur A. D., Miah M. S., Moniruzzaman S. K.

Abstract:

Autism is a neurobiological disorder that affects a person's physical, social, and language skills. A child with autism has difficulty processing, integrating, and responding to sensory stimuli. Current estimates indicate that 45% to 96% of children with Autism Spectrum Disorder demonstrate sensory difficulties. As autism is a pressing issue worldwide, services for it have become a high priority in Bangladesh. Sensory deficits hamper not only a child's normal development but also the learning process and functional independence. The purpose of this study was to determine the prevalence of sensory dysfunction among children with autism and to identify common patterns of sensory dysfunction. A cross-sectional study design was chosen. The study enrolled eighty children with autism and their parents, selected by systematic sampling. Data were collected with the Short Sensory Profile (SSP), a 38-item questionnaire; qualified graduate Occupational Therapists interviewed parents and directly observed the children's responses to sensory-related activities at four selected autism centers in Dhaka, Bangladesh. Item analyses were conducted to identify the items yielding the highest reported sensory processing dysfunction, using the SSP and the Statistical Package for the Social Sciences (SPSS) version 21.0. The study revealed that almost 78.25% of the children had significant sensory processing dysfunction based on their responses to the relevant activities. Under-responsiveness/sensation seeking and auditory filtering were the least common problems among them. 
On the other hand, most of the children (95%) showed definite-to-probable differences in sensory processing, including under-responsiveness/sensation seeking, auditory filtering, and tactile sensitivity. In addition, 64 of the children showed a definite difference in sensory processing, meaning that they suffered from sensory difficulties that had a great impact on their Activities of Daily Living (ADLs) as well as their social interaction with others. Almost 95% of the children with autism require intervention to overcome or normalize the problem. The results give insight into the types of sensory processing dysfunction to consider during diagnosis and when determining treatment. Early identification of sensory problems is therefore very important and will help in providing appropriate sensory input to minimize maladaptive behavior and bring the child closer to the normal range of adaptive behavior.

Keywords: autism, sensory processing difficulties, sensory profile, occupational therapy

Procedia PDF Downloads 65
520 Euthanasia Reconsidered: Voting and Multicriteria Decision-Making in Medical Ethics

Authors: J. Hakula

Abstract:

Discussion on euthanasia is a continuous process. Euthanasia is defined as 'deliberately ending a patient's life by administering life-ending drugs at the patient's explicit request'. With few exceptions, human societies worldwide have not been able to agree on some fundamental issues concerning ultimate decisions of life and death. Outranking methods from voting-oriented social choice theory and multicriteria decision-making (MCDM) can be applied to issues in medical ethics. There is a wide range of voting methods, and using different methods the same group of voters can end up with different outcomes. In the MCDM context, decision alternatives take the place of candidates, and criteria take the place of voters. The view chosen here is that of a single decision-maker. Initially, three alternatives and three criteria are chosen. Pairwise comparisons and basic positional voting rules - plurality, anti-plurality, and the Borda count - are applied. In the MCDM solution, criteria are assigned weights: the more important the decision-maker ranks a criterion, the more 'votes' it receives. A hypothetical example of evaluating properties of euthanasia consists of three alternatives A, B, and C, which are ranked according to three criteria - the patient's willingness to cooperate, general action orientation (active/passive), and cost-effectiveness - with weights 7, 5, and 4, respectively. Using the plurality rule and the weights given to the criteria, A is the best alternative, with B and C thereafter. In pairwise comparisons, both B and C defeat A with weight scores of 9 to 7. On the other hand, B is defeated by C with weights 11 to 5. Thus, C (the so-called Condorcet winner) defeats both A and B. The best alternative by the plurality principle is not necessarily the best in the pairwise sense, and the conflict remains unsolved with or without additional weights. Positional rules are sensitive to variations in the set of alternatives. 
In the example above, the plurality rule gives the ranking ABC. If we leave out C, the plurality ranking between A and B becomes BA. Withdrawing B or A, the ranking is CA or CB, respectively. In pairwise comparisons, an analogous problem emerges when the number of criteria is varied. Cyclic preferences may lead to a total tie, so that no (rational) choice between the alternatives can be made. In conclusion, the choice of the best alternative in re-evaluating euthanasia, with the criteria left unchanged, depends entirely on the evaluation method used. The right strategies matter, too. Future studies might address the problem of abstention - a situation where voters do not vote and yet their favored candidate may still win, or, vice versa, where actively casting a ballot for a first-ranked choice leads to a total loss. In MCDM terms, a decision might occur in which some central criteria are not actively involved in the best choice made.
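The example's tallies can be reproduced in a short sketch. Note an assumption: the abstract only states the pairwise weight scores, so the full per-criterion preference orders below are inferred from them rather than given explicitly.

```python
# Weighted positional and pairwise voting for the three-alternative example.
# Each criterion acts as a weighted "voter": (weight, preference order).
criteria = [
    (7, ["A", "C", "B"]),  # patient's willingness to cooperate
    (5, ["B", "C", "A"]),  # general action orientation (active/passive)
    (4, ["C", "B", "A"]),  # cost-effectiveness
]
alternatives = ["A", "B", "C"]
total_weight = sum(w for w, _ in criteria)

# Plurality: each criterion gives its full weight to its top-ranked alternative.
plurality = {alt: 0 for alt in alternatives}
for weight, order in criteria:
    plurality[order[0]] += weight

# Borda count: positions score (m-1, ..., 1, 0), scaled by the weight.
borda = {alt: 0 for alt in alternatives}
for weight, order in criteria:
    for pos, alt in enumerate(order):
        borda[alt] += weight * (len(order) - 1 - pos)

def beats(x, y):
    """True if the weight of criteria preferring x to y exceeds the rest."""
    w = sum(weight for weight, order in criteria
            if order.index(x) < order.index(y))
    return w > total_weight - w

# Condorcet winner: beats every other alternative pairwise, if one exists.
condorcet_winner = next(
    (x for x in alternatives
     if all(beats(x, y) for y in alternatives if y != x)),
    None)

print(plurality)         # A leads with weight 7
print(condorcet_winner)  # C, as in the abstract
```

With these weights the plurality winner is A while the Condorcet winner is C, reproducing the conflict described above; the Borda count here happens to favor C as well, illustrating a third possible outcome.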

Keywords: medical ethics, euthanasia, voting methods, multicriteria decision-making

Procedia PDF Downloads 157
519 Study of Oxidative Stability, Cold Flow Properties and Iodine Value of Macauba Biodiesel Blends

Authors: Acacia A. Salomão, Willian L. Gomes da Silva, Gustavo G. Shimamoto, Matthieu Tubino

Abstract:

Biodiesel's physical and chemical properties depend on the composition of the raw material used in its synthesis. Saturated fatty acid esters confer high oxidative stability, while unsaturated fatty acid esters improve cold flow properties. In this study, an alternative vegetal source - macauba kernel oil - was used for biodiesel synthesis instead of conventional sources. Macauba can be collected from native palm trees found in several regions of Brazil. Its oil is a promising source compared to the oils commonly obtained from food crops, such as soybean, corn, or canola oil, due to its specific characteristics. However, using biodiesel made from macauba oil alone is not recommended because of the difficulty of producing macauba in large quantities. For this reason, this project proposes blends of macauba biodiesel with conventional biodiesels. The blends were prepared by mixing macauba biodiesel with biodiesels obtained from soybean oil, corn oil, and residual frying oil in the proportions 20:80, 50:50, and 80:20 (w/w). Three parameters were evaluated using standard methods in order to check the quality of the produced biofuel and its blends: oxidative stability, cold filter plugging point (CFPP), and iodine value. The induction period (IP) expresses the oxidative stability of the biodiesel; the CFPP is the lowest temperature at which the biodiesel flows through a filter without plugging the system; and the iodine value is a measure of the number of double bonds in a sample. The biodiesels obtained from soybean oil, residual frying oil, and corn oil presented iodine values higher than 110 g/100 g, low oxidative stability, and low CFPP. Their IP values were lower than 8 h, which is below the recommended standard value. On the other hand, their CFPP values were within the allowed limit (5 ºC is the maximum). 
Regarding the macauba biodiesel, a low iodine value was observed (31.6 g/100 g), indicating a high content of saturated fatty acid esters. This should imply both high oxidative stability (which was indeed found, with IP = 64 h) and a high CFPP, but curiously the latter was not observed (-3 ºC). This behavior can be explained by the size of the carbon chains: 65% of this biodiesel is composed of short-chain saturated fatty acid esters (fewer than 14 carbons). The high oxidative stability and the low CFPP of macauba biodiesel are what make this biofuel a promising source. The soybean, corn, and residual frying oil biodiesels also have low CFPP, but low oxidative stability. Therefore, the blends proposed in this work, compared to the common biodiesels, maintain the flow properties while presenting enhanced oxidative stability.
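As a rough check on how blending shifts the iodine value, one can use a linear mass-weighted mixing rule. This is an assumption: it holds approximately for iodine value, which is an additive measure of double bonds per unit mass, but not for the induction period, which blends nonlinearly. The soybean-type value of 120 g/100 g below is illustrative only, chosen to be consistent with the ">110 g/100 g" reported above.

```python
def blend_iodine_value(iv_a, iv_b, frac_a):
    """First-order estimate of a binary blend's iodine value (g I2/100 g).

    Assumes iodine value is additive by mass; frac_a is the mass
    fraction of component A in the blend.
    """
    return frac_a * iv_a + (1 - frac_a) * iv_b

# Macauba kernel biodiesel (31.6 g/100 g, reported) blended with an
# illustrative soybean-type biodiesel (120 g/100 g, assumed value):
for frac in (0.20, 0.50, 0.80):
    print(f"{int(frac * 100)}:{int((1 - frac) * 100)} blend -> "
          f"{blend_iodine_value(31.6, 120.0, frac):.1f} g/100 g")
```

The 50:50 case, for instance, lands near 75.8 g/100 g, below the 110 g/100 g level reported for the conventional biodiesels alone.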

Keywords: biodiesel, blends, macauba kernel oil, oxidative stability

Procedia PDF Downloads 539
518 Static Charge Control Plan for High-Density Electronics Centers

Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda

Abstract:

Ensuring a safe environment for sensitive electronic boards in places with severe size limitations poses two major difficulties: controlling charge accumulation in floating floors and preventing excess charge generation due to air cooling flows. In this paper, we discuss these mechanisms and possible solutions. An experiment was made in the control room of a Cherenkov telescope, where six racks of 2x1x1 m size with independent cooling units are located. The room is 10x4x2.5 m, and the electronics include high-speed digitizers, trigger circuits, etc. The floor used in this room was antistatic, but it was a raised floor mounted in a floating design to facilitate cable handling and maintenance. The tests were made by measuring the contact voltage acquired by a person walking along the room in footwear of different qualities. In addition, we measured the voltage accumulated by a person in other situations, such as running or sitting down on and standing up from an office chair. The voltages were recorded in real time with an electrostatic voltmeter and dedicated control software. Peak voltages as high as 5 kV were measured at ambient humidity above 30%, which is within the range of class 3A according to the HBM standard. To complete the results, we repeated the experiment in different spaces with alternative floor types, such as synthetic and earthenware floors, obtaining peak voltages much lower than those measured with the floating synthetic floor. The grounding quality achieved with such floors can hardly beat that typically encountered in standard floors glued directly onto a solid substrate. On the other hand, the air ventilation used to prevent overheating of the boards probably contributed significantly to the charge accumulated in the room. 
When assessing the quality of static charge control, it is necessary to guarantee that the tests are made under repeatable conditions. One of the major difficulties encountered during these assessments is that electrostatic voltmeters may give different readings depending on humidity conditions and the quality of the ground resistance. In addition, the use of certified antistatic footwear might mask deficiencies in the charge control. In this paper, we show how we defined protocols to guarantee that electrostatic readings are reliable. We believe this can be helpful not only for qualifying static charge control in a laboratory but also for assessing any procedure aimed at minimizing the risk of electrostatic discharge events.
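The class 3A attribution for the 5 kV peaks can be checked against the HBM withstand-voltage classes. The thresholds below are the commonly cited ones from ANSI/ESDA/JEDEC JS-001 and are stated here as an assumption; consult the current revision of the standard for the authoritative table.

```python
# Commonly cited HBM classification thresholds in volts (assumed from
# ANSI/ESDA/JEDEC JS-001; verify against the current standard).
HBM_CLASSES = [
    (250, "0"),     # below 250 V
    (500, "1A"),
    (1000, "1B"),
    (2000, "1C"),
    (4000, "2"),
    (8000, "3A"),
]

def hbm_class(peak_volts):
    """Return the HBM class label for a measured peak contact voltage."""
    for upper_bound, label in HBM_CLASSES:
        if peak_volts < upper_bound:
            return label
    return "3B"  # 8000 V and above

print(hbm_class(5000))  # the 5 kV peaks measured in the control room
```

A 5 kV peak falls in the 4-8 kV band, i.e., class 3A, consistent with the measurement reported above.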

Keywords: electrostatics, ESD protocols, HBM, static charge control

Procedia PDF Downloads 129
517 Strengthening by Assessment: A Case Study of Rail Bridges

Authors: Evangelos G. Ilias, Panagiotis G. Ilias, Vasileios T. Popotas

Abstract:

The United Kingdom has one of the oldest railway networks in the world, dating back to 1825, when the world's first passenger railway was opened. The network has some 40,000 bridges of various construction types, using a wide range of materials including masonry, steel, cast iron, wrought iron, concrete, and timber. It is commonly accepted that the successful operation of the network is vital to the economy of the United Kingdom; consequently, cost-effective maintenance of the existing infrastructure is a high priority to maintain the operability of the network, prevent deterioration, and extend the life of the assets. Every bridge on the railway network must be assessed every eighteen years, and a structured approach to assessments is adopted, with three progressively more detailed assessment types: Level 0 (standardized spreadsheet assessment tools), Level 1 (analytical hand calculations), and Level 2 (generally finite element analyses). There is a degree of conservatism in the first two types of assessment, dictated to some extent by the relevant standards, which can lead to some structures not achieving the required load rating. In these situations, a Level 2 Assessment is often carried out using finite element analysis to uncover 'latent strength' and improve the load rating. If successful, the more sophisticated analysis can avoid costly strengthening or replacement works and disruption to the operational railway. This paper presents the 'strengthening by assessment' achieved by Level 2 analyses. The use of more accurate analysis assumptions and the implementation of non-linear modelling (material, geometric, and support non-linearity) to better understand buckling modes and the structural behaviour of historic construction details not specifically covered by assessment codes are outlined. 
Metallic bridges, which are susceptible to loss of section through corrosion, have the largest scope for improvement under the Level 2 Assessment methodology. Three case studies are presented, demonstrating the effectiveness of the sophisticated Level 2 Assessment methodology using finite element analysis against the conservative approaches employed for Level 0 and Level 1 Assessments. One rail overbridge and two rail underbridges that did not achieve the required load rating in a Level 1 Assessment, owing to the inadequate restraint provided by U-frame action, are examined, and the increase in assessed capacity given by the Level 2 Assessment is outlined.

Keywords: assessment, bridges, buckling, finite element analysis, non-linear modelling, strengthening

Procedia PDF Downloads 309
516 Pyridine-N-oxide Based AIE-active Triazoles: Synthesis, Morphology and Photophysical Properties

Authors: Luminita Marin, Dalila Belei, Carmen Dumea

Abstract:

Aggregation-induced emission (AIE) is an intriguing optical phenomenon recently evidenced by Tang and co-workers, in which aggregation works constructively to improve light emission. The challenging AIE phenomenon is quite the opposite of the notorious aggregation-caused quenching (ACQ) of light emission in the condensed phase, and it meets the requirements of photonic and optoelectronic devices, which need solid-state emissive substrates. This paper reports a series of ten new AIE-active low-molecular-weight compounds based on triazole and pyridine-N-oxide heterocyclic units bonded by short flexible chains, obtained by a 'click' chemistry reaction. The compounds present extremely weak luminescence in solution but strong light emission in the solid state. To distinguish the influence of the degree of crystallinity on the emission efficiency, the photophysical properties were explored by UV-vis and photoluminescence spectroscopy in solution, water suspension, and amorphous and crystalline films. In parallel, the morphology of the aforementioned states was monitored by dynamic light scattering, scanning electron microscopy, atomic force microscopy, and polarized light microscopy. To further understand the relationship between structural design and photophysical properties, single-crystal X-ray diffraction was also performed on some of the compounds under study. The UV-vis absorption spectra of the triazole water suspensions indicated behaviour typical of nanoparticle formation, while the photoluminescence spectra revealed an emission intensity enhancement of up to 921-fold for the crystalline films compared to the solutions, clearly indicating AIE behaviour. The compounds tend to aggregate, forming nano- and micro-crystals shaped like rosettes and fibres. 
The integrity of the crystals is maintained by strong lateral intermolecular forces, while the absence of face-to-face forces explains the enhanced luminescence in the crystalline state, in which intramolecular rotations are restricted. The flexible triazoles studied here draw attention to a new structural design in which small, biologically friendly luminophore units are linked together by short flexible chains. This design extends the variety of AIE luminogens to flexible molecules, guiding further efforts in the development of new AIE structures for appropriate applications, with biological applications especially envisaged.

Keywords: aggregation induced emission, pyridine-N-oxide, triazole

Procedia PDF Downloads 467
515 The Debate over Dutch Universities: An Analysis of Stakeholder Perspectives

Authors: B. Bernabela, P. Bles, A. Bloecker, D. DeRock, M. van Es, M. Gerritse, T. de Jongh, W. Lansing, M. Martinot, J. van de Wetering

Abstract:

A heated debate has been taking place over research and teaching at Dutch universities in the last few years. The Ministry of Science and Education has published reports on its strategy to improve university curricula and position the Netherlands as a globally competitive knowledge economy. These reports have provoked an uproar of responses from think tanks, concerned academics, and the media. At the center of the debate is disagreement over who should determine Dutch university curricula and what these curricula should look like. Many stakeholders in the higher education system have voiced their opinions, while others have not been heard. The result is that the diversity of visions is ignored or taken for granted in the official reports. Recognizing this gap in stakeholder analysis, the aim of this paper is to bring attention to the wide range of perspectives on who should be responsible for designing higher education curricula. Based on a previous analysis by the Rathenau Institute, we distinguish five groups of stakeholders: government, the business sector, university faculty and administration, students, and the societal sector. We conducted semi-structured, in-depth interviews with representatives from each stakeholder group and distributed quantitative questionnaires to people in the societal sector (i.e., people not directly affiliated with universities or their graduates). Preliminary data suggest that the stakeholders have different target points concerning the university curricula. Representatives from the governmental sector tend to place special emphasis on the link between research and education, while representatives from the business sector focus rather on greater opportunities for students to obtain practical experience in the job market. Responses from students reflect a belief that they should be able to influence the curriculum in order to compete with other students on the international job market. 
University faculty, on the other hand, express concern that focusing on the labor market puts undue pressure on students and compromises the quality of education. Interestingly, the opinions of members of 'society' seem to be relatively unchanged by political and economic shifts. Following a comprehensive analysis of the data, we believe that our results will make a significant contribution to the debate on university education in the Netherlands. These results should be regarded as a foundation for further research concerning the direction of Dutch higher education, for only if we take into account the different opinions and views of the various stakeholders can we decide which steps to take. Moreover, the Dutch experience offers lessons to other countries as well: as the internationalization of higher education occurs faster than ever before, universities throughout Europe and globally are experiencing many of the same pressures.

Keywords: Dutch University curriculum, higher education, participants’ opinions, stakeholder perspectives

Procedia PDF Downloads 343
514 Generative Behaviors and Psychological Well-Being in Mexican Elders

Authors: Ana L. Gonzalez-Celis, Edgardo Ruiz-Carrillo, Karina Reyes-Jarquin, Margarita Chavez-Becerra

Abstract:

In recent decades, aging has been viewed from a more positive perspective: it is not only about losses and damage but also a stage at which one can enjoy life and live with well-being and quality of life. The challenge in feeling better is to find the resources that seniors have. For this reason, research on psychological well-being has taken an interest in affect and life satisfaction (hedonic well-being), while a more recent tradition focuses on the development of capabilities and personal growth, considering both as the main indicators of quality of life. A resource that can be drawn on in later age is generativity, which refers to the ability of older people to develop and grow through activities that contribute to improving the context in which they live and participate. In this sense, generative interest is understood as a favourable attitude toward contributing to the common benefit while strengthening and enriching social institutions, ensuring continuity between generations and social development. Generative behavior, as distinguished from generative interest, is the expression of that attitude in activities that make a social contribution and benefit generations to come. Hence, the purpose of this research was to test whether there is an association between the type of generative behavior and psychological well-being with its dimensions. To this end, 188 Mexican adults from 60 to 94 years old (M = 69.78), 67% women and 33% men, completed two instruments: Ryff's Well-Being Scales, measuring psychological well-being with 39 items across two dimensions (hedonic and eudaimonic well-being), and the Loyola Generative Behaviors Scale, grouped into six categories: knowledge transmitted to the next generation, things to be remembered, creativity, being productive, contribution to the community, and responsibility for other people. 
In addition, a socio-demographic data sheet and self-reported health status were collected. The results indicated that psychological well-being and its dimensions were significantly associated with the presence of generative behavior: the level of well-being was higher when certain generative behaviors were more frequent. The behavior associated with the greatest psychological well-being (M = 81.04, SD = 8.18) was 'things to be remembered'; the behavior associated with the greatest hedonic well-being (M = 73.39, SD = 12.19) was 'responsibility for other people'; and the behavior associated with the greatest eudaimonic well-being (M = 84.61, SD = 6.63) was 'things to be remembered'. The most important findings highlight the role of generative behaviors in later adulthood, providing empirical evidence that generativity in the last stage of life is associated with well-being. However, given the differences in well-being across types of generative behavior, we propose that generativity should not be treated as an isolated construct; it needs other contextualized, related constructs that can operate simultaneously at different levels, taking into account the relationship between the environment and the individual and encompassing both the social and the psychological dimension.

Keywords: eudaimonic well-being, generativity, hedonic well-being, Mexican elders, psychological well-being

Procedia PDF Downloads 273
513 Chronic Wrist Pain among Handstand Practitioners: A Questionnaire Study

Authors: Martonovich Noa, Maman David, Alfandari Liad, Behrbalk Eyal

Abstract:

Introduction: The human body is designed for upright standing and walking, with the lower extremities and axial skeleton bearing the weight. Constant weight-bearing on joints not meant for this role can lead to various pathologies, as seen in wheelchair users. Handstand practitioners use their wrists as weight-bearing joints during activities, but little is known about wrist injuries in this population. This study aims to investigate the epidemiology of wrist pain among handstand practitioners, as no such data currently exist. Methods: The study is a cross-sectional online survey conducted among athletes who regularly practice handstands. Participants were asked to complete a three-part questionnaire on their workout regimen, training habits, and history of wrist pain. The inclusion criteria were athletes over 18 years old who had practiced handstands more than twice a month for at least four months. All data were collected using Google Forms, organized and anonymized in Microsoft Excel, and analyzed using IBM SPSS 26.0. Descriptive statistics were calculated, and potential risk factors were tested using asymptotic t-tests and Fisher's exact tests. Differences were considered significant when p < 0.05. Results: The study surveyed 402 athletes who regularly practice handstands to investigate the prevalence of chronic wrist pain and potential risk factors. Participants had a mean age of 31.3 years; most were male, with an average of 5 years of training experience. Of the participants, 56% reported chronic wrist pain, and 14.4% reported a history of distal radial fracture. Yoga was the most practiced discipline, followed by capoeira. No significant differences in demographic data were found between participants with and without chronic wrist pain, and no significant associations were found between the prevalence of chronic wrist pain and warm-up routines or protective aids. 
Conclusion: The lower half of the body is built to handle weight-bearing and impact, and transferring the load to the upper extremities can lead to various pathologies. Athletes who perform handstands are particularly prone to chronic wrist pain, which affects over half of them. Warm-up sessions and protective instruments such as wrist braces do not appear to prevent chronic wrist pain, and there are no significant differences in age or training volume between athletes with and without the condition. Further research is needed to understand the causes of chronic wrist pain in athletes, given the growing popularity of sports and activities that can cause this type of injury.
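The association tests described above can be illustrated with a self-contained two-sided Fisher's exact test on a 2x2 table (e.g., warm-up routine vs. chronic wrist pain). The implementation below is a standard-library sketch of the usual hypergeometric formulation; the counts in the usage line are hypothetical, not the study's data.

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 contingency table.

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one.
    """
    (a, b), (c, d) = table
    r1, r2 = a + b, c + d      # row totals
    c1 = a + c                 # first column total
    n = r1 + r2                # grand total

    def pmf(x):
        # probability of x in the top-left cell, with margins fixed
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = pmf(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # small tolerance guards against floating-point ties
    return sum(pmf(x) for x in range(lo, hi + 1)
               if pmf(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: rows = warm-up yes/no, cols = pain yes/no
p = fisher_exact_two_sided([[8, 2], [1, 5]])
print(f"p = {p:.4f}")  # below the 0.05 threshold used in the study
```

Differences would be reported as significant when the resulting p falls below the study's 0.05 threshold.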

Keywords: handstand, handbalance, wrist pain, hand and wrist surgery, yoga, calisthenics, circus, capoeira, movement

Procedia PDF Downloads 91
512 Impact of Weather Conditions on Non-Food Retailers and Implications for Marketing Activities

Authors: Noriyuki Suyama

Abstract:

This paper discusses purchasing behavior in retail stores, with a particular focus on the impact of weather changes on customers' purchasing behavior. Weather conditions are one of the factors that greatly affect the management and operation of retail stores. However, there is very little academic research on the relationship between weather conditions and marketing, although the topic has practical importance and a body of experience-based knowledge. For example, customers are more hesitant to go out when it rains than when it is sunny, and they may postpone purchases or buy only the minimum necessary items even if they do go out. It is not difficult to imagine that weather has a significant impact on consumer behavior. To the best of the author's knowledge, only a few studies have delved into the purchasing behavior of individual customers. According to Hirata (2018), the economic impact of weather in the United States is estimated at 3.4% of GDP, or '$485 billion ± $240 billion per year'. However, weather data are not yet fully utilized. Representative industries include transportation (e.g., airlines, shipping, roads, railroads), leisure (e.g., leisure facilities, event organizers), energy and infrastructure (e.g., construction, factories, electricity and gas), agriculture (e.g., agricultural organizations, producers), and retail (e.g., retail, food service, convenience stores). This paper focuses on the retail industry and advances research on weather. The first reason is that, as far as the author has investigated, within retail only grocery retailers use temperature, rainfall, wind, weather, and humidity as parameters for their products, and there are very few examples of academic use in other retail sectors. 
Second, according to NBL's 'Toward Data Utilization Starting from Consumer Contact Points in the Retail Industry', labor productivity in the retail industry is very low compared to other industries. According to Hirata (2018), improving labor productivity in the retail industry is recognized as a major challenge. On the other hand, according to the 'Survey and Research on Measurement Methods for Information Distribution and Accumulation (2013)' by the Ministry of Internal Affairs and Communications, the amount of data accumulated in the retail industry is extremely large, so new applications can be expected from analyzing these data together with weather data. Third, a wealth of weather-related information is now available. There are, for example, companies such as WeatherNews, Inc. that make weather information their business and not only disseminate weather information but also provide information that supports businesses in various industries. Despite the wide range of influences that weather has on business, its impact has not been a subject of research in the retail industry, where business models need to be imagined, especially from a micro perspective. In this paper, the author discusses the important aspects of the impact of weather on marketing strategies in the non-food retail industry.

Keywords: consumer behavior, weather marketing, marketing science, big data, retail marketing

Procedia PDF Downloads 81
511 On-Chip Ku-Band Bandpass Filter with Compact Size and Wide Stopband

Authors: Jyh Sheen, Yang-Hung Cheng

Abstract:

This paper presents a design of a microstrip bandpass filter with a compact size and wide stopband by using 0.15-μm GaAs pHEMT process. The wide stop band is achieved by suppressing the first and second harmonic resonance frequencies. The slow-wave coupling stepped impedance resonator with cross coupled structure is adopted to design the bandpass filter. A two-resonator filter was fabricated with 13.5GHz center frequency and 11% bandwidth was achieved. The devices are simulated using the ADS design software. This device has shown a compact size and very low insertion loss of 2.6 dB. Microstrip planar bandpass filters have been widely adopted in various communication applications due to the attractive features of compact size and ease of fabricating. Various planar resonator structures have been suggested. In order to reach a wide stopband to reduce the interference outside the passing band, various designs of planar resonators have also been submitted to suppress the higher order harmonic frequencies of the designed center frequency. Various modifications to the traditional hairpin structure have been introduced to reduce large design area of hairpin designs. The stepped-impedance, slow-wave open-loop, and cross-coupled resonator structures have been studied to miniaturize the hairpin resonators. In this study, to suppress the spurious harmonic bands and further reduce the filter size, a modified hairpin-line bandpass filter with cross coupled structure is suggested by introducing the stepped impedance resonator design as well as the slow-wave open-loop resonator structure. In this way, very compact circuit size as well as very wide upper stopband can be achieved and realized in a Roger 4003C substrate. On the other hand, filters constructed with integrated circuit technology become more attractive for enabling the integration of the microwave system on a single chip (SOC). 
To examine the performance of this design structure in integrated circuit form, the filter is fabricated in the 0.15-μm GaAs pHEMT integrated circuit process. This pHEMT process can also provide much better circuit performance for high-frequency designs than those realized on a PCB. The design example was implemented in GaAs with a center frequency of 13.5 GHz to examine the high-frequency performance in detail. The occupied area is only about 1.09 × 0.97 mm². The ADS software is used to design these modified filters to suppress the first and second harmonics.
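As a quick sanity check on the reported figures, the passband edges implied by a 13.5 GHz center frequency and 11% fractional bandwidth, together with the harmonic resonances the design suppresses (assumed here to be 2f0 and 3f0), can be computed directly. This arithmetic sketch is illustrative only and is not part of the authors' design flow:

```python
# Passband and harmonic arithmetic for the reported filter
# (13.5 GHz center frequency, 11% fractional bandwidth).
f0 = 13.5e9           # center frequency in Hz
fbw = 0.11            # fractional bandwidth
bw = f0 * fbw         # absolute bandwidth in Hz
f_low = f0 - bw / 2   # lower passband edge
f_high = f0 + bw / 2  # upper passband edge

# Harmonic resonances suppressed by the stepped-impedance /
# slow-wave design (assumed to sit at 2*f0 and 3*f0).
harmonics = [2 * f0, 3 * f0]

print(f"Passband: {f_low/1e9:.4f}-{f_high/1e9:.4f} GHz (BW {bw/1e9:.3f} GHz)")
print("Suppressed harmonics (GHz):", [h / 1e9 for h in harmonics])
```

With these numbers, the passband spans roughly 12.76 to 14.24 GHz, and the suppressed spurious bands sit near 27 and 40.5 GHz.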

Keywords: microstrip resonator, bandpass filter, harmonic suppression, GaAs

Procedia PDF Downloads 326
510 A Brief Review on the Relationship between Pain and Sociology

Authors: Hanieh Sakha, Nader Nader, Haleh Farzin

Abstract:

Introduction: Throughout history, pain theories have been dominated by biomedicine, especially in their diagnostic and treatment aspects. Yet the feeling of pain is not only a personal experience; it is also shaped by social background and thus involves extensive systems of signals. The challenges in the emotional and sentimental dimensions of pain originate in scientific medicine (whose dominant theory is also referred to as the specificity theory); however, this theory underwent some alterations with the emergence of physiology. Fifty years after the specificity theory, Von Frey proposed the theory of cutaneous senses (building on Müller's concept: the combined sensation of four major skin receptors leading to a specific sensation). The pain pathway was held to be composed of the spinothalamic tracts and the thalamus, with an inhibitory effect on the cortex. Pain is referred to as a series of unique experiences with various causes and qualities. Despite the gate control theory, the biological aspect still overshadows the social aspect. Vrancken provided a more extensive definition of pain and identified five approaches: somatico-technical, dualistic body-oriented, behaviorist, phenomenological, and consciousness approaches. The Western model combined the physical, emotional, and existential aspects of the human body. Kotarba, on the other hand, questioned the basic origins of chronic pain. Freund developed a sociological approach to emotions, engaging with the Durkheimian tradition. Lynch provided evidence of a correlation between cardiovascular disease and emotionally life-threatening events. Helman proposed a distinction between private and public pain. Conclusion: Considering the emotional aspect of pain could lead to effective emotional and social responses to pain. By contrast, the theory of embodiment is based on the sociological view of health and illness. 
Social epidemiology shows an unequal distribution of health, illness, and disability among various social groups. Social support and socio-cultural status can result in different types of pain; for example, the status of athletes might shape their pain experiences. Gender is one of the important contributing factors affecting the type of pain (e.g., females are more likely to seek health services for pain relief). Chronic non-cancer pain (CNCP) has become a serious public health issue affecting more than 70 million people globally, exacerbated by the lack of awareness about chronic pain management among the general population.

Keywords: pain, sociology, sociological, body

Procedia PDF Downloads 70
509 Analyzing Brand Related Information Disclosure and Brand Value: Further Empirical Evidence

Authors: Yves Alain Ach, Sandra Rmadi Said

Abstract:

An extensive review of the literature on brands shows that little research has focused on the nature and determinants of the information disclosed by companies about the brands they own and use. The objective of this paper is to address this issue. More specifically, the aim is to characterize the nature of the information companies disclose when estimating the value of their brands, and to identify the determinants of that information according to the company characteristics most frequently tested by previous studies on the disclosure of intangible capital, by studying the practices of a sample of 37 French companies. Our findings suggest that companies prefer to communicate accounting, economic, and strategic information about their brands rather than financial information. The analysis of the determinants of the information disclosed on brands leads to the conclusion that groups which operate internationally and have chosen a category 1 auditing firm tend to communicate more information to investors in their annual reports. Our study points out that the sector is not an explanatory variable for voluntary brand disclosure, unlike previous studies on intangible capital. Our study is distinguished by examining an element little studied in the financial literature, namely the determinants of brand-related information. With regard to the effect of size on brand-related information disclosure, our research does not confirm this link. Many authors point out that large companies tend to publish more voluntary information in order to respond to stakeholder pressure. Our study also establishes that the relationship between the supply of brand information and performance is insignificant. 
This relationship was already controversial in previous research, which showed that higher profitability motivates managers to provide more information, as this strengthens investor confidence and may increase managers' compensation. Our main contribution concerns the nature of the inherent characteristics of the companies that disclose the most information about brands. Our results show the absence of a link between size and industry, on the one hand, and the supply of brand information, on the other, contrary to previous research. Our analysis highlights three types of information disclosed about brands: accounting, economic, and strategic. We therefore question the reasons that may lead companies to voluntarily communicate mainly accounting, economic, and strategic information from one year to the next, rather than detailed information that would allow the financial value of their brands to be reconstituted. Our results can be useful for companies and investors. They highlight, to our surprise, the lack of financial information that would allow investors to arrive at a better valuation of brands. We believe that additional information is needed to improve the quality of accounting and financial information related to brands. The additional information provided in the special report that we recommend could be called a "report on intangible assets".

Keywords: brand related information, brand value, information disclosure, determinants

Procedia PDF Downloads 84
508 Making Unorganized Social Groups Responsible for Climate Change: Structural Analysis

Authors: Vojtěch Svěrák

Abstract:

Climate change ethics have recently shifted away from individualistic paradigms towards concepts of shared or collective responsibility. Despite this evolving trend, a noticeable gap remains: a lack of research exclusively addressing the moral responsibility of specific unorganized social groups. The primary objective of the article is to fill this gap. The article employs the structuralist methodological approach proposed by some feminist philosophers, utilizing structural analysis to explain the existence of social groups. The argument is made for the integration of this framework with the so-called forward-looking Social Connection Model (SCM) of responsibility, which ascribes responsibilities to individuals based on their participation in social structures. The article offers an extension of this model to justify the responsibility of unorganized social groups. The major finding of the study is that although members of unorganized groups are loosely connected, collectively they instantiate specific external social structures, share social positioning, and the notion of responsibility could be based on that. Specifically, if the structure produces harm or perpetuates injustices, and the group both benefits from and possesses the capacity to significantly influence the structure, a greater degree of responsibility should be attributed to the group as a whole. This thesis is applied and justified within the context of climate change, based on the asymmetrical positioning of different social groups. Climate change creates a triple inequality: in contribution, vulnerability, and mitigation. The study posits that different degrees of group responsibility could be drawn from these inequalities. 
Two social groups serve as case studies for the article. The first is the Pakistani lower class, consisting of people living below the national poverty line, with a low greenhouse gas emissions rate, severe climate change-related vulnerability due to the lack of adaptation measures, and very limited options to participate in the mitigation of climate change. The second is the so-called polluter elite, defined by its members' investments in polluting companies and high-carbon lifestyles, and thus with an interest in the continuation of the structures leading to climate change. The first group cannot be held responsible for climate change, but its interest lies in structural change, which it should collectively pursue. The responsibility of the second group, on the other hand, is significant and can be fulfilled through justified demands for political change. The proposed approach to group responsibility is suggested as a way to navigate climate justice discourse and environmental policies, thus helping with the sustainability transition.

Keywords: collective responsibility, climate justice, climate change ethics, group responsibility, social ontology, structural analysis

Procedia PDF Downloads 60
507 AI-Enabled Smart Contracts for Reliable Traceability in the Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

Thanks to advances in the ICT sector, the manufacturing industry collects vast amounts of data for monitoring product quality, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. On the other hand, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the incoming data. Specifically, autoregressive integrated moving average (ARIMA), long short-term memory (LSTM) and dense-based autoencoder, as well as generative adversarial network (GAN) models are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques to ensure that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. 
The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the groundwork for the deployment of smart contracts and of APIs that expose the functionality to end-users. The results of this work demonstrate that such a system can increase the quality of the end-products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify records about quality through the entire production chain and to take advantage of the multitude of monitoring records in their databases.
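The gating of sensor readings before on-chain ingestion can be illustrated with a deliberately simplified sketch. Here a rolling z-score detector stands in for the ARIMA/LSTM/GAN ensemble described above (it catches only point anomalies, not collective ones), and a salted SHA-256 hash stands in for the anonymized on-ledger pointer; the thresholds, salt, and sample readings are illustrative assumptions, not the authors' implementation:

```python
import hashlib
import statistics

def detect_point_anomalies(values, window=20, z_thresh=3.0):
    """Flag readings whose rolling z-score exceeds z_thresh.

    A simple stand-in for the ARIMA/LSTM/GAN ensemble described in
    the abstract: it only catches point anomalies (outliers).
    """
    flags = []
    for i, v in enumerate(values):
        history = values[max(0, i - window):i]
        if len(history) < 3:
            flags.append(False)  # not enough history to judge
            continue
        mu = statistics.mean(history)
        sigma = statistics.stdev(history) or 1e-9  # guard against zero spread
        flags.append(abs(v - mu) / sigma > z_thresh)
    return flags

def anonymized_pointer(reading, salt=b"line-7"):
    # Only this hash would be stored on the ledger;
    # the raw measurement stays off-chain.
    return hashlib.sha256(salt + repr(reading).encode()).hexdigest()

# Hypothetical temperature readings with one obvious spike.
readings = [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 19.8, 20.0, 95.0, 20.1]
flags = detect_point_anomalies(readings)
clean = [r for r, bad in zip(readings, flags) if not bad]
pointers = [anonymized_pointer(r) for r in clean]
```

In a full system, only the `pointers` for readings that pass the anomaly gate would be submitted to the smart contract.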

Keywords: blockchain, data quality, Industry 4.0, product quality

Procedia PDF Downloads 189
506 The Incoherence of the Philosophers as a Defense of Philosophy against Theology

Authors: Edward R. Moad

Abstract:

Al-Ghazali’s Tahāfut al-Falāsifa is widely construed as an attack on philosophy in favor of theological fideism. Consequently, he has been blamed for the ‘death of philosophy’ in the Muslim world. ‘Falsafa’, however, is not philosophy itself, but rather a range of philosophical doctrines mainly influenced by or inherited from Greek thought. In these terms, this work represents a defense of philosophy against what we could call falsafa-fideism. In the introduction, Ghazali describes his target audience as, not the falasifa, but a group of pretenders engaged in taqlid to a misconceived understanding of falsafa, including the belief that the falasifa were capable of demonstrative certainty in the field of metaphysics. He promises to use falsafa standards of logic (with which he independently agrees) to show that the falasifa failed to demonstratively prove many of their positions. Whether or not he succeeds in that, the exercise of subjecting alleged proofs to critical scrutiny is quintessentially philosophical, while uncritical adherence to a doctrine, in the name of its being ‘philosophical’, is decidedly unphilosophical. If we are to blame the intellectual decline of the Muslim world on someone’s ‘bad’ way of thinking, rather than on more material historical circumstances (which is already a mistake), then blame more appropriately rests with modernist Muslim thinkers who, under the influence of orientalism (and like Ghazali’s philosophical pretenders), mistook taqlid to the falasifa for philosophy itself. The discussion of the Tahāfut takes place in the context of an epistemic (and related social) hierarchy envisioned by the falasifa, corresponding to the faculties of the senses, the ‘estimative imagination’ (wahm), and the pure intellect, along with the respective forms of discourse – rhetoric, dialectic, and demonstration – appropriate to each category of that order. 
Al-Farabi in his Book of Letters describes a relation between dialectic and demonstration on the one hand, and theology and philosophy on the other. The latter two are distinguished by method rather than subject matter. Theology is that which proceeds dialectically, while philosophy is (or aims to be?) demonstrative. Yet, Al-Farabi tells us, dialectic precedes philosophy like ‘nourishment for the tree precedes its fruit.’ That is, dialectic is part of the process by which we interrogate common and imaginative notions in the pursuit of clearly understood first principles that we can then deploy in demonstrative argument. Philosophy is, therefore, something we aspire to through, and from a discursive condition of, dialectic. This stands in apparent contrast to the understanding of Ibn Sina, for whom one arrives at the knowledge of first principles through contact with the Active Intellect. It also stands in contrast to that of Ibn Rushd, who seems to think our knowledge of first principles can only come through reading Aristotle. In conclusion, based on Al-Farabi’s framework, Ghazali’s Tahāfut is truly an exercise in philosophy, and an effort to keep the door open for true philosophy in the Muslim mind against the threat of a developing kind of theology going by the name of falsafa.

Keywords: philosophy, incoherence, theology, Tahafut

Procedia PDF Downloads 161
505 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model

Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis

Abstract:

Magnetic navigation of drugs inside the human vessels is a very important concept, since the drug is delivered directly to the desired area. Consequently, the quantity of drug required to reach therapeutic levels is reduced, while the drug concentration at the targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles, where anti-tumor agents are loaded on the surface of the nanoparticles. The magnetic field required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors that influence the efficiency of magnetic nanoparticles in biomedical magnetic-driving applications are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered, i.e., the magnetic force from the MRI's main static magnetic field as well as the magnetic field gradient force from the dedicated propulsion gradient coils. The static field is responsible for the aggregation of the nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces among the aggregated nanoparticles and with the wall, and the Stokes drag force on each particle, are considered; only spherical particles are used in this study. In addition, the force due to gravity and the force due to buoyancy are included. Finally, the van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of the particles' motion. 
To determine the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field. At the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles as they are navigated by the magnetic field produced by the MRI device. Under the influence of fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. The platform can navigate the particles along the desired trajectory with an efficiency of 80-90%. On the other hand, a small number of particles stick to the walls and remain there for the rest of the simulation.
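The optimize-simulate-evaluate loop described above can be sketched with a toy stand-in: a plain elitist evolution strategy (not full CMA-ES, which additionally adapts its sampling covariance) searches for the constant gradient-field values that minimize the mean distance between a mock particle trajectory and a target path. The flow model, step sizes, and target path below are illustrative assumptions replacing the actual OpenFOAM simulation:

```python
import math
import random

def simulate_particle(gradients, steps=50):
    """Toy stand-in for the OpenFOAM flow + particle simulation:
    integrates a particle position under a constant gradient 'push'."""
    gx, gy = gradients
    x, y = 0.0, 0.0
    trajectory = []
    for _ in range(steps):
        x += 0.1 + 0.05 * gx  # baseline drift along the vessel + gradient push
        y += 0.05 * gy
        trajectory.append((x, y))
    return trajectory

def desired_trajectory(steps=50):
    # Hypothetical target path along the vessel centerline.
    return [(0.1 * (t + 1), 0.0) for t in range(steps)]

def fitness(gradients):
    """Mean distance between the simulated and the desired trajectory."""
    sim = simulate_particle(gradients)
    ref = desired_trajectory()
    return sum(math.dist(p, q) for p, q in zip(sim, ref)) / len(ref)

def evolve(pop_size=20, generations=40, sigma=0.5, seed=1):
    """Elitist evolution strategy; a simplified stand-in for CMA-ES."""
    rng = random.Random(seed)
    best = (2.0, -1.5)  # arbitrary initial gradient guess
    for _ in range(generations):
        candidates = [(best[0] + rng.gauss(0, sigma),
                       best[1] + rng.gauss(0, sigma)) for _ in range(pop_size)]
        candidates.append(best)  # keep the parent (elitism)
        best = min(candidates, key=fitness)
        sigma *= 0.95  # shrink the search radius over time
    return best, fitness(best)

best_gradients, error = evolve()
```

Each `fitness` call corresponds to one full CFD/particle simulation in the real platform, which is why an evaluation-efficient optimizer such as CMA-ES is the natural choice there.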

Keywords: artery, drug, nanoparticles, navigation

Procedia PDF Downloads 107
504 Healthy Architecture Applied to Inclusive Design for People with Cognitive Disabilities

Authors: Santiago Quesada-García, María Lozano-Gómez, Pablo Valero-Flores

Abstract:

The recent digital revolution, together with modern technologies, is changing the environment and the way people interact with inhabited space. However, the elderly are a very broad and varied group within society that has serious difficulties in understanding these modern technologies. Within this cluster, outpatients with cognitive disabilities, such as those suffering from Alzheimer's disease (AD), stand out. This population group is in constant growth, and it has specific requirements for its inhabited space. From the perspective of architecture, one of the health humanities, environments are designed to promote well-being and improve the quality of life for all. Buildings, as well as the tools and technologies integrated into them, must be accessible and inclusive and must foster health. In this new digital paradigm, artificial intelligence (AI) appears as an innovative resource to help this population group improve its autonomy and quality of life. Some experiences and solutions, such as those that interact with users through chatbots and voicebots, show the potential of AI in its practical application. In the design of healthy spaces, the integration of AI in architecture will allow the living environment to become a kind of 'exo-brain' that can compensate for certain cognitive deficiencies in this population. The objective of this paper is to address, from the discipline of neuroarchitecture, how modern technologies can be integrated into everyday environments and become an accessible resource for people with cognitive disabilities. For this, the methodology has a mixed structure. On the one hand, from an empirical point of view, the research carries out a review of the existing literature on the applications of AI to built space, following the foundations of the critical review. 
On the other hand, as unconventional architectural research, an experimental analysis is proposed based on people with AD as a data source, studying how the environment in which they live influences their regular activities. The results presented in this communication are part of the progress achieved in the competitive R&D&I project ALZARQ (PID2020-115790RB-I00). These outcomes are aimed at the specific needs of people with cognitive disabilities, especially those with AD, although, given the comfort and wellness that the solutions entail, they can also be extrapolated to society as a whole. As a provisional conclusion, it can be stated that, in the immediate future, AI will be an essential element in the design and construction of healthy new environments. The discipline of architecture has the compositional resources to, through this emerging technology, build an 'exo-brain' capable of becoming a personal assistant for inhabitants, one that interacts with them proactively and contributes to their general well-being. The main objective of this work is to show how this is possible.

Keywords: Alzheimer’s disease, artificial intelligence, healthy architecture, neuroarchitecture, architectural design

Procedia PDF Downloads 61
503 How Strategic Urban Design Promote Sustainable Urban Mobility: A Comparative Analysis of Cities from Global North and Global South

Authors: Rati Sandeep Choudhari

Abstract:

Mobility flows are considered one of the most important elements of urbanisation, with transport infrastructure serving as a backbone of urban fabrics. Although rapid urbanisation and changing land use patterns have led to an increase in urban mobility levels around the globe, mobility, in general, has become an unpleasant experience for city dwellers, making locations around the city inconvenient to access. While public transport features in almost every sustainable mobility plan in developing countries, its intermodality and integration with appropriate non-motorised transport infrastructure are often neglected. As a result, people choose to travel by private cars and two-wheelers, rendering public transit systems underutilised and encroaching on pedestrian space on streets, thus making urban mobility unsafe and inconvenient for a major section of society. Cities in the West, on the other hand, especially in Europe, depend heavily on inter-modal transit systems, allowing people to shift between metros, buses, trams, walking, and cycling to access even the remote locations of the city. Keeping accessibility as the focal point while designing urban mobility plans and policies, these cities have appropriately refined their urban form, optimised urban densities, developed multimodal transit systems, and adopted place-making strategies to foster a sense of place, thus improving the quality of the urban mobility experience. Using a qualitative research approach, the research looks in detail into the existing literature on the strategies that can be applied to improve the urban mobility experience for city dwellers. It further studies and draws out a comparative analysis of cities in both developed and developing parts of the world where these strategies have been used to create people-centric mobility systems, fostering a sense of place with respect to urban mobility, and examines how these strategies affected their social, economic, and environmental dynamics. 
The examples reflect on how different strategies, such as redefining land use patterns to form close-knit neighbourhoods, developing non-motorised transit systems, integrating them with public transport infrastructure, and adopting a place-making approach, have helped enhance the quality and experience of mobility infrastructure in cities. The research finally concludes by laying out strategies that cities of the Global South can adopt to develop future mobility systems in a people-centric and sustainable way.

Keywords: urban mobility, sustainable transport, strategic planning, people-centric approach

Procedia PDF Downloads 128
502 Investigating the Aerosol Load of Eastern Mediterranean Basin with Sentinel-5p Satellite

Authors: Deniz Yurtoğlu

Abstract:

Aerosols directly affect the radiative balance of the Earth by absorbing and/or scattering the solar radiation reaching the atmosphere, and indirectly affect the balance by acting as nuclei in cloud formation. The composition and the physical and chemical properties of aerosols vary depending on their sources and the time spent in the atmosphere. The Eastern Mediterranean Basin has a high aerosol load formed from different sources, such as anthropogenic activities, desert dust outbreaks, and sea salt spray, and the area is subject to atmospheric transport from other locations on Earth. This region, which includes the deserts of Africa and the Middle East and the Mediterranean Sea, is one of the areas most affected by climate change due to its location and atmospheric chemistry. This study aims to investigate the spatiotemporal variation of the aerosol load in the Eastern Mediterranean Basin between 2018 and 2022 with the help of ESA's (European Space Agency) new pioneering satellite, Sentinel-5P. TROPOMI (The TROPOspheric Monitoring Instrument), carried on this low-Earth-orbit satellite, is a UV (ultraviolet)-sensing spectrometer with a resolution of 5.5 km x 3.5 km that can make measurements even in a cloud-covered atmosphere. Using the Absorbing Aerosol Index data produced by this spectrometer and scripts written in Python that transform these data into images, it was seen that the majority of the aerosol load in the Eastern Mediterranean Basin is sourced from desert dust and anthropogenic activities. After retrieving the daily data and removing NaN values, the seasonal analyses match the expected aerosol variations, which are high in warm seasons and lower in cold seasons. Monthly analyses showed that, over four years, the Absorbing Aerosol Index increased in spring and winter by 92.27% (2019-2021) and 39.81% (2019-2022), respectively. 
In the summer and autumn seasons, on the other hand, decreases of 20.99% (2018-2021) and 0.94% (2018-2021), respectively, have been observed. The overall variation of the mean Absorbing Aerosol Index from TROPOMI between April 2018 and April 2022 reflects a decrease of 115.87% in the annual mean, from 0.228 to -0.036. However, when the data are analyzed using only the years with data from January to December, i.e., 2019 to 2021, there was an increase of 57.82% (from 0.108 to 0.171). This result can be interpreted as the effect of climate change on the aerosol load and also, more specifically, the effect of the forest fires that occurred in the summer months of 2021.
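The percentage changes quoted above follow from the annual means by the usual relative-change formula. Recomputing them from the rounded means given here yields about -115.79% and 58.33%, close to the reported -115.87% and 57.82%; the small differences presumably come from the unrounded underlying means:

```python
def percent_change(start, end):
    """Relative change as a percentage of the starting value."""
    return (end - start) / start * 100.0

# Annual-mean AAI figures quoted in the abstract.
overall = percent_change(0.228, -0.036)    # April 2018 -> April 2022
full_years = percent_change(0.108, 0.171)  # 2019 -> 2021 annual means

print(f"Overall change: {overall:.2f}%")      # about -115.79%
print(f"Full-year change: {full_years:.2f}%")  # about 58.33%
```

Note that a decrease larger than 100% is possible here only because the index crosses zero: the AAI is a signed quantity, with negative values typically associated with non-absorbing aerosols.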

Keywords: aerosols, Eastern Mediterranean Basin, Sentinel-5P, TROPOMI, aerosol index, remote sensing

Procedia PDF Downloads 67
501 The Meaning Structures of Political Participation of Young Women: Preliminary Findings in a Practical Phenomenology Study

Authors: Amanda Aliende da Matta, Maria del Pilar Fogueiras Bertomeu, Valeria de Ormaechea Otalora, Maria Paz Sandin Esteban, Miriam Comet Donoso

Abstract:

This communication presents the preliminary emerging themes in a research project on the political participation of young women. The study follows a qualitative methodology, in particular the applied hermeneutic phenomenological method, and the general objective of the research is to give an account of the experience of political participation as a young woman. The study participants are women aged 18 to 35 who have experience in political participation. The data collection techniques are the descriptive story and the phenomenological interview. The first methodological steps have been: 1) collecting and selecting stories of lived experience in political participation, 2) selecting descriptions of lived experience (DLEs) in political participation from the chosen stories, 3) preparing phenomenological interviews from the selected DLEs, and 4) conducting phenomenological thematic analysis (PTA) of the DLEs. We have so far initiated the PTA on 5 vignettes. Hermeneutic phenomenology as a research approach is based on phenomenological philosophy and applied hermeneutics. Phenomenology is a descriptive philosophy of pure experience and essences, through which we seek to capture an experience at its origins without categorizing, interpreting, or theorizing it. Hermeneutics, on the other hand, may be defined as a philosophical current that can be applied to data analysis. Max van Manen wrote that hermeneutic phenomenology is a method of abstemious reflection on the basic structures of the lived experience of human existence. In hermeneutic phenomenology we focus, then, on the way we experience “things” in the first person, seeking to capture the world exactly as we experience it, not as we categorize or conceptualize it. In this study, the empirical methods used were the written lived experience description and the conversational interview. For these short stories, participants were asked: “What was your lived experience of participation in politics as a young woman? 
Can you tell me any stories or anecdotes that you think exemplify or typify your experience?”. The questions were accompanied by a list of guidelines for writing descriptive vignettes, and the analytical method was PTA. Among the provisional results, we found preliminary emerging themes, which, as the investigation advances, could develop into meaning structures of the political participation of young women. They are the following: - Complicity may be inherent/essential in political participation as a young woman; - Feelings may be essential/inherent in political participation as a young woman; - Hope may be essential in authentic political participation as a young woman; - Frustration may be essential in authentic political participation as a young woman; - Satisfaction may be essential in authentic political participation as a young woman; - There may be a tension between the individual and the collective that is inherent/essential in political participation as a young woman; - Political participation as a young woman may include moments of public demonstration.

Keywords: applied hermeneutic phenomenology, hermeneutics, phenomenology, political participation

Procedia PDF Downloads 99
500 Microstructural Interactions of Ag and Sc Alloying Additions during Casting and Artificial Ageing to a T6 Temper in a A356 Aluminium Alloy

Authors: Dimitrios Bakavos, Dimitrios Tsivoulas, Chaowalit Limmaneevichitr

Abstract:

Aluminium cast alloys of the Al-Si system are widely used for shape castings. Their microstructures can be further improved, on one hand, by alloying modification and, on the other, by optimised artificial ageing. In this project four hypoeutectic Al alloys, A356, A356+Ag, A356+Sc, and A356+Ag+Sc, have been studied. The interactions of Ag and Sc during solidification and artificial ageing at 170°C to a T6 temper have been investigated in detail. The evolution of the eutectic microstructure is studied by thermal analysis and interrupted solidification. The ageing kinetics of the alloys have been identified by hardness measurements. The precipitate phases, their number density, and their chemical composition have been analysed by means of transmission electron microscopy (TEM) and EDS analysis. Furthermore, the effect of solution heat treatment (SHT) on the Si eutectic particles of the four alloys has been investigated by means of optical microscopy and image analysis, and the UTS has been compared with that of the as-cast alloys. The results suggest that Ag additions significantly enhance the ageing kinetics of the A356 alloy. The formation of β” precipitates was kinetically accelerated, and increases of 8% and 5% in peak hardness have been observed compared to the base A356 and A356-Sc alloys, respectively. The EDS analysis demonstrates that Ag is present in the β” precipitate composition. After prolonged ageing for 100 hours at 170°C, A356-Ag exhibits 17% higher hardness than the other three alloys. During solidification, Sc additions change the macroscopic eutectic growth mode to the propagation of a defined eutectic front from the mould walls, opposite to the heat-flux direction. In contrast, Ag has no significant effect on the solidification mode, revealing a macroscopic eutectic growth similar to that of the A356 base alloy. However, the as-cast mechanical strength of the A356-Ag, A356-Sc, and A356+Ag+Sc alloys has increased by 5, 30, and 35 MPa, respectively.
The outcome is attributed to the refinement of the eutectic Si that takes place, which is strong in the A356-Sc alloy and more pronounced when silver and scandium are combined. Moreover, after SHT the alloy with the highest mechanical strength is the one with Ag additions, in contrast to the as-cast condition, where the Sc and Sc+Ag alloys were the strongest. The increase in strength is mainly attributed to the dissolution of grain-boundary precipitates, the increase of the solute content in the matrix, and the spheroidisation and coarsening of the eutectic Si. We can therefore conclude that, for an A356 hypoeutectic alloy, Ag additions exhibit a refining effect on the eutectic Si which is improved when combined with Sc; Ag also enhances the ageing kinetics, increases the hardness, and retains its strength under prolonged artificial ageing in an Al-7Si-0.3Mg hypoeutectic alloy. Finally, the addition of Sc is beneficial due to the refinement of the α-Al grains and the modification-refinement of the eutectic Si, increasing the strength of the as-cast product.

Keywords: ageing, casting, mechanical strength, precipitates

Procedia PDF Downloads 497
499 Increasing the Dialogue in Workplaces Enhances the Age-Friendly Organisational Culture and Helps Employees Face Work-Related Dilemmas

Authors: Heli Makkonen, Eini Hyppönen

Abstract:

The ageing of employees, the availability of workforce, and employees’ engagement in work are today’s challenges in the field of health care and social services, and particularly in the care of older people. It is therefore important to enhance both the attractiveness of work in the field of older people’s care and the retention of employees in the field, and also to pay attention to the length of careers. The length of careers can be affected, for example, by developing an age-friendly organisational culture. Changing the organisational culture in a workplace is, however, a slow process which requires engagement from employees and enhanced dialogue between them. This article presents an example of age-friendly organisational culture in an older people’s care unit and presents the results of developing this organisational culture to meet the identified development challenges. In this research-based development process, the cycles used in action research were applied. Three workshops were arranged for employees in a service home for older people. The workshops served as interventions, and the employees and their manager were given several consecutive assignments to be completed between the workshops. In addition to the workshops, the employees benchmarked two other service homes. In the workshops, data were collected by observing and documenting the conversations. Thematic analysis was then used to identify the factors connected to an age-friendly organisational culture. By analysing the data and comparing it to previous studies, we recognised some dilemmas that were hindering or enhancing the attractiveness of work and the retention of employees in this service home. After each intervention, the process was reflected on and evaluated, and the next steps were planned. The areas of development identified in the study were related to, for example, the flexibility of work, holistic ergonomics, the physical environment at the workplace, and the workplace culture.
Some of the areas of development were taken over by the work community and addressed in cooperation with, for example, occupational health care. We encouraged the work community, and the employees provided us with information about their progress. In this research project, the focus was on the development of the workplace culture and, in particular, of the culture of interaction. The workshops revealed employees’ attitudes and strong opinions, which can be a challenge from the point of view of the attractiveness of work and the retention of employees in the field. On the other hand, the data revealed that the work community has an interest in developing its dialogue. Enhancing the dialogue gave the employees the opportunity and resources to face even challenging dilemmas related to the attractiveness of work and the retention of employees in the field. Psychological safety was also enhanced at the same time. The results of this study are part of a broader study that aims at building a model for extending older employees’ careers.

Keywords: age-friendliness, attractiveness of work, dialogue, older people, organisational culture, workplace culture

Procedia PDF Downloads 76
498 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their modelling methodologies. The model-driven approaches are based on mechanistic crop modelling: they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such dynamical systems is difficult, because it amounts to a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yields in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of this model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression, and partial least squares regression) and machine learning methods (Random Forest, k-nearest neighbours, artificial neural networks, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity.
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
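The evaluation protocol described above can be sketched briefly: 5-fold cross-validation of a Random Forest regressor, scored with the mean absolute error of prediction (MAEP). This is a minimal illustration, not the paper's pipeline; the features and yields below are synthetic stand-ins for the USDA county-scale dataset, and the hyperparameters are assumptions.

```python
# Sketch of 5-fold cross-validation with Random Forest and MAEP.
# Synthetic data only: the real study used 720 USDA records with
# associated climatic covariates, which are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 720  # same record count as the paper's corn-yield dataset
X = rng.normal(size=(n, 6))  # stand-in climate predictors
y = 100 + X @ rng.normal(size=6) + rng.normal(scale=5, size=n)  # stand-in yield

kf = KFold(n_splits=5, shuffle=True, random_state=0)
maep_folds = []
for train_idx, test_idx in kf.split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    # MAEP: mean absolute error relative to the observed yield, in percent
    maep_folds.append(np.mean(np.abs(pred - y[test_idx]) / y[test_idx]) * 100)

print(f"MAEP: {np.mean(maep_folds):.2f}%")
```

Averaging the per-fold errors, as done here, is the standard way to summarise a k-fold run; swapping the absolute-error line for a squared-error one would give RMSEP instead.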

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 231
497 Polymeric Composites with Synergetic Carbon and Layered Metallic Compounds for Supercapacitor Application

Authors: Anukul K. Thakur, Ram Bilash Choudhary, Mandira Majumder

Abstract:

In this technologically driven world, it is necessary to develop better, faster, and smaller electronic devices for various applications to keep pace with fast-developing modern life. It is also necessary to develop sustainable and clean sources of energy in an era when the environment is threatened by pollution and its severe consequences. The supercapacitor has gained tremendous attention in recent years because of its attractive properties: it is essentially maintenance-free, offers high specific power and high power density, has excellent pulse charge/discharge characteristics, exhibits a long cycle life, requires a very simple charging circuit, and operates safely. Binary and ternary composites of conducting polymers with carbon and layered transition-metal dichalcogenides have shown tremendous progress in the last few decades. Compared with bulk conducting polymers, such composites have gained more attention because of their high electrical conductivity, large surface area, short ion-transport lengths, and superior electrochemical activity. These properties make them very suitable for several energy storage applications. Carbon materials, on the other hand, have also been studied intensively, owing to their large specific surface area, very light weight, excellent chemical-mechanical properties, and wide operating temperature range. They have been extensively employed in the fabrication of carbon-based energy storage devices and as electrode materials in supercapacitors. Incorporating carbon materials into polymers increases the electrical conductivity of the resulting polymeric composite, owing to the high electrical conductivity, high surface area, and interconnectivity of the carbon.
Furthermore, polymeric composites based on layered transition-metal dichalcogenides such as molybdenum disulfide (MoS2) are also considered important, because these dichalcogenides are thin indirect-band-gap semiconductors with a band gap of around 1.2 to 1.9 eV. Among the various 2D materials, MoS2 has received much attention because of its unique structure, consisting of a graphene-like hexagonal arrangement of Mo and S atoms stacked layer by layer to give S-Mo-S sandwiches held together by weak van der Waals forces. It shows higher intrinsic fast-ionic conductivity than oxides and higher theoretical capacitance than graphite.

Keywords: supercapacitor, layered transition-metal dichalcogenide, conducting polymer, ternary, carbon

Procedia PDF Downloads 256
496 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (the position and orientation of the beam as a function of time), that yield a prescribed solution (a freeform surface). This inverse problem is well understood for conventional machining, because the cutting-tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam with particular characteristics (e.g., energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the material removal vary with the dwell time, but any acceleration or deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the generated surface. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process in which the etched material depends on the feed speed of the jet at each instant of the process. PLA processes, on the other hand, are usually described as discrete systems, and the total removed material is calculated by summing the different pulses shot during the process. The overlap of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, consecutive shots are close enough that the behaviour is similar to that of a continuous process. Using this approximation, a generic continuous model can describe both processes.
The inverse problem for this kind of process is usually solved by simply controlling the dwell time in proportion to the required depth of milling at each pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved here by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite-difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
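The dwell-time formulation above can be illustrated in one dimension. Under the shallow-etch linear assumption the abstract mentions, the depth profile is a footprint matrix applied to the dwell times, and the gradient of the misfit with respect to the dwell times is the transpose (adjoint) of that matrix applied to the residual. The beam width, grid, and target profile below are illustrative assumptions, not the paper's models of AWJM or PLA.

```python
# Sketch of a linearised 1D energy-beam inverse problem solved by
# projected gradient descent. The gradient K.T @ (K @ t - target) is
# the discrete adjoint of the linear forward model depth = K @ t.
# Footprint width, grid, and target are hypothetical.
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)
sigma = 0.03  # assumed beam footprint width
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma**2))
K /= K.sum(axis=1, keepdims=True)  # normalised removal footprint per row

# Prescribed depth profile: a smooth bump over a base etch depth
target = 0.2 + 0.8 * np.exp(-((x - 0.5) ** 2) / (2 * 0.1**2))

t = np.zeros(n)  # dwell times, the control variable
lr = 0.5
for _ in range(2000):
    residual = K @ t - target      # forward model minus prescribed surface
    grad = K.T @ residual          # discrete adjoint gradient
    t = np.maximum(t - lr * grad, 0.0)  # projected step: dwell time >= 0

print("max depth error:", float(np.max(np.abs(K @ t - target))))
```

The projection step encodes the physical constraint that dwell time cannot be negative, which is exactly why a naive linear deconvolution can fail for sharp or deep features: the unconstrained solution may demand negative dwell.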

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 275
495 Achieving Net Zero Energy Building in a Hot Climate Using Integrated Photovoltaic and Parabolic Trough Collectors

Authors: Adel A. Ghoneim

Abstract:

In most existing buildings in hot climate, cooling loads lead to high primary energy consumption and consequently high CO2 emissions. These can be substantially decreased with integrated renewable energy systems. Kuwait is characterized by its dry hot long summer and short warm winter. Kuwait receives annual total radiation more than 5280 MJ/m2 with approximately 3347 h of sunshine. Solar energy systems consist of PV modules and parabolic trough collectors are considered to satisfy electricity consumption, domestic water heating, and cooling loads of an existing building. This paper presents the results of an extensive program of energy conservation and energy generation using integrated photovoltaic (PV) modules and parabolic trough collectors (PTC). The program conducted on an existing institutional building intending to convert it into a Net-Zero Energy Building (NZEB) or near net Zero Energy Building (nNZEB). The program consists of two phases; the first phase is concerned with energy auditing and energy conservation measures at minimum cost and the second phase considers the installation of photovoltaic modules and parabolic trough collectors. The 2-storey building under consideration is the Applied Sciences Department at the College of Technological Studies, Kuwait. Single effect lithium bromide water absorption chillers are implemented to provide air conditioning load to the building. A numerical model is developed to evaluate the performance of parabolic trough collectors in Kuwait climate. Transient simulation program (TRNSYS) is adapted to simulate the performance of different solar system components. In addition, a numerical model is developed to assess the environmental impacts of building integrated renewable energy systems. Results indicate that efficient energy conservation can play an important role in converting the existing buildings into NZEBs as it saves a significant portion of annual energy consumption of the building. 
The first phase results in an energy conservation of about 28% of the building consumption. In the second phase, the integrated PV completely covers the lighting and equipment loads of the building. On the other hand, parabolic trough collectors of optimum area of 765 m2 can satisfy a significant portion of the cooling load, i.e about73% of the total building cooling load. The annual avoided CO2 emission is evaluated at the optimum conditions to assess the environmental impacts of renewable energy systems. The total annual avoided CO2 emission is about 680 metric ton/year which confirms the environmental impacts of these systems in Kuwait.
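The two summary quantities reported above, the solar fraction of the cooling load and the avoided CO2 emission, reduce to simple ratios. The sketch below shows the arithmetic only; every input value is an illustrative assumption chosen so that the outputs land on the abstract's 73% and 680 t/year figures, and none of the inputs come from the study itself.

```python
# Back-of-the-envelope arithmetic for solar fraction and avoided CO2.
# All input numbers are hypothetical; only the two output figures
# (73%, 680 t/year) correspond to values quoted in the abstract.
annual_cooling_load_mwh = 1200.0        # assumed building cooling load
cooling_supplied_by_ptc_mwh = 876.0     # assumed trough-driven chiller output

solar_fraction = cooling_supplied_by_ptc_mwh / annual_cooling_load_mwh

displaced_grid_electricity_mwh = 1000.0  # assumed PV + chiller displacement
grid_emission_factor = 0.68              # assumed t CO2 per MWh of grid power
avoided_co2_t = displaced_grid_electricity_mwh * grid_emission_factor

print(f"solar fraction: {solar_fraction:.0%}")
print(f"avoided CO2: {avoided_co2_t:.0f} t/year")
```

In the study itself, both numerators would come from the TRNSYS simulation at the optimum collector area rather than from assumed constants.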

Keywords: building integrated renewable systems, Net-Zero energy building, solar fraction, avoided CO2 emission

Procedia PDF Downloads 611
494 Numerical Investigation of Multiphase Flow Structure for the Flue Gas Desulfurization

Authors: Cheng-Jui Li, Chien-Chou Tseng

Abstract:

This study adopts the Computational Fluid Dynamics (CFD) technique to build a multiphase flow numerical model in which the interface between the flue gas and the desulfurization liquid is traced by an Eulerian-Eulerian model. Inside the tower, the contact between the desulfurization liquid sprayed from the nozzles and the flue gas flow triggers chemical reactions that remove sulfur dioxide from the exhaust gas. Experimental observations of an industrial-scale plant show that the desulfurization mechanism depends on the degree of mixing between the flue gas and the desulfurization liquid. The mixing efficiency and the residence time, and hence the desulfurization efficiency, can be significantly increased by perforated sieve trays. The purpose of this research is therefore to investigate the flow structure around the sieve trays of a flue gas desulfurization tower by numerical simulation. The FGD tower studied here has an outlet at the top to discharge the clean gas and a deep tank at the bottom to collect the slurry liquid. In the main desulfurization zone, the desulfurization liquid and the flue gas form a complex mixing flow. This zone contains four perforated plates spaced 0.4 m from each other, with the spray array, comprising 33 nozzles, placed above the top sieve tray. Each nozzle injects desulfurization liquid consisting of a Mg(OH)2 solution. For each sieve tray, the outside diameter, the hole diameter, and the porosity are 0.6 m, 20 mm, and 34.3%, respectively. The flue gas enters the FGD tower through the space between the main desulfurization zone and the deep tank and finally leaves as clean gas, while the desulfurization liquid and the slurry fall to the bottom tank and are discharged as waste. When the desulfurization solution impacts a sieve tray, its downward momentum is transferred to the upper surface of the tray.
As a result, a thin liquid layer, the so-called slurry layer, develops above the sieve tray; the liquid volume fraction within this layer is around 0.3 to 0.7. The liquid phase therefore cannot be treated as a discrete phase under an Eulerian-Lagrangian framework. In addition, a liquid column passes through the sieve trays; this downward liquid column becomes narrower as it interacts with the upward gas flow. After the flue gas enters the main desulfurization zone, it flows upward (+y) in the region between the liquid column and the solid boundary of the FGD tower. As a result, the flue gas near the liquid column may be rolled down toward the slurry layer, developing a vortex or circulation zone between any two sieve trays. The vortex structure between two sieve trays produces a sufficiently large two-phase contact area and increases the number of times the flue gas interacts with the desulfurization liquid. The sieve trays thus improve the two-phase mixing, which may improve the SO2 removal efficiency.

Keywords: Computational Fluid Dynamics (CFD), Eulerian-Eulerian Model, Flue Gas Desulfurization (FGD), perforated sieve tray

Procedia PDF Downloads 284
493 Topic-Specific Differences and Lexical Variations in the Use of Violence Metaphors: A Cognitive Linguistic Study of YouTube Breast Cancer Discourse in New Zealand and Pakistan

Authors: Sara Malik, Andreea S. Calude, Joseph Ulatowski

Abstract:

This paper explores how speakers with breast cancer from New Zealand and Pakistan use violence metaphors to communicate the intensity of their experiences during various stages of illness. With a theoretical foundation in Conceptual Metaphor Theory and using the Metaphor Identification Procedure for metaphor analysis, this study investigates how speakers with breast cancer use violence metaphors in different cultural contexts. We collected a corpus of forty-six personal narratives from New Zealand and thirty-six from Pakistan, posted between 2011 and 2023 on YouTube by breast cancer organisations such as the NZ Breast Cancer Foundation and Pink Ribbon Pakistan. The data were transcribed using the Whisper AI tool and then curated to include only patients’ discourse, further organised into eight narrative topics: testing phase, treatment phase, remission phase, family support, campaigns and awareness efforts, government support and funding, general information, and religious discourse. In this talk, we discuss two aspects of the use of violence metaphors: a) differences in the use of violence metaphors across the narrative topics, and b) lexical variations in the choice of such metaphors. The findings suggest that violence metaphors were used differently across the stages of the illness experience. For instance, during the ‘testing phase’, violence metaphors were employed to convey a sense of punishment, as reflected in statements like ‘Feeling like it was a death sentence, an immediate death sentence’ (NZ example) and ‘Jese hi aap ko na breast cancer ka pata chalta hai logon ko yeh hona shuru ho jata hai ke oh bas ab to moat ka parwana mil gaya hai’ (As soon as you find out you have breast cancer, people start to feel that you have received a death warrant) (PK example).
Violence metaphors during the ‘treatment phase’, on the other hand, highlighted negative experiences related to chemotherapy, as seen in statements like ‘The first lot of chemo I had was disastrous’ (NZ example) and ‘...chemotherapy ke to, it's the worst of all, it's like a healing poison’ (chemotherapy, it's the worst of all, it's like a healing poison) (PK example). Second, lexical variations revealed how ‘sunburn’ (a common phenomenon in New Zealand) was used as a metaphor to describe the effects of radiotherapy, whereas the discourse from Pakistan used the more general term ‘burn’ instead. In this talk, we will explore the possible reasons behind the different word choices made by speakers from the two countries to describe the same process. This study contributes to understanding the use of violence metaphors across the narrative topics of the illness experience and explains how and why speakers from two different countries use lexical variations to describe the same process.

Keywords: metaphors, breast cancer discourse, cognitive linguistics, lexical variations, New Zealand English, Pakistani Urdu

Procedia PDF Downloads 31
492 Investigations of Effective Marketing Metric Strategies: The Case of St. George Brewery Factory, Ethiopia

Authors: Mekdes Getu Chekol, Biniam Tedros Kahsay, Rahwa Berihu Haile

Abstract:

The main objective of this study is to investigate marketing strategy practice in the case of the St. George Brewery Factory in Addis Ababa. One of the core things a business company needs in order to stay in business is a well-developed marketing strategy. The study assessed how marketing strategies were practised in the company to achieve its goals, aligned with segmentation, target market, positioning, and the marketing mix elements, so as to satisfy customer requirements. Using primary and secondary data, the study was conducted with both qualitative and quantitative approaches. The primary data were collected through open- and closed-ended questionnaires. Given that the population was small, the respondents were selected by census. The findings show that the company used all 4 Ps of the marketing mix in its marketing strategies and provided quality products at affordable prices while promoting its products through intensive and effective advertising. Product availability and accessibility are admirable, with both direct and indirect distribution channels in use. The company has identified its target customers, and its market segmentation practice is based on geographical location. Communication between the marketing department and other departments is very effective. The model's adjusted R2 shows that product, price, promotion, and place explain 61.6% of the variance in marketing strategy practice; the remaining 38.4% of the variation in the dependent variable is explained by factors not included in this study. The results reveal that all four independent variables, product, price, promotion, and place, have positive beta coefficients, indicating that the predictor variables have a positive effect on the dependent variable, marketing strategy practice.
Even though the marketing strategies of the company are effectively practised, the company faces some problems while implementing them: infrastructure problems, economic problems, intensive competition in the market, a shortage of raw materials, seasonality of consumption, socio-cultural problems, and the time and cost of creating customer awareness. Finally, the authors suggest that the company develop a longer-range view and implement a more structured approach to obtaining information about potential customers, competitors' actions, and market intelligence within the industry. We also recommend that future work increase the sample size and include additional marketing factors.
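The regression reported above, the dependent variable regressed on the four marketing-mix predictors with an adjusted R2, can be sketched with ordinary least squares. The data below are synthetic stand-ins for the survey responses, which are not public, so the coefficients and the adjusted R2 will not match the study's 61.6%.

```python
# Sketch of OLS with four predictors (product, price, promotion, place)
# and adjusted R-squared. Synthetic data; the true coefficients here
# are made positive to mirror the sign pattern the abstract reports.
import numpy as np

rng = np.random.default_rng(1)
n, k = 120, 4
X = rng.normal(size=(n, k))                 # product, price, promotion, place
beta_true = np.array([0.5, 0.3, 0.4, 0.2])  # all positive, as in the finding
y = X @ beta_true + rng.normal(scale=0.8, size=n)

X1 = np.column_stack([np.ones(n), X])       # add intercept column
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)

resid = y - X1 @ beta_hat
ss_res = resid @ resid
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)  # penalise for 4 predictors

print("slope coefficients:", np.round(beta_hat[1:], 2))
print(f"adjusted R^2: {adj_r2:.3f}")
```

The adjusted R2 is always at most the raw R2, since the penalty term grows with the number of predictors; that is why it is the appropriate figure to quote when four regressors are in the model.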

Keywords: marketing strategy, market segmentation, target marketing, market positioning, marketing mix

Procedia PDF Downloads 60