Search results for: higher order inertia
924 CSR Communication Strategies: Stakeholder and Institutional Theories Perspective
Authors: Stephanie Gracelyn Rahaman, Chew Yin Teng, Manjit Singh Sandhu
Abstract:
Corporate scandals have made stakeholders apprehensive of large companies, and stakeholders now expect greater transparency in CSR matters. However, companies find it challenging to communicate CSR strategically to the intended stakeholders and in the process may fall short of maximizing their CSR efforts. Given that stakeholders have the ability either to reward good companies or to take legal action against, or boycott, corporate brands that do not act in a socially responsible manner, companies must create a shared understanding of their CSR activities. As a result, communication has become a strategy for many companies to demonstrate CSR engagement and to minimize stakeholder skepticism. The main objective of this research is to examine the types of CSR communication strategies and the predictors that guide them. Employing Morsing and Schultz's guide on CSR communication strategies, the study integrates stakeholder and institutional theory to develop a conceptual framework. The conceptual framework hypothesizes that stakeholder (instrumental and normative) and institutional (regulatory environment, nature of business, mimetic intention, CSR focus, and corporate objectives) dimensions drive CSR communication strategies. Preliminary findings from semi-structured interviews in Malaysia are consistent with the conceptual model in that stakeholder and institutional expectations guide CSR communication strategies. Findings show that most companies use two-way communication strategies. Companies that identified employees, the public, or customers as key stakeholders have started to embrace social media to keep in sync with new trends of communication, especially for Generation Y, which they treat as a priority audience. Some companies creatively use multiple communication channels because they recognize that different stakeholders favor different channels.
It therefore appears that companies use two-way communication strategies to complement the perceived limitations of one-way strategies, as some companies prefer a more interactive platform to engage stakeholders strategically in CSR communication. In addition to stakeholders, institutional expectations also play a vital role in influencing CSR communication. Owing to industry peer pressure and corporate objectives (such as attracting international investors and customers), companies may be more driven to excel in social performance. For these reasons, companies tend to go beyond the basic mandatory requirements, excel in CSR activities, and become known as companies that champion CSR. In conclusion, companies use more two-way than one-way communication, and they use a combination of one- and two-way communication to target different stakeholders, as a result of stakeholder and institutional dimensions. Finally, in order to find out whether the conceptual framework actually fits the Malaysian context, companies' responses on the organizational outcomes they expect from communicating CSR were gathered from the interview transcripts. The findings show some of the key organizational outcomes (visibility and brand recognition, a responsible image, attracting prospective employees, positive word-of-mouth, etc.) that companies in Malaysia expect from CSR communication. Based on these findings, the conceptual framework has been refined to incorporate the newly identified organizational outcomes.
Keywords: CSR communication, CSR communication strategies, stakeholder theory, institutional theory, conceptual framework, Malaysia
Procedia PDF Downloads 291
923 An Analysis of Economical Drivers and Technical Challenges for Large-Scale Biohydrogen Deployment
Authors: Rouzbeh Jafari, Joe Nava
Abstract:
This study includes learnings from engineering practice normally performed on large-scale biohydrogen processes. If scale-up is done properly, biohydrogen can be a reliable pathway for biowaste valorization. Most studies on biohydrogen process development have used model feedstocks to investigate process key performance indicators (KPIs). This study does not intend to compare different technologies on model feedstocks; rather, it reports economic drivers and technical challenges, which helps in developing a road map for expanding biohydrogen deployment in Canada. BBA is a consulting firm responsible for the design of hydrogen production projects. Through executing these projects, work has been performed to identify, register, and mitigate technical drawbacks of large-scale hydrogen production. Those learnings have been applied in this study to the biohydrogen process. Using data collected through a comprehensive literature review, a base case was taken as a reference, and several case studies were performed. Critical parameters of the process were identified, and through common engineering practice (process design, simulation, cost estimation, and life cycle assessment) the impact of these parameters on the commercialization risk matrix and on Class 5 cost estimates is reported. The process considered in this study is dark fermentation of food waste and woody biomass. To propose a reliable road map for developing a sustainable biohydrogen production process, the impact of critical parameters was studied on the end-to-end process. These parameters were 1) feedstock composition, 2) feedstock pre-treatment, 3) unit operation selection, and 4) the multi-product concept. A few emerging technologies were also assessed, such as photo-fermentation, integrated dark fermentation, and the use of ultrasound and microwaves to break down the feedstock's complex matrix and increase overall hydrogen yield.
To report the impact of each parameter properly, the KPIs were identified as 1) hydrogen yield, 2) energy consumption, 3) secondary waste generated, 4) CO2 footprint, 5) product profile, 6) $/kg-H2, and 7) environmental impact. The feedstock is the main parameter defining the economic viability of biohydrogen production. Through parametric studies, it was found that biohydrogen production favors feedstocks with higher carbohydrate content. The feedstock composition was varied by increasing one critical element (such as carbohydrate) and monitoring the evolution of the KPIs. Different cases were studied with diverse feedstocks, such as energy crops, wastewater sludge, and lignocellulosic waste. The base case process was applied to obtain reference KPI values, and modifications such as pretreatment and feedstock mix-and-match were implemented to investigate changes in the KPIs. The complexity of the feedstock is the main bottleneck to successful commercial deployment of the biohydrogen process as a reliable pathway for waste valorization. Hydrogen yield, reaction kinetics, and the performance of key unit operations are highly impacted as the feedstock composition fluctuates over the lifetime of the process or from one case to another. In this context, the multi-product concept becomes more reliable: the process is not designed to produce only one target product, such as biohydrogen, but will have two or more products (biohydrogen and biomethane, or biochemicals). This new approach is being investigated by the BBA team, and the results will be shared in another scientific contribution.
Keywords: biohydrogen, process scale-up, economic evaluation, commercialization uncertainties, hydrogen economy
Procedia PDF Downloads 114
922 Verification of Low-Dose Diagnostic X-Ray as a Tool for Relating Vital Internal Organ Structures to External Body Armour Coverage
Authors: Natalie A. Sterk, Bernard van Vuuren, Petrie Marais, Bongani Mthombeni
Abstract:
Injuries to the internal structures of the thorax and abdomen remain a leading cause of death among soldiers. Body armour is a standard-issue piece of military equipment designed to protect the vital organs against ballistic and stab threats. When configured for maximum protection, the excessive weight and size of the armour may limit soldier mobility and increase physical fatigue and discomfort. Providing soldiers with more armour than necessary may, therefore, hinder their ability to react rapidly in life-threatening situations. The capability to determine the optimal trade-off between the amount of essential anatomical coverage and hindrance to soldier performance may significantly enhance the design of armour systems. The current study aimed to develop and pilot a methodology for relating internal anatomical structures to actual armour plate coverage in real time using low-dose diagnostic X-ray scanning. Several pilot scanning sessions were held at the Lodox Systems (Pty) Ltd head office in South Africa. Testing involved using the Lodox eXero-dr to scan dummy trunk rigs at various angles and heights of measurement, as well as human participants wearing correctly fitted body armour while positioned in supine, prone shooting, seated, and kneeling shooting postures. Sizing and metrics obtained from the Lodox eXero-dr were then verified against a board of known dimensions. Results indicated that the low-dose diagnostic X-ray has the capability to clearly identify the vital internal structures of the aortic arch, heart, and lungs in relation to the position of the external armour plates. Further testing is still required in order to fully and accurately identify the inferior liver boundary, inferior vena cava, and spleen. The scans produced in the supine, prone, and seated postures provided superior image quality over the kneeling posture.
The X-ray source and detector distances from the object must be standardised to control for possible magnification changes and to allow comparison. To account for this, specific scanning heights and angles were identified to allow parallel scanning of the relevant areas. Low-dose diagnostic X-ray provides a non-invasive, safe, and rapid technique for relating vital internal structures to external structures. This capability can be used for re-evaluating the anatomical coverage required for essential protection while optimising armour design and fit for soldier performance.
Keywords: body armour, low-dose diagnostic X-ray, scanning, vital organ coverage
Procedia PDF Downloads 128
921 Benjaminian Translatability and Elias Canetti's Life Component: The Other German Speaking Modernity
Authors: Noury Bakrim
Abstract:
Translatability is one of Walter Benjamin's most influential notions. It in some sense represents the philosophy of language and history of what we have coined 'the other German-speaking modernity', which could be shaped as a parallel thought form to the Marxian-Hegelian philosophy of history represented by the Frankfurt School. On the other hand, we should consider the influence of the plural German-speaking identity and of the Nietzschean and Goethean heritage, the latter being focused on a positive will to power: the humanised human being. With the Benjaminian notion of translatability (Übersetzbarkeit) in perspective, defined as a permanent internal hermeneutical possibility as well as the phenomenological potential of a translation relation, we are in fact touching the very double limit of both historical and linguistic reason. By life component, we mean the changing conditions of genetic and neurolinguistic post-partum functions, to be grasped as an individuation beyond the historical determinism and teleology of an event. It is, so to speak, the retrospective/introspective Canettian auto-fiction, the Benjaminian crystallization of the language experience in the now-time of writing/transmission. Furthermore, it raises various questions when it comes to translatability. They are basically related to psycholinguistically separate poles, the fatherly Ladino Spanish and the motherly Vienna German, but relate more particularly to the permanent ontological quest of world loss/belonging. Another level of this quest would be the status of Veza Canetti-Taubner Calderón, German-speaking author, Canetti's 'literary wife', the writer's love, his inverted logos, protective and yet controversial 'official private life partner', and the permanence of the Jewish experience in the exiled German language.
It sheds light on a traumatic relation to an inadequate/possible language facing the reconstruction of an oral life, on the unconscious split of the signifier, and above all on the frustrating status of writing in Canetti's work: using a suffering/suffered written German to save his remembered acquisition of his mother tongue by saving the vanishing spoken multilingual experience. Canetti's only novel, 'Die Blendung', designates that fictional referential dynamics focusing on the Nazi worldless horizon: the figure of Kien is an onomastic signifier, the anti-Canetti figure, the misunderstood legacy of Kant, the system without thought. Our postulate would be the double translatability of his auto-fiction inventing the bios oral signifier, based on the new praxemes created by Canetti's German as observed in the English and French translations of his memory corpus. We aim to conceptualize life component and translatability as two major features of a German-speaking modernity.
Keywords: translatability, language biography, presentification, bioeme, life order
Procedia PDF Downloads 427
920 Innovation Eco-Systems and Cities: Sustainable Innovation and Urban Form
Authors: Claudia Trillo
Abstract:
Regional innovation eco-systems are composed of a variety of interconnected urban innovation eco-systems, mutually reinforcing each other and making the whole territorial system successful. Combining principles drawn from new economic growth theory and from the socio-constructivist approach to economic growth with the new geography of innovation emerging from the networked nature of innovation districts, this paper explores the spatial configuration of urban innovation districts, with the aim of unveiling replicable spatial patterns and transferable portfolios of urban policies. While some authors suggest that cities should be considered ideal natural clusters, supporting cross-fertilization and innovation thanks to the physical setting they provide for the construction of collective knowledge, a considerable distance still persists between regional development strategies and urban policies. Moreover, while public and private policies supporting entrepreneurship normally consider innovation the cornerstone of any action aimed at uplifting the competitiveness and economic success of a certain area, a growing body of literature suggests that innovation is non-neutral and hence should be constantly assessed against equity and social inclusion. This paper draws on a robust qualitative empirical dataset gathered through four years of research conducted in Boston to provide readers with an evidence-based set of recommendations drawn from the lessons learned through the investigation of the chosen innovation districts in the Boston area. The evaluative framework used for assessing the overall performance of the chosen case studies stems from the rationale of the Habitat III Sustainable Development Goals. The concept of inclusive growth has been considered essential to assess the social innovation domain in each of the chosen cases.
The key success factors in the development of the Boston innovation ecosystem can be generalized as follows: 1) a quadruple-helix model embedded in the physical structure of the two cities (Boston and Cambridge), in which anchor Higher Education (HE) institutions continuously nurture the entrepreneurial environment; 2) an entrepreneurial approach on the part of the local governments, eliciting risk-taking and bottom-up civic participation in tackling key issues in the city; 3) a networking structure of intermediary actors supporting entrepreneurial collaboration, cross-fertilization, and co-creation, who collaborate at multiple scales, thus enabling positive spillovers from the stronger to the weaker contexts; 4) awareness of the socio-economic value of the built environment as an enabler of the cognitive networks that allow activation of the collective intelligence; 5) creation of civic-led spaces enabling grassroots collaboration and cooperation. Evidence shows that there is no single magic recipe for the successful implementation of place-based and social-innovation-driven strategies. On the contrary, a variety of place-grounded combinations of micro and macro initiatives, embedded in the social and spatial fine grain of places and encompassing a diversity of actors, can create the conditions that enable places to thrive and local economic activities to grow in a sustainable way.
Keywords: innovation-driven sustainable eco-systems, place-based sustainable urban development, sustainable innovation districts, social innovation, urban policies
Procedia PDF Downloads 110
919 Correlation Studies and Heritability Estimates among Onion (Allium Cepa L.) Cultivars of North Western Nigeria
Authors: L. Abubakar, B. M. Sokoto, I. U. Mohammed, M. S. Na’allah, A. Mohammad, A. N. Garba, T. S. Bubuche
Abstract:
Onion (Allium cepa var. cepa L.) is the most important species of the Allium group, belonging to the family Alliaceae and the genus Allium. It can be regarded as the single most important vegetable species in the world after tomatoes. Despite the similarities that bring the species together, the genus is a strikingly diverse one, with more than five hundred species, which are perennial and mostly bulbous plants. Of these, only seven species are in cultivation, and five are the most important of the cultivated Alliums. However, Allium cepa (onion) and Allium sativum (garlic) are the two major cultivated species grown all over the world, of which the onion crop is the most important. Heritability is defined as the proportion of the observed total variability that is genetic; its estimates from variance components separate genotypic variation from the total phenotypic differences and from environmental effects on individuals or families. Heritability estimates therefore guide the breeder with respect to the ease with which selection of traits can be carried out, while correlations explain the relationships between characters and suggest how selection among characters can be practiced in breeding programmes. Highly significant correlations have been reported between yield, maturity, rings/bulb, and storage loss in onions. Similarly, significant positive correlations exist between total bulb yield and plant height, leaf number/plant, bulb diameter, and bulb yield/plant. Moderate positive correlations have been observed between maturity date and yield, dry matter content was highly correlated with soluble solids, and high correlations were also observed between storage loss and soluble solids.
The objective of the study is to determine heritability estimates and correlations for characters among onion cultivars of North Western Nigeria. It is envisaged that this will assist in the breeding of superior onion cultivars within the zone. Thirteen onion cultivars were collected during a 2013 expedition covering north-western Nigeria and the southern part of Niger Republic, areas noted for onion production. The cultivars were evaluated at two locations, Sokoto in Sokoto State and Jega in Kebbi State, both in Nigeria, during the 2013/14 onion (dry) season under irrigation. Combined analysis of the results revealed that fresh bulb yield had highly significant positive correlations with bulb height and cured bulb yield, and significant positive correlations with plant height and bulb diameter. It also recorded a significant negative correlation with mean number of leaves/plant and a non-significant negative correlation with bolting percentage. Cured bulb yield (marketable yield) had highly significant positive correlations with mean bulb weight and fresh bulb yield/ha, and a significant positive correlation with bulb height. It also recorded a highly significant negative correlation with number of leaves/plant, a significant negative correlation with bolting percentage, a non-significant positive correlation with plant height, and a non-significant negative correlation with bulb diameter. High broad-sense heritability estimates were recorded for plant height, fresh bulb yield, number of leaves/plant, bolting percentage, and cured bulb yield. Medium to low broad-sense heritabilities were observed for mean bulb weight, plant height, and bulb diameter.
Keywords: correlation, heritability, onions, North Western Nigeria
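Broad-sense heritability, as used in this abstract, is the ratio of genotypic to phenotypic variance, H² = σ²G/(σ²G + σ²E), typically estimated from the variance components of a replicated cultivar trial via one-way ANOVA. The sketch below is a minimal illustration with invented replicate values, not the study's data or analysis pipeline.

```python
# Hedged sketch: broad-sense heritability (H^2) from a one-way ANOVA on a
# balanced genotype trial with r replicates per cultivar. Cultivar names
# and trait values are illustrative, not the study's data.

def broad_sense_heritability(trait_by_genotype):
    """trait_by_genotype: dict mapping genotype -> list of replicate values."""
    genotypes = list(trait_by_genotype.values())
    g = len(genotypes)                      # number of genotypes
    r = len(genotypes[0])                   # replicates per genotype (balanced)
    grand_mean = sum(sum(reps) for reps in genotypes) / (g * r)

    # Mean squares from one-way ANOVA
    ss_geno = r * sum((sum(reps) / r - grand_mean) ** 2 for reps in genotypes)
    ss_err = sum((x - sum(reps) / r) ** 2 for reps in genotypes for x in reps)
    ms_geno = ss_geno / (g - 1)
    ms_err = ss_err / (g * (r - 1))

    var_g = max((ms_geno - ms_err) / r, 0.0)   # genotypic variance component
    var_p = var_g + ms_err                      # phenotypic variance (plot basis)
    return var_g / var_p

# Illustrative fresh-bulb-yield replicates (t/ha) for three hypothetical cultivars
data = {"cv_A": [28.0, 30.0, 29.0], "cv_B": [18.0, 19.5, 18.5], "cv_C": [24.0, 23.0, 25.0]}
print(round(broad_sense_heritability(data), 3))
```

A value near 1 would be reported as high broad-sense heritability (trait differences are mostly genotypic), while a value near 0 indicates mostly environmental variation.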
Procedia PDF Downloads 406
918 Nature of Body Image Distortion in Eating Disorders
Authors: Katri K. Cornelissen, Lise Gulli Brokjob, Kristofor McCarty, Jiri Gumancik, Martin J. Tovee, Piers L. Cornelissen
Abstract:
Recent research has shown that body size estimation by healthy women is driven by independent attitudinal and perceptual components. The attitudinal component represents psychological concerns about the body, coupled with low self-esteem and a tendency towards depressive symptomatology, leading to over-estimation of body size independent of the Body Mass Index (BMI) someone actually has. The perceptual component is a normal bias known as contraction bias, which, for bodies, is dependent on actual BMI. Women with a BMI less than the population norm tend to overestimate their size, while women with a BMI greater than the population norm tend to underestimate their size. Women whose BMI is close to the population mean are most accurate. This is indexed by a regression of estimated BMI on actual BMI with a slope of less than one. It is well established that body dissatisfaction, i.e., an attitudinal distortion, leads to body size overestimation in eating disordered individuals. However, debate persists as to whether women with eating disorders may also suffer a perceptual body distortion. The current study therefore set out to ask whether women with eating disorders exhibit the normal contraction bias when they estimate their own body size. If they do not, this would suggest differences in the way that women with eating disorders process the perceptual aspects of body shape and size in comparison to healthy controls. One hundred healthy controls and 33 women with a history of eating disorders were recruited. Critically, it was ensured that both groups of participants represented comparable and adequate ranges of actual BMI (e.g., ~18 to ~40). Of those with eating disorders, 19 had a history of anorexia nervosa, 6 of bulimia nervosa, and 8 of OSFED. 87.5% of the women with a history of eating disorders self-reported that they were either recovered or recovering, and 89.7% of them self-reported that they had had one or more instances of relapse.
The mean time elapsed since first diagnosis was 5 years, and on average participants had experienced two relapses. Participants were asked to complete a number of psychometric measures (EDE-Q, BSQ, RSE, BDI) to establish the attitudinal component of their body image as well as their tendency to internalize socio-cultural body ideals. Additionally, participants completed a method-of-adjustment psychophysical task, using photorealistic avatars calibrated for BMI, in order to provide an estimate of their own body size and shape. The data from the healthy controls replicate previous findings, revealing independent contributions to body size estimation from both attitudinal and perceptual (i.e., contraction bias) body image components, as described above. For the eating disorder group, once the adequacy of their actual BMI range was established, a regression of estimated BMI on actual BMI had a slope greater than 1, significantly different from that of the controls. This suggests that (some) eating disordered individuals process the perceptual aspects of body image differently from healthy controls. It is therefore necessary to develop interventions that are specific to the perceptual processing of body shape and size for the management of (some) individuals with eating disorders.
Keywords: body image distortion, perception, recovery, relapse, BMI, eating disorders
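The contraction-bias index described above, a regression of estimated BMI on actual BMI whose slope is below one for healthy controls, can be sketched with ordinary least squares. The BMI values below are invented for illustration, not the study's data.

```python
# Hedged sketch: indexing contraction bias as the slope of a simple
# least-squares regression of estimated BMI on actual BMI.
# The BMI values below are illustrative, not the study's data.

def ols_slope_intercept(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Toy control-like data: low-BMI observers overestimate, high-BMI observers
# underestimate, so estimates contract toward the population mean.
actual    = [18.0, 22.0, 26.0, 30.0, 34.0, 38.0]
estimated = [20.0, 23.0, 26.5, 29.0, 32.0, 34.5]

slope, intercept = ols_slope_intercept(actual, estimated)
print(f"slope = {slope:.2f}")  # a slope below 1 indicates contraction bias
```

A slope above 1 for the same regression, as reported here for the eating disorder group, would indicate the opposite pattern: estimates expanding away from the mean rather than contracting toward it.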
Procedia PDF Downloads 70
917 Evaluation of the Effectiveness of Crisis Management Support Bases in Tehran
Authors: Sima Hajiazizi
Abstract:
Tehran, the capital of Iran, is known as one of the world capitals most vulnerable to natural disasters such as earthquakes and floods. The city sits on three faults, Ray, Mosha, and North; according to a JICA report in 2000, the greatest casualties and destruction would result from activity on the Ray fault. In 2003, the crisis prevention and management organization of Tehran became active under the Ministry to conduct prevention and rehabilitation in the city. Given the breadth of the city and the lack of appropriate access, decentralized management was adopted for crisis management support: bases were positioned in each region to serve as crisis management headquarters at times of crisis and to implement programs for prevention and education of the citizens; some bases in areas neighbouring other provinces were positioned to provide help at the time of an accident; and a number of bases store food and equipment needed at the time of a disaster. In this study, the bases of regions one, six, nine, and eleven of Tehran are evaluated in the fields of management and training. The selected regions had experienced local accidents and the practice of disaster management, and local training had been facing challenges. The research approach used qualitative methods based on grounded theory. Information was first obtained through the study of documents, semi-structured interviews with administrators and training officials, and participant observation in classrooms; it was then coded line by line in two stages, by comparing and questioning concepts and extracting categories according to indicators obtained from the literature, until core themes emerged. The main themes were identified according to the frequency and importance of each phenomenon, a paradigm diagram was drawn, and finally, by intersecting the phenomena and their causes with the indicators extracted from the texts, each phenomenon and the effectiveness of the bases were assessed.
Two phenomena emerged in the field of management: 1) an inability to manage vast and complex crisis events and to resolve minor incidents, due to mismatches between managers; and 2) weaknesses in the implementation of preventive measures and preparedness for crisis management, arising from causal conditions, contexts, and intervening factors. Several phenomena emerged in the field of education: 1) in region six, participation and interest were high; 2) in region eleven, participation in crisis management training was initially so low that it was later increased by manoeuvres in schools and local initiatives such as advertising and the use of aid groups; 3) in region nine, participation in crisis management training was low at the beginning, and initiatives such as school manoeuvres and community engagement increased sensitivity and participation; 4) managers disagreed with providing the same training in all areas. Finally, for the issues causing the main problems, recommendations are provided with the help of concepts extracted from the literature.
Keywords: crisis management, crisis management support bases, vulnerability, crisis management headquarters, prevention
Procedia PDF Downloads 178
916 Construction and Analysis of Tamazight (Berber) Text Corpus
Authors: Zayd Khayi
Abstract:
This paper deals with the construction and analysis of a Tamazight text corpus. The grammatical structure of Tamazight remains poorly understood, and the lack of a comparative grammar leads to linguistic issues. In order to fill this gap, even if only in a small way, we constructed a diachronic corpus of the Tamazight language and developed a program tool. In addition, this work is devoted to building that tool to analyze different aspects of Tamazight and its different dialects used in North Africa, specifically in Morocco. It focuses on three Moroccan dialects: Tamazight, Tarifiyt, and Tachlhit. The Latin script was a good choice because of the many sources available in it. The corpus is based on the grammatical parameters and features of the language. The text collection contains more than 500 texts covering a long historical period. It is free, and it will be useful for further investigations. The texts were transformed into XML format with standardization as the goal. The corpus counts more than 200,000 words. Based on linguistic rules and statistical methods, an original user interface and software prototype were developed by combining web design technologies and Python. The corpus provides users with the ability to distinguish easily between feminine and masculine nouns and verbs. The interface supports three languages: TMZ, FR, and EN. The selected texts were not initially categorized; this classification was done manually, since within corpus linguistics there is currently no commonly accepted approach to the classification of texts. The texts are divided into ten categories. To describe and represent the texts in the corpus, we elaborated the XML structure according to the TEI recommendations. The search function can retrieve the types of words searched for, such as feminine and masculine nouns and verbs. Nouns are divided into two genders.
Gender in the corpus has two forms. The neutral form of the word corresponds to the masculine, while the feminine is indicated by a double t-t affix (the prefix t- and the suffix -t), e.g., Tarbat (girl), Tamtut (woman), Taxamt (tent), and Tislit (bride). However, there are some words whose feminine form contains only the prefix t- and the suffix -a, e.g., Tasa (liver), tawja (family), and tarwa (progenitors). Generally, Tamazight masculine words have prefixes that distinguish them from other words, for instance 'a', 'u', 'i', e.g., Asklu (tree), udi (cheese), ighef (head). The verbs in the corpus for the first person singular and plural have the suffixes 'agh', 'ex', and 'egh', e.g., 'ghrex' (I study), 'fegh' (I go out), 'nadagh' (I call). The program tool supports the following features of this corpus: a list of all tokens; a list of unique words; lexical diversity; and various grammatical queries. To conclude, this corpus has focused only on a small group of parts of speech in the Tamazight language, namely verbs and nouns. Work is still ongoing on adjectives, pronouns, adverbs, and others.
Keywords: Tamazight (Berber) language, corpus linguistics, grammar rules, statistical methods
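The morphological regularities described above lend themselves to simple rule-based tests. The sketch below, with example words taken from the abstract, illustrates the kind of rules such a tool could apply; it is not the authors' actual implementation, and a real corpus tool would combine rules like these with statistical methods and handle many exceptions.

```python
# Hedged sketch of the rule-based tests described above: feminine nouns
# marked by the t-...-t circumfix (or the t-...-a pattern), masculine
# nouns often starting with a/u/i, and first-person verbs ending in
# -agh / -ex / -egh. Illustrative only; real data has many exceptions.

def noun_gender(word):
    w = word.lower()
    if w.startswith("t") and (w.endswith("t") or w.endswith("a")):
        return "feminine"       # e.g. tarbat (girl), tasa (liver)
    if w and w[0] in "aui":
        return "masculine"      # e.g. asklu (tree), udi (cheese)
    return "unknown"

def is_first_person_verb(word):
    return word.lower().endswith(("agh", "ex", "egh"))

for noun in ["Tarbat", "Tislit", "tawja", "Asklu", "udi", "ighef"]:
    print(noun, "->", noun_gender(noun))
for verb in ["ghrex", "fegh", "nadagh"]:
    print(verb, "->", is_first_person_verb(verb))
```

In a real pipeline these checks would run over the tokenized XML/TEI corpus, with the rule hits stored as annotations so the search interface can filter by gender or person.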
Procedia PDF Downloads 72
915 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses
Authors: André Jesus, Yanjie Zhu, Irwanda Laory
Abstract:
Structural health monitoring is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. Notably, the widespread application of numerical models (model-based approaches) is accompanied by widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (a surrogate model). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (the modular Bayesian approach). This methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. The approach considerably reduces the computational burden, although the extent of the uncertainties considered is lower (second-order effects are neglected). To identify the considered uncertainties successfully, the formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters.
A comparison of its performance with responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability, and systematic uncertainty were all recovered. For this example the algorithm's performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
Keywords: bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process
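The discrepancy-function idea above can be illustrated in miniature: a Gaussian process is fitted to the residuals between observations and a calibrated model, standing in for the systematic-uncertainty term. The toy model, data, and calibrated parameter below are invented for illustration; the abstract's actual FEM model and four-stage modular estimation are not reproduced.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-6, **kernel_kw):
    """Posterior mean of a zero-mean GP conditioned on (x_train, y_train)."""
    K = rbf_kernel(x_train, x_train, **kernel_kw) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train, **kernel_kw)
    return K_s @ np.linalg.solve(K, y_train)

def model_output(x, theta):
    """Stand-in for the calibrated numerical model (invented toy)."""
    return theta * np.sin(x)

theta_hat = 1.2                                      # calibrated parameter (assumed)
x_obs = np.linspace(0.0, 3.0, 8)
y_obs = theta_hat * np.sin(x_obs) + 0.3 * x_obs      # "experiment" with systematic bias
residuals = y_obs - model_output(x_obs, theta_hat)   # data for the discrepancy GP

x_new = np.array([1.5])
delta_pred = gp_posterior_mean(x_obs, residuals, x_new)  # inferred discrepancy
y_pred = model_output(x_new, theta_hat) + delta_pred     # bias-corrected prediction
```

The GP recovers the linear bias term (0.3x) from the residuals alone, which is the role the discrepancy function plays in separating systematic uncertainty from the calibrated model.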
Procedia PDF Downloads 330
914 Prosecution as Persecution: Exploring the Enduring Legacy of Judicial Harassment of Human Rights Defenders and Political Opponents in Zimbabwe, Cases from 2013-2016
Authors: Bellinda R. Chinowawa
Abstract:
As part of a wider strategy to stifle civil society, governments routinely resort to judicial harassment through the use of civil and criminal proceedings to impugn the integrity of human rights defenders and that of perceived political opponents. This phenomenon is rife in militarised or autocratic regimes where there is no tolerance for dissenting voices. Zimbabwe, ostensibly a presidential republic founded on the values of transparency, equality, and freedom, is characterised by brutal suppression of perceived political opponents and of those who assert their basic human rights. This is done through a wide range of tactics, including unlawful arrests and detention; torture and other cruel, inhuman, and degrading treatment; and enforced disappearances. Professionals, including journalists and doctors, are similarly not spared from state attack. For human rights defenders (HRDs), the most widely used tool of repression is judicial harassment, where the judicial system is used to persecute them. This can include the levying of criminal charges, civil lawsuits, and unnecessary administrative proceedings. Charges preferred against them range from petty offences such as criminal nuisance to more serious charges of terrorism and subverting a constitutional government. Additionally, government-sponsored individuals and organisations file strategic lawsuits with pecuniary implications in order to intimidate and silence critics and engender self-censorship. Some HRDs are convicted and sentenced to prison terms despite not being criminals in a true sense; while others are acquitted, judicial harassment diverts energy and resources away from their human rights work. Through a consideration of statistical data reported by human rights organisations and face-to-face interviews with a cross-section of human rights defenders, the article will map the incidence of judicial harassment in Zimbabwe.
The article will consider the multi-level sociological and contextual factors that influence the Government of Zimbabwe to have easy recourse to criminal law, and the debilitating effect of these actions on HRDs. These factors include the breakdown of the rule of law resulting in state capture of the judiciary, the proven efficacy of judicial harassment from colonial times to date, and the lack of an adequate redress mechanism at the international level. By mapping the use of the judiciary as a tool of repression from the inception of modern-day Zimbabwe to date, it is hoped that HRDs will realise that they are part of a greater community of activists throughout the ages and will be emboldened by the realisation that this is an age-old tactic used by fallen regimes, one which should not deter them from calling for accountability.
Keywords: autocratic regime, colonial legacy, judicial harassment, human rights defenders
Procedia PDF Downloads 234
913 The Administration of Infectious Diseases during the COVID-19 Pandemic and the Role of Differential Diagnosis with the Biomarker VB10
Authors: Sofia Papadimitriou
Abstract:
INTRODUCTION: The differential diagnosis between acute viral and bacterial infections is an important cost-effectiveness parameter at the treatment stage: it allows the maximum therapeutic benefit to be achieved at minimum cost and ensures the proper use of antibiotics. The discovery of sensitive and robust molecular diagnostic tests based on the host response to infection has enhanced the accurate diagnosis and differentiation of infections. METHOD: The study used six independent blood-sample data sets (total n=756) associated with human protein-protein interactions, each of which, at the transcription stage, expresses a different host-network response to viral versus bacterial infections. The individual blood samples were subjected to a sequence of computational filters that identify a gene panel corresponding to an autonomous diagnostic score. The data set and the correspondence of the gene panel to the diagnostic score yield a new Bangalore-Viral Bacterial (BL-VB) cohort. FINDING: We use a blood-based biomarker of 10 genes (Panel-VB) that has significant prognostic value for distinguishing viral from bacterial infections, with a weighted average AUROC of 0.97 (95% CI: 0.96-0.99) in eleven independent data sets (n=898). We derived a patient-level score based on the panel (VB10), which has significant diagnostic value with a weighted average AUROC of 0.94 (95% CI: 0.91-0.98) in 2,996 patient samples from 56 public data sets from 19 different countries. We also studied VB10 in a new South Indian cohort (BL-VB, n=56) and found 97% accuracy in confirmed cases of viral and bacterial infections. We found that VB10 (a) accurately identifies the type of infection even in unspecified, culture-negative cases, (b) reflects the patient's clinical recovery, and (c) applies to all age groups, covering a wide range of acute bacterial and viral infections, including those caused by non-specific pathogens.
We applied our VB10 score to publicly available COVID-19 data and found that the score diagnosed viral infection in the patient samples. RESULTS: The results of the study showed the diagnostic power of the VB10 biomarker as a test for the accurate diagnosis of acute infections and the monitoring of recovery. We anticipate that it will help clinicians make decisions about prescribing antibiotics and that it can be integrated into antibiotic stewardship policies. CONCLUSIONS: Overall, we have developed a new RNA-based biomarker and a new blood test to differentiate between viral and bacterial infections, assisting physicians in designing the optimal treatment regimen, contributing to the proper use of antibiotics, and reducing the burden of antimicrobial resistance (AMR).
Keywords: acute infections, antimicrobial resistance, biomarker, blood transcriptome, systems biology, classifier diagnostic score
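The AUROC figures quoted above summarise how well a continuous score separates two classes. As a minimal sketch (with invented scores and labels, not the study's data), the statistic can be computed directly as the fraction of positive-negative pairs that the score ranks correctly:

```python
def auroc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    positive sample scores higher than a randomly chosen negative one
    (ties counted as one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical scores: label 1 = viral, label 0 = bacterial
labels = [0, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80]
print(auroc(scores, labels))  # 0.75
```

Read this way, the reported Panel-VB AUROC of 0.97 means a randomly chosen viral sample outranks a randomly chosen bacterial one 97% of the time.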
Procedia PDF Downloads 159
912 Carboxyfullerene-Modified Titanium Dioxide Nanoparticles in Singlet Oxygen and Hydroxyl Radicals Scavenging Activity
Authors: Kai-Cheng Yang, Yen-Ling Chen, Er-Chieh Cho, Kuen-Chan Lee
Abstract:
Titanium dioxide nanomaterials offer superior protection for human skin against the full spectrum of ultraviolet light. However, some literature reviews have indicated that they may be associated with adverse effects such as cytotoxicity or reactive oxygen species (ROS) generation due to their nanoscale size. The surface of fullerene is covered with π electrons constituting aromatic structures, which can effectively scavenge large amounts of radicals. Unfortunately, the poor solubility of fullerenes in water, their severe aggregation, and their toxicity in biological applications when dispersed in solvent have limited their use. Carboxyfullerene has served as a radical scavenger for several years. Some reports indicate that carboxyfullerene not only decreases the concentration of free radicals in the environment but also prevents cell loss and apoptosis under UV irradiation. The aim of this study is to decorate fullerene C70-carboxylic acid (C70-COOH) onto the surface of titanium dioxide nanoparticles (P25) for the purpose of scavenging ROS during irradiation. The modified material is prepared through the esterification of C70-COOH with P25 (P25/C70-COOH). The binding edge and structure are studied using transmission electron microscopy (TEM) and Fourier transform infrared (FTIR) spectroscopy. The diameter of P25 is about 30 nm, and C70-COOH is found to be conjugated on the edge of P25 in an aggregated morphology with a size of ca. 100 nm. In the next step, FTIR was used to confirm the binding structure between P25 and C70-COOH. Two new peaks appear at 1427 and 1720 cm-1 for P25/C70-COOH, resulting from the C–C stretch and C=O stretch formed during esterification with dilute sulfuric acid. The IR results further confirm the chemically bonded interaction between C70-COOH and P25.
To provide evidence of the radical scavenging ability of P25/C70-COOH, we chose pyridoxine (vitamin B6) and terephthalic acid (TA) to react with singlet oxygen and hydroxyl radicals, respectively. We used these chemicals to monitor the radical scavenging state via the intensity of ultraviolet absorption or fluorescence emission. UV spectra were measured using different concentrations of C70-COOH-modified P25 with 1 mM pyridoxine under UV irradiation for various durations. The results revealed that after three hours the concentration of pyridoxine remaining was higher with P25/C70-COOH than with the control (P25 only), indicating that fewer radicals reacted with pyridoxine because they were absorbed by P25/C70-COOH. Fluorescence spectra were recorded by measuring P25/C70-COOH with 1 mM terephthalic acid under UV irradiation for various durations. The fluorescence intensity of TAOH decreased within ten minutes in the presence of P25/C70-COOH. The fluorescence intensity was found to increase again after thirty minutes, which could be attributed to the saturation of C70-COOH in the absorption of radicals. Nevertheless, the results showed that the modified P25/C70-COOH can reduce radicals in the environment. We therefore expect P25/C70-COOH to be a potential antioxidant material.
Keywords: titanium dioxide, fullerene, radical scavenging activity, antioxidant
Procedia PDF Downloads 405
911 Residual Plastic Deformation Capacity in Reinforced Concrete Beams Subjected to Drop Weight Impact Test
Authors: Morgan Johansson, Joosef Leppanen, Mathias Flansbjer, Fabio Lozano, Josef Makdesi
Abstract:
Concrete is commonly used for protective structures, and how impact loading affects different types of concrete structures is an important issue. Often the knowledge gained from static loading is also used in the design of impulse-loaded structures. A large plastic deformation capacity is essential to obtain large energy absorption in an impulse-loaded structure. However, the structural response of an impact-loaded concrete beam may be very different from that of a statically loaded beam. Consequently, the plastic deformation capacity and failure modes of the concrete structure can differ under dynamic loads, and hence it is not certain that observations obtained from static loading are also valid for dynamic loading. The aim of this paper is to investigate the residual plastic deformation capacity in reinforced concrete beams subjected to drop weight impact tests. A test series consisting of 18 simply supported beams (0.1 x 0.1 x 1.18 m, ρs = 0.7%), with a span length of 1.0 m and subjected to a point load at the beam mid-point, was carried out. Twelve beams (2 x 6) were first subjected to drop weight impact tests and thereafter statically tested until failure. The drop weight had a mass of 10 kg and was dropped from 2.5 m or 5.0 m. During the impact tests, a high-speed camera was used at 5,000 fps, and for the static tests, a camera was used at 0.5 fps. Digital image correlation (DIC) analyses were conducted, and from these the velocities of the beam and the drop weight, as well as the deformations and crack propagation of the beam, were effectively measured. Additionally, for the static tests, the applied load and midspan deformation were measured. The load-deformation relations for the beams subjected to an impact load were compared with those of 6 reference beams subjected to static loading only. The crack patterns obtained were compared using DIC, and it was concluded that the resulting crack formation depended strongly on the test method used.
For the static tests, only bending cracks occurred. For the impact-loaded beams, though, distinctive diagonal shear cracks also formed below the zone of impact, and narrower shear cracks were observed in the region halfway to the supports. Furthermore, due to wave propagation effects, bending cracks developed in the upper part of the beam during initial loading. The results showed that the plastic deformation capacity increased for beams subjected to drop weight impact tests from the high drop height of 5.0 m. For beams subjected to an impact from the low drop height of 2.5 m, though, the plastic deformation capacity was of the same order of magnitude as for the statically loaded reference beams. The beams tested were designed to fail in bending when subjected to a static load. However, among the impact-tested beams, one beam exhibited a shear failure at a significantly reduced load level when it was tested statically, indicating that there might be a risk of reduced residual load capacity for impact-loaded structures.
Keywords: digital image correlation (DIC), drop weight impact, experiments, plastic deformation capacity, reinforced concrete
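For orientation, the nominal impact conditions in the test series follow directly from the drop mass and heights reported above (neglecting air resistance and rig friction):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_velocity(height_m: float) -> float:
    """Velocity of the drop weight at impact: v = sqrt(2*g*h)."""
    return math.sqrt(2 * G * height_m)

def impact_energy(mass_kg: float, height_m: float) -> float:
    """Kinetic energy delivered at impact: E = m*g*h."""
    return mass_kg * G * height_m

for h in (2.5, 5.0):  # the two drop heights used in the test series
    print(f"h = {h} m: v = {impact_velocity(h):.2f} m/s, "
          f"E = {impact_energy(10.0, h):.1f} J")
```

Doubling the drop height doubles the impact energy but raises the impact velocity only by a factor of sqrt(2), which gives a sense of why the 2.5 m and 5.0 m drops produced such different residual deformation capacities.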
Procedia PDF Downloads 153
910 Evaluating the Energy Transition of a Complex of Buildings in a Historic Site of Rome toward Zero Emissions for a Sustainable Future
Authors: Silvia Di Turi, Nicolandrea Calabrese, Francesca Caffari, Giulia Centi, Francesca Margiotta, Giovanni Murano, Laura Ronchetti, Paolo Signoretti, Lisa Volpe, Domenico Palladino
Abstract:
Recent European policies have set ambitious targets aimed at significantly reducing CO2 emissions by 2030, with a long-term vision of transforming existing buildings into Zero-Emission Buildings (ZEmB) by 2050. This vision represents a key point for the energy transition, as the building stock currently accounts for 36% of total energy consumption across Europe, mainly due to its poor energy performance. The challenge of achieving Zero-Emission Buildings is particularly felt in Italy, where a significant number of buildings have historical significance or are situated within protected or constrained areas. Furthermore, an estimated 70% of the national building stock was built before 1976, indicating a widespread issue of poor energy performance. Addressing the energy inefficiency of these buildings is crucial to refining a comprehensive energy renovation approach aimed at facilitating their energy transition. In this framework, the current study focuses on analysing a challenging complex of buildings to be fully restored through significant energy renovation interventions. The goal is to recover these disused buildings, situated in a significant archaeological zone of Rome, contributing to the restoration and reintegration of this historically valuable site, while also offering insights useful for achieving zero-emission requirements for buildings in such contexts. In pursuit of meeting the stringent zero-emission requirements, a comprehensive study was carried out to assess the complex of buildings, envisioning substantial renovation measures for the building envelope and plant systems and incorporating renewable energy solutions, always respecting and preserving the historic site. An energy audit of the complex of buildings was performed to define the actual energy consumption for each energy service, adopting hourly calculation methods.
Subsequently, significant energy renovation interventions on both the building envelope and the mechanical systems were examined, respecting the historical value and preservation of the site. These retrofit strategies were investigated with a threefold aim: 1) to recover the existing buildings while ensuring the energy efficiency of the whole complex, 2) to explore which solutions allow achieving and facilitating ZEmB status, and 3) to balance the energy transition requirements with sustainability in order to preserve the historic value of the buildings and the site. This study has pointed out the potential and the technical challenges associated with implementing renovation solutions for such buildings, representing one of the first attempts towards realizing this ambitious target for this type of building.
Keywords: energy conservation and transition, complex of buildings in historic site, zero-emission buildings, energy efficiency recovery
Procedia PDF Downloads 80
909 The Design of a Phase I/II Trial of Neoadjuvant RT with Interdigitated Multiple Fractions of Lattice RT for Large High-Grade Soft-Tissue Sarcoma
Authors: Georges F. Hatoum, Thomas H. Temple, Silvio Garcia, Xiaodong Wu
Abstract:
Soft tissue sarcomas (STS) represent a diverse group of malignancies with heterogeneous clinical and pathological features. The treatment of extremity STS aims to achieve optimal local tumor control, improved survival, and preservation of limb function. The National Comprehensive Cancer Network guidelines, based on accumulated clinical data, recommend radiation therapy (RT) in conjunction with limb-sparing surgery for large, high-grade STS measuring greater than 5 cm. Such a treatment strategy can offer a cure for patients. However, when recurrence occurs (in nearly half of patients), the prognosis is poor, with a median survival of 12 to 15 months and only palliative treatment options available. Spatially fractionated radiotherapy (SFRT), a non-mainstream technique with a long history of treating bulky tumors, has gained new attention in recent years due to its unconventional therapeutic effects, such as bystander/abscopal effects. Combining a single fraction of GRID, the original form of SFRT, with conventional RT was shown to marginally increase the rate of pathological necrosis, which has been recognized to correlate positively with overall survival. In an effort to consistently raise the pathological necrosis rate above 90%, multiple fractions of Lattice RT (LRT), a newer form of 3D SFRT, interdigitated with standard RT as neoadjuvant therapy, were evaluated in a preliminary clinical setting. Given the favorable results, with a necrosis rate of over 95% in a small cohort of patients, a Phase I/II clinical study was proposed to examine the safety and feasibility of this new strategy. Herein, the design of the clinical study is presented.
In this single-arm, two-stage Phase I/II clinical trial, the primary objectives are for >80% of patients to achieve >90% tumor necrosis and to evaluate toxicity; the secondary objectives are to evaluate local control, disease-free survival, and overall survival (OS), as well as the correlation between clinical response and relevant biomarkers. The study plans to accrue patients over a span of two years. All patients will be treated with the new neoadjuvant RT regimen, in which one of every five fractions of conventional RT is replaced by an LRT fraction with vertices receiving a dose of ≥10 Gy while keeping the tumor periphery at or close to 2 Gy per fraction. Surgical removal of the tumor is planned to occur 6 to 8 weeks after the completion of radiation therapy. The study will employ a Pocock-style early stopping boundary to ensure patient safety. The patients will be followed and monitored for a period of five years. Despite much effort, the rarity of the disease has resulted in limited novel therapeutic breakthroughs. Although a higher rate of treatment-induced tumor necrosis has been associated with improved OS, with current techniques only 20% of patients with large, high-grade tumors achieve a tumor necrosis rate exceeding 50%. If this new neoadjuvant strategy is proven effective, an appreciable improvement in clinical outcome without added toxicity can be anticipated. Given the rarity of the disease, it is hoped that such a study could be orchestrated in a multi-institutional setting.
Keywords: lattice RT, necrosis, SFRT, soft tissue sarcoma
Procedia PDF Downloads 61
908 Assessing the Acute Toxicity and Endocrine Disruption Potential of Selected Packaging Internal Layer Extracts
Authors: N. Szczepanska, B. Kudlak, G. Yotova, S. Tsakovski, J. Namiesnik
Abstract:
In the scientific literature on packaging materials designed to come into contact with food (food contact materials), there is much information on the raw materials used for their production as well as their physicochemical properties, types, and parameters. However, little attention is given to the migration of toxic substances from packaging and its actual influence on the health of the final consumer, even though health protection and food safety are priority tasks. The goal of this study was to estimate the impact of particular foodstuff packaging types and of food production and storage conditions on the degree of leaching of potentially toxic compounds and endocrine disruptors into foodstuffs, using the Microtox acute toxicity test and the XenoScreen YES/YAS assay. The selected packaging materials were metal cans used for fish storage and tetrapak cartons. Five simulants corresponding to specific kinds of food were chosen in order to assess global migration: distilled water for aqueous foods with a pH above 4.5; acetic acid at 3% in distilled water for acidic aqueous foods with a pH below 4.5; ethanol at 5% for any food that may contain alcohol; and dimethyl sulfoxide (DMSO) and artificial saliva, chosen for their potential as simulation media. For each packaging material, a factorial design over temperature and contact time was performed for each simulant. Xenobiotic migration from the epoxy resins was studied at three different temperatures (25°C, 65°C, and 121°C) and extraction times of 12 h, 48 h, and 2 weeks. This experimental design leads to 9 experiments for each food simulant, with the conditions of each experiment obtained by combining temperature and contact time levels. Each experiment was run in triplicate for acute toxicity and in duplicate for estrogen disruption potential determination.
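The factorial design described above can be enumerated directly; a short sketch confirms the run counts implied by the abstract (9 conditions per simulant, triplicate toxicity tests, duplicate estrogen assays):

```python
from itertools import product

simulants = ["distilled water", "3% acetic acid", "5% ethanol",
             "DMSO", "artificial saliva"]
temperatures = [25, 65, 121]               # degrees C
contact_times = ["12 h", "48 h", "2 weeks"]

# full factorial design: 3 temperatures x 3 contact times = 9 runs per simulant
design = list(product(simulants, temperatures, contact_times))

# replication: triplicate for the Microtox acute toxicity test,
# duplicate for the YES/YAS estrogen disruption assay
toxicity_runs = [(cond, rep) for cond in design for rep in range(3)]
estrogen_runs = [(cond, rep) for cond in design for rep in range(2)]

print(len(design), len(toxicity_runs), len(estrogen_runs))  # 45 135 90
```

Enumerating the design this way also makes explicit the balanced structure that the subsequent MANOVA relies on to separate the main effects of simulant, temperature, and contact time.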
Multi-factor analysis of variance (MANOVA) was used to evaluate the effects of the three main factors (simulant, temperature, and contact time) and of their interactions on the respective dependent variable (acute toxicity or estrogen disruption potential). Of all the simulants studied, the most toxic were the acetic acid extracts of the can and tetrapak linings, an indication of significant migration of toxic compounds. This migration increased with increasing contact time and temperature, supporting the hypothesis that food products with low pH values cause significant damage to the internal resin lining. Can lining extracts in all simulation media except distilled water and artificial saliva proved to contain androgen agonists, even at 25°C and an extraction time of 12 h. For the tetrapak extracts, significant endocrine potential was detected for acetic acid, DMSO, and saliva.
Keywords: food packaging, extraction, migration, toxicity, biotest
Procedia PDF Downloads 183
907 Intracommunity Attitudes toward the Gatekeeping of Asexuality in the LGBTQ+ Community on Tumblr
Authors: A.D. Fredline, Beverly Stiles
Abstract:
This is a qualitative investigation that examines the social media site Tumblr with the goal of analyzing the controversy regarding the inclusion of asexuality in the LGBTQ+ community. As platforms such as Tumblr permit the development of communities for marginalized groups, social media serves as a core arena for exclusionary practices and boundary negotiations over community membership. This research is important because there is a paucity of research on the topic and a significant gap in the literature with regard to intracommunity gatekeeping, even though discourse on the topic is blatantly apparent on social media platforms. The objective is to begin to bridge this gap by examining attitudes towards the inclusion of asexuality within the LGBTQ+ community. To do so, eight publicly available blogs on Tumblr.com were selected from both the “inclusionist” and “exclusionist” perspectives. The blogs were found through a basic search for “inclusionist” and “exclusionist” on the Tumblr website; of the first twenty blogs listed for each set of results, those centrally focused on asexuality discourse were selected. For each blog, the fifty most recent postings were collected. Analysis of the collected postings exposed three central themes for the exclusionist perspective as well as for the inclusionist perspective. Findings indicate that, from the inclusionist perspective, asexuality belongs in the LGBTQ+ community. One primary argument from this perspective is that asexual individuals face opposition for their identity just as other identities included in the community do. This opposition is said to take a variety of forms, such as verbal shaming, the assumption of illness, and corrective rape.
Another argument is that the LGBTQ+ community and asexuals face a common opponent in cisheterosexism, as asexuals struggle with assumed and expected sexualization. A final central theme is that denying asexual inclusion leads to the assumption of heteronormativity. Findings also indicate that, from the exclusionist perspective, asexuality does not belong in the LGBTQ+ community. One central theme from this perspective is the equating of cisgender heteroromantic asexuals with cisgender heterosexuals: as straight individuals are not allowed in the community, exclusionists argue that asexuals engaged in opposite-gender partnerships should not be included. Another argument is that including asexuality in the community sexualizes all other identities by assuming that sexual orientation is inherently sexual rather than romantic. Finally, exclusionists also argue that asexuality encourages childhood labeling and forces sexual identities on children, something not promoted by the LGBTQ+ community. The conclusion drawn from analyzing both perspectives is that integration may be possible, but complexities add another layer of discourse. For example, both inclusionists and exclusionists agree that privileged identities do not belong in the LGBTQ+ community; the focus of the discourse is whether or not asexuals are privileged. Clearly, both sides of the debate have the same vision of what binds the community together. The question that remains is who belongs to that community.
Keywords: asexuality, exclusionists, inclusionists, Tumblr
Procedia PDF Downloads 190
906 Polarimetric Study of the Gelatin/Carboxymethylcellulose System in the Food Field
Authors: Sihem Bazid, Meriem El Kolli, Aicha Medjahed
Abstract:
Proteins and polysaccharides are the two types of biopolymers most frequently used in the food industry to control the mechanical properties, structural stability, and organoleptic properties of products. The textural and structural properties of blends of these two polymer types depend on their interactions and their ability to form organized structures. From an industrial point of view, a better understanding of protein/polysaccharide mixtures is an important issue, since they are already heavily involved in processed food. It is in this context that we have chosen to work on a model system composed of a mixture of a fibrous protein (gelatin) and an anionic polysaccharide (sodium carboxymethylcellulose). Gelatin, one of the most popular biopolymers, is widely used in food, pharmaceutical, cosmetic, and photographic applications because of its unique functional and technological properties. Sodium carboxymethylcellulose (NaCMC) is an anionic linear polysaccharide derived from cellulose. It is an important industrial polymer with a wide range of applications, and its functional properties can be modified by the presence of proteins with which it might interact. Another factor that may govern the interactions in protein-polysaccharide mixtures is the triple helix of gelatin. Collagen's complex synthesis results in an extracellular assembly organized at several levels: collagen can be in a soluble state or associate into fibrils, which can in turn associate into fibers. Each level corresponds to an organization recognized by the cellular and metabolic system. Gelatin gel formation involves the triple-helical refolding of denatured collagen chains; this gel has been the subject of numerous studies, and it is now known that its properties depend only on the fraction of triple helices forming the network. Chemical modification of this system is fairly well controlled.
Observing the dynamics of the triple helix may therefore be relevant to understanding the interactions involved in protein-polysaccharide mixtures. Since gelatin is central to many industrial processes, understanding and analyzing the molecular dynamics induced by the triple helix during gelatin transitions can have great economic importance in all fields, especially food. The goal is to understand the possible mechanisms involved depending on the nature of the mixtures obtained. From a fundamental point of view, it is clear that the protective effect of NaCMC on gelatin and the conformational changes of the α-helix are strongly influenced by the nature of the medium. Our goal is to minimize α-helix structural changes as far as possible, in order to keep gelatin more stable and protect it against the denaturation that occurs during conversion processes in the food industry. In order to study the nature of the interactions and assess the properties of the mixtures, polarimetry was used to monitor the optical parameters and to assess the helicity rate of gelatin.
Keywords: gelatin, sodium carboxymethylcellulose, gelatin-NaCMC interaction, rate of helicity, polarimetry
Procedia PDF Downloads 317
905 Analysis of Short Counter-Flow Heat Exchanger (SCFHE) Using Non-Circular Micro-Tubes Operated on Water-CuO Nanofluid
Authors: Avdhesh K. Sharma
Abstract:
Key to the development of energy-efficient micro-scale heat exchanger devices is selecting a large heat-transfer-surface-to-volume ratio without much expense on recirculation pumps. The increased interest in short heat exchangers (SHE) is due to the accessibility of advanced technologies for manufacturing micro-tubes in the range of 1 µm to 1 mm. Such SHEs using micro-tubes are highly effective for high-flux heat transfer technologies. Nanofluids are used to enhance the thermal conductivity of the recirculated coolant and thus further enhance the heat transfer rate. However, the higher viscosity associated with a nanofluid demands more pumping power; thus, there is a trade-off between heat transfer rate and pressure drop that depends on the geometry of the micro-tubes. Herein, a novel design of a short counter-flow heat exchanger (SCFHE) using non-circular micro-tubes flooded with CuO-water nanofluid is conceptualized by varying the ratio of surface area to cross-sectional area of the micro-tubes, and a framework for its comparative analysis is presented. In the SCFHE concept, micro-tubes of various geometrical shapes (viz., triangular, rectangular, and trapezoidal) are arranged row-wise to facilitate two aspects: (1) allowing easy flow distribution for the cold and hot streams, and (2) maximizing the thermal interactions with neighboring channels. Adequate distribution of rows between the cold and hot flow streams enables both aspects. For the comparative analysis, a specific volume or cross-sectional area (comprising the flow area and the half-wall-thickness area) is assigned to each elemental cell and assumed constant, while variation in surface area is allowed by selecting different micro-tube geometries in the SCFHE.
An effective thermal conductivity model for the CuO-water nanofluid has been adopted, while the viscosity values for the water-based nanofluid are obtained empirically. Correlations for the Nusselt number (Nu) and Poiseuille number (Po) for the micro-tubes have been derived or adopted, and the entrance effect is accounted for. The thermal and hydrodynamic performances of the SCFHE are defined in terms of effectiveness and pressure drop or pumping power, respectively. To define the overall performance index of the SCFHE, two links are employed: the first relates the heat transfer between the fluid streams q and the pumping power PP as qj/PPj, while the other relates the effectiveness eff and the pressure drop dP as effj/dPj. For the analysis, the inlet temperatures of the hot and cold streams are varied in the usual range of 20 °C to 65 °C. A fully turbulent regime is seldom encountered in micro-tubes, and the transition of the flow regime occurs much earlier (i.e., at ~Re = 1000). Thus, Re is fixed at 900; however, the uncertainty in Re due to the addition of nanoparticles to the base fluid is quantified by averaging Re. Moreover, to minimize error, the volumetric concentration is limited to the range 0% to 4% only. Such a framework may be helpful in utilizing the maximum peripheral surface area of the SCFHE without any serious penalty on pumping power and in developing advanced short heat exchangers.
Keywords: CuO-water nanofluid, non-circular micro-tubes, performance index, short counter flow heat exchanger
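The two performance links defined in this abstract reduce to simple ratios once q, PP, eff and dP are known for each tube geometry. A minimal sketch, with entirely hypothetical numbers standing in for the study's computed values:

```python
# Illustrative sketch (hypothetical numbers, not the paper's results):
# ranking micro-tube shapes by the two performance links defined in the
# abstract, PI_q = q / PP (heat transfer per unit pumping power) and
# PI_eff = eff / dP (effectiveness per unit pressure drop).

def performance_indices(q_w, pp_w, eff, dp_pa):
    """Return the two overall performance indices for one tube geometry."""
    return q_w / pp_w, eff / dp_pa

# Hypothetical per-geometry results at Re = 900 (assumed, not measured).
geometries = {
    "triangular":  {"q_w": 120.0, "pp_w": 0.8, "eff": 0.72, "dp_pa": 950.0},
    "rectangular": {"q_w": 135.0, "pp_w": 1.1, "eff": 0.75, "dp_pa": 1200.0},
    "trapezoidal": {"q_w": 128.0, "pp_w": 0.9, "eff": 0.74, "dp_pa": 1050.0},
}

# Rank shapes by heat transfer per unit pumping power (first link).
ranking = sorted(
    geometries,
    key=lambda g: performance_indices(**geometries[g])[0],
    reverse=True,
)
```

Under these made-up inputs, the triangular tube ranks first on heat transfer per unit pumping power; the actual ranking depends on the correlations and operating conditions described above.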
Procedia PDF Downloads 218
904 Strategic Asset Allocation Optimization: Enhancing Portfolio Performance Through PCA-Driven Multi-Objective Modeling
Authors: Ghita Benayad
Abstract:
Asset allocation, which affects the long-term profitability of portfolios by distributing assets to fulfill a range of investment objectives, is the cornerstone of investment management in the dynamic and complicated world of financial markets. This paper offers a technique for optimizing strategic asset allocation with the goal of improving portfolio performance by addressing the inherent complexity and uncertainty of the market through the use of Principal Component Analysis (PCA) in a multi-objective modeling framework. The study starts with a critical evaluation of conventional asset allocation techniques, highlighting how poorly they capture the intricate relationships between assets and the volatile nature of the market. To overcome these challenges, the project suggests a PCA-driven methodology that isolates important characteristics influencing asset returns by decreasing the dimensionality of the investment universe. This reduction provides a stronger basis for asset allocation decisions by facilitating a clearer understanding of market structures and behaviors. Using a multi-objective optimization model, the project builds on this foundation by taking into account several performance metrics at once, including risk minimization, return maximization, and the accomplishment of predetermined investment goals like regulatory compliance or sustainability standards. This model provides a more comprehensive understanding of investor preferences and portfolio performance in comparison to conventional single-objective optimization techniques. The PCA-driven multi-objective optimization model is then applied to historical market data, aiming to construct portfolios that perform better under different market situations.
Compared to portfolios produced by conventional asset allocation methodologies, the results show that portfolios optimized using the proposed method display improved risk-adjusted returns, more resilience to market downturns, and better alignment with specified investment objectives. The study also looks at the implications of this PCA technique for portfolio management, including the prospect that it might give investors a more advanced framework for navigating financial markets. The findings suggest that by combining PCA with multi-objective optimization, investors may obtain a more strategic and informed asset allocation that is responsive to both market conditions and individual investment preferences. In conclusion, this capstone project advances the field of financial engineering by creating a sophisticated asset allocation optimization model that integrates PCA with multi-objective optimization. In addition to raising questions about the state of asset allocation today, the proposed method of portfolio management opens up new avenues for research and application in the area of investment techniques.
Keywords: asset allocation, portfolio optimization, principal component analysis, multi-objective modelling, financial market
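The two-stage idea described above, dimensionality reduction followed by a scalarized multi-objective optimization, can be sketched as follows. The data are synthetic and the risk-aversion weight lam is an illustrative assumption, not the paper's calibration:

```python
# Minimal sketch of a PCA-driven multi-objective allocation (synthetic
# returns, not the paper's dataset): build a reduced-rank risk model from
# the top principal components of the return covariance, then trade off
# expected return against risk with a weighted scalarized objective.
import numpy as np
from numpy.linalg import eigh
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(500, 8))   # 500 days, 8 assets
mu = returns.mean(axis=0)
cov = np.cov(returns, rowvar=False)

# PCA on the covariance: keep the k components explaining most variance.
vals, vecs = eigh(cov)
order = np.argsort(vals)[::-1]
k = 3
V = vecs[:, order[:k]]                        # top-k component loadings
cov_pca = V @ np.diag(vals[order[:k]]) @ V.T  # reduced-rank risk model

lam = 5.0  # risk-aversion weight scalarizing the two objectives (assumed)

def neg_utility(w):
    # Maximize return and minimize PCA-model risk -> minimize the negative.
    return -(mu @ w) + lam * (w @ cov_pca @ w)

n = len(mu)
res = minimize(
    neg_utility,
    x0=np.full(n, 1.0 / n),
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    bounds=[(0.0, 1.0)] * n,  # long-only, fully invested
)
weights = res.x
```

Real multi-objective goals such as regulatory or sustainability constraints would enter as additional constraint dictionaries rather than being folded into the scalar objective.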
Procedia PDF Downloads 51
903 Pibid and Experimentation: A High School Case Study
Authors: Chahad P. Alexandre
Abstract:
PIBID (Institutional Program of Scholarships to Encourage Teaching) is a Brazilian government program that today counts 48,000 students. Its goal is to motivate students to stay in teaching undergraduate programs and to help fill the gap of 100,000 teachers needed today in secondary schools. The major lack of teachers today is in physics, chemistry, mathematics, and biology. At IFSP-Itapetininga, we structured our physics PIBID around practical activities. Our students are divided between two São Paulo state high schools in the same city. The project proposes class activities based on experimentation, observation and understanding of physical phenomena. The didactic experiments always relate to the content the teacher, who is the supervisor of the program in the school, is working on. Before every experiment, a short questionnaire is given to learn about the students' preconceptions, and another is filled in afterwards to evaluate whether new concepts have been formed. This procedure makes it possible to compare their previous knowledge and how it changed after the experiment is carried out. The primary goal of our project is to make the physics class more attractive to the students, to develop in high school students an interest in learning physics, and to show the relation of physics to everyday life and the technological world. The objective of the experimental activities is to facilitate the understanding of the concepts worked on in class: through experimentation, the PIBID scholarship students stimulate the curiosity of the high school students, who can then develop the capacity to understand and identify physical phenomena through concrete examples. Knowing how to identify these phenomena, and where they are present in the students' lives, makes the learning process more meaningful and pleasant.
This proposal makes it achievable for the students to practice science, to appropriate concepts that remain complex in traditional classes, and to overcome the common preconception that physics is something distant, present only in books. This preconception is extremely harmful to the process of scientific knowledge construction. This kind of learning, through experimentation, makes the students not only accumulate knowledge but also appropriate it, along with experimental procedures and even the space provided by the school. The PIBID scholarship students, as future teachers, also have the opportunity to try experimentation classes, to intervene in the classes, and to have contact with their future career. This opportunity allows the students to reflect meaningfully on the practices carried out and consequently on the learning methods. Through this project, we found that the high school students stay focused on the experiment longer than in a traditional expository class; as a participative activity, the students became more involved. We also found that the dropout percentage of physics undergraduate students at our Institute is smaller than before the PIBID program started.
Keywords: innovation, projects, PIBID, physics, pre-service teacher experiences
Procedia PDF Downloads 347
902 Variation of Lexical Choice and Changing Need of Identity Expression
Authors: Thapasya J., Rajesh Kumar
Abstract:
Language plays complex roles in society. Previous studies on language and society explain their interconnected, complementary and complex interactions; those studies primarily focused on variation in language. Variation being fundamental to language, these studies navigated the question of personal and social identity through language variation and established that language variation and identity are interconnected. This paper analyses sociolinguistic variation at the lexical level and how the lexical choice of the speaker(s) shapes their identity. It obtains primary data from the lexicon of the Mappila dialect of Malayalam spoken by the members of the Mappila (Muslim) community of Kerala. The variation in lexical choice is analysed by collecting 15-minute speech samples from four different age groups of Mappila dialect speakers. Various contexts were analysed, and the frequency of borrowed words in each instance was calculated to reach a conclusion on how the variation is happening in the speech community. The paper shows how the lexical choice of the speakers can be socially motivated and can play a part in shaping and changing identities. Lexical items, or vocabulary, clearly signal group identity and personal identity. The Mappila dialect of Malayalam was rich in frequently used borrowed words from Arabic, Persian and Urdu. There was a deliberate attempt to show identity as a Mappila community member, which derived from the socio-political situation of those days. This created a clear variation between the Mappila dialect and other dialects of Malayalam at the surface level, motivated by the wish to create and establish the identity of a person as a member of the Mappila community.
Historically, this kind of linguistic variation was strongly motivated by socio-political factors intertwined with the historical facts about the origin and spread of Islam in the region; people from the Mappila community were highly motivated to project their identity as Mappilas because of the social insecurities they had faced before accepting the religion. Thus, the deliberate inclusion of Arabic, Persian and Urdu words in their speech helped in showing their identity. However, the socio-political situations and factors present at the origin of the Mappila community have changed over time. The social motivation for indicating their identity as Mappilas no longer exists, and thus the frequency of words borrowed from Arabic, Persian and Urdu has been reduced in their speech. Apart from religious terms, the borrowed words from these languages are now very few. The analysis, carried out on changes in the language of the speakers according to their age, found significant variation between generations; literacy plays a major role in this variation process. The need to project a specific identity varies with changes in the socio-political scenario, and variation in any language can shape identity to suit the changing socio-political situation.
Keywords: borrowings, dialect, identity, lexical choice, literacy, variation
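The frequency calculation described above, the share of borrowed lexical items in a speech sample computed per age group, can be sketched as follows. The token lists and the borrowed-word set are placeholders, not the study's actual lexicon:

```python
# Hedged sketch of the borrowing-rate calculation: fraction of tokens in a
# transcribed speech sample that come from a borrowed (Arabic/Persian/Urdu-
# origin) word list, computed per age group. All word data are invented.
from collections import Counter

BORROWED = {"kitab", "duniya", "khabar"}   # hypothetical borrowed items

def borrowing_rate(tokens):
    """Fraction of tokens drawn from the borrowed-word list."""
    counts = Counter(t.lower() for t in tokens)
    borrowed = sum(n for w, n in counts.items() if w in BORROWED)
    return borrowed / max(sum(counts.values()), 1)

# Placeholder samples for two age groups (not the study's transcripts).
samples = {
    "60+":   "kitab duniya khabar veedu kitab".split(),
    "18-30": "veedu khabar paattu paadam illam".split(),
}
rates = {group: borrowing_rate(toks) for group, toks in samples.items()}
```

Comparing such rates across the four age groups would show the generational decline in borrowing that the study reports.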
Procedia PDF Downloads 241
901 Applying Miniaturized Near Infrared Technology for Commingled and Microplastic Waste Analysis
Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero
Abstract:
Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension in the range of 1 µm to 1000 µm (1 mm), is an unfortunate indication of the advancement of the Anthropocene age on Earth. Microplastics formed through natural weathering processes are termed secondary microplastics, while those synthesized in industry are called primary microplastics. Their presence from the highest peaks to the deepest explored points of the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a legitimate and authentic analytical technique to sample, analyze, and quantify MPs is still in the development and testing stages. Among the characterization techniques, vibrational spectroscopic techniques are widely adopted in the field of polymers, and their ongoing miniaturization is on the way to revolutionizing the plastic recycling industry. In this scenario, the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometric tools for the qualitative and quantitative analysis of urban plastic waste collected from a recycling plant, and of a microplastic mixture fragmented in the lab, were investigated. Based on the Resin Identification Code, 250 plastic samples were used for the macroplastic analysis and to set up a library of polymers. Subsequently, the MicroNIR spectra were analysed through the application of multivariate modelling. Principal Component Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA analysis, a supervised classification tool was applied in order to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was built.
For the microplastic analysis, the three most abundant polymers in plastic litter, PE, PP, and PS, were mechanically fragmented in the laboratory to micron size. Blends of these three microplastics were prepared following a designed ternary composition plot. After the exploratory PCA analysis, a quantitative Partial Least Squares Regression (PLSR) model allowed prediction of the percentage of each microplastic in the mixtures. From a complete dataset of 63 compositions, the PLS model was calibrated with 42 data points and used to predict the composition of the 21 unknown mixtures of the test set. The advantage of the consolidated NIR chemometric approach lies in the quick evaluation of whether a sample is macro or micro, contaminated or not, and coloured or not, with no sample pre-treatment. The technique can be used with larger sample volumes and even allows on-site evaluation, thereby satisfying the need for a high-throughput strategy.
Keywords: chemometrics, microNIR, microplastics, urban plastic waste
Procedia PDF Downloads 168
900 Integration of Gravity and Seismic Methods in the Geometric Characterization of a Dune Reservoir: Case of the Zouaraa Basin, NW Tunisia
Authors: Marwa Djebbi, Hakim Gabtni
Abstract:
Gravity is a continuously advancing method that has become a mature technology for geological studies. It is increasingly used to complement and constrain traditional seismic data, and even as the only tool to obtain information on the subsurface. In fact, in some regions the seismic data, if available, are of poor quality and hard to interpret. Such is the case for the current study area. The Nefza zone is part of the Tellian fold-and-thrust belt domain in the northwest of Tunisia. It is essentially made of a pile of allochthonous units resulting from a major Neogene tectonic event. Its tectonic and stratigraphic developments have always been subjects of controversy. Considering the geological and hydrogeological importance of this area, a detailed interdisciplinary study has been conducted integrating geology, seismic and gravity techniques. The interpretation of the gravity data allowed the delimitation of the dune reservoir and the identification of the regional lineaments contouring the area. It revealed the presence of three gravity lows that correspond to the Zouara and Ouchtata dunes, separated by a positive gravity axis following the Ain Allega-Aroub Er Roumane axis. The Bouguer gravity map illustrated the compartmentalization of the Zouara dune into two depressions separated by a NW-SE anomaly trend. This compartmentalization was confirmed by the vertical derivative map, which showed the individualization of two depressions with slightly different anomaly values. The horizontal gravity gradient magnitude was computed in order to determine the different geological features present in the studied area. The latter indicated the presence of NE-SW parallel folds along the major Atlasic direction. NW-SE and E-W trends were also identified. Tracing the maxima confirmed the NE-SW direction through the presence of NE-SW faults, mainly the Ghardimaou-Cap Serrat fault.
The poor quality of the available seismic sections and the absence of borehole data in the region, except for a few hydraulic wells that have been drilled and that show the heterogeneity of the dune substratum, required gravity modeling of this challenging area in order to characterize the geometry of the dune reservoir and to determine the stratigraphic series underneath these deposits. For more detailed and accurate results, the scale of study will be reduced in coming research, and a more precise method, the 4D microgravity survey, will be elaborated. This approach is an extension of the gravity method whose fourth dimension is time. It will allow continuous and repeated monitoring of fluid movement in the subsurface at the microgal (µGal) scale. The gravity effect is a result of the monthly variation of the dynamic groundwater level, which correlates with rainfall during different periods.
Keywords: 3D gravity modeling, dune reservoir, heterogeneous substratum, seismic interpretation
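The horizontal gravity gradient magnitude used above is simply HGM = sqrt((dg/dx)^2 + (dg/dy)^2) evaluated on the anomaly grid; a minimal sketch on a synthetic grid (not the Zouaraa Bouguer data):

```python
# Sketch of the horizontal gradient magnitude (HGM) computation used to
# outline structural lineaments on a gravity anomaly map. The input here is
# a synthetic Bouguer-like grid, not the survey data.
import numpy as np

def horizontal_gradient_magnitude(grid, dx=1.0, dy=1.0):
    """Return the HGM of a 2-D gravity anomaly grid (same shape as input)."""
    gy, gx = np.gradient(grid, dy, dx)   # derivatives along rows, columns
    return np.hypot(gx, gy)

# Synthetic anomaly: a linear E-W ramp, whose HGM is constant everywhere.
x = np.linspace(0.0, 10.0, 21)
grid = np.tile(2.0 * x, (21, 1))         # dg/dx = 2 mGal/km, dg/dy = 0
hgm = horizontal_gradient_magnitude(grid, dx=x[1] - x[0])
```

On real data, the ridges (local maxima) of the HGM map are what get traced as fault and contact lineaments, as described in the abstract.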
Procedia PDF Downloads 305
899 Qualitative Characterization of Proteins in Common and Quality Protein Maize Corn by Mass Spectrometry
Authors: Benito Minjarez, Jesse Haramati, Yury Rodriguez-Yanez, Florencio Recendiz-Hurtado, Juan-Pedro Luna-Arias, Salvador Mena-Munguia
Abstract:
During the last decades, the world has experienced rapid industrialization and an expanding economy, favoring a demographic boom. As a consequence, countries around the world have focused on developing new strategies for the production of different farm products in order to meet future demands, seeking to improve the major food products for both humans and livestock. Corn, after wheat and rice, is the third most important crop globally and is the primary food source for both humans and livestock in many regions around the globe. In addition, maize (Zea mays) is an important source of protein, accounting for up to 60% of the daily human protein supply. Generally, cereal grains have proteins of relatively low nutritional value compared with proteins from meat. In the case of corn, much of the protein is found in the endosperm (75 to 85%) and is deficient in two essential amino acids, lysine and tryptophan. This deficiency results in an imbalance of amino acids and low protein content; normal maize varieties have less than half of the recommended amino acids for human nutrition. In addition, studies have shown that this deficiency is associated with symptoms of growth impairment, anemia, hypoproteinemia, and fatty liver. Because most presently available maize varieties do not contain the quality and quantity of protein necessary for a balanced diet, different countries have focused on research into quality protein maize (QPM). Researchers have characterized QPM, noting that these varieties may contain 70 to 100% more of the amino acid residues essential for animal and human nutrition, lysine and tryptophan, than common corn. Several countries in Africa and Latin America, as well as China, have incorporated QPM into their agricultural development plans.
Large parts of these countries have chosen a specific QPM variety based on their local needs and climate. Reviews have described maize breeding methods and have revealed the lack of studies on the genetic and proteomic diversity of proteins in QPM varieties and on their genetic relationships with normal maize varieties. Therefore, molecular marker identification using tools such as mass spectrometry may accelerate the selection of plants that carry the desired proteins with high lysine and tryptophan concentrations. To date, QPM lines have played a very important role in alleviating malnutrition, and better characterization of these lines would provide a valuable nutritional enhancement for use in the resource-poor regions of the world. Thus, the objective of this study was to identify proteins in QPM maize in comparison with a common maize line as a control.
Keywords: corn, mass spectrometry, QPM, tryptophan
Procedia PDF Downloads 293
898 Integrating Non-Psychoactive Phytocannabinoids and Their Cyclodextrin Inclusion Complexes into the Treatment of Glioblastoma
Authors: Kyriaki Hatziagapiou, Konstantinos Bethanis, Olti Nikola, Elias Christoforides, Eleni Koniari, Eleni Kakouri, George Lambrou, Christina Kanaka-Gantenbein
Abstract:
Glioblastoma multiforme (GBM) remains a serious health challenge, as current therapeutic modalities continue to yield unsatisfactory results, with the average survival rarely exceeding 1-2 years. Natural compounds still provide some of the most promising approaches for discovering new drugs. The non-psychotropic cannabidiol (CBD), derived from Cannabis sativa L., provides such promise. CBD is endowed with anticancer, antioxidant, and genoprotective properties, as established in in vitro and in vivo experiments. CBD's selectivity towards cancer cells and its safe profile suggest its usage in cancer therapies. However, the bioavailability of oral CBD is low due to poor aqueous solubility, erratic gastrointestinal absorption, and significant first-pass metabolism, hampering its therapeutic potential and resulting in a variable pharmacokinetic profile. In this context, CBD can take great advantage of nanomedicine-based formulation strategies. Cyclodextrins (CDs) are cyclic oligosaccharides used in the pharmaceutical industry to incorporate apolar molecules inside their hydrophobic cavity, increasing their stability, water solubility, and bioavailability or decreasing their side effects. CBD inclusion complexes with CDs could be a good strategy to improve its properties, like solubility and stability, and to harness its full therapeutic potential. The current research aims to study the potential cytotoxic effect of CBD and of the CBD-CD complexes CBD-RMβCD (randomly methylated β-cyclodextrin) and CBD-HPβCD (hydroxypropyl-β-cyclodextrin) on the A172 glioblastoma cell line. CBD is diluted in 10% DMSO, and CBD/CD solutions are prepared by mixing solid CBD, solid CDs, and dH2O. For the biological assays, A172 cells are incubated with a range of concentrations of CBD, CBD-RMβCD, CBD-HPβCD, RMβCD, and HPβCD (0.03125-4 mg/ml) for 24, 48, and 72 hours. Analysis of cell viability after incubation with the compounds is performed with the Alamar Blue viability assay.
CBD's dilution in 10% DMSO was inadequate, as crystals were observed; thus, cytotoxicity experiments were not assessed for this preparation. CBD's solubility is enhanced in the presence of both CDs. The CBD/CD complexes exert significant cytotoxicity in a dose- and time-dependent manner (p < 0.005 for cells exposed to any concentration at 48, 72, and 96 hours versus cells not exposed); as their concentration and the time of exposure increase, the reduction of resazurin to resorufin decreases, indicating a reduction in cell viability. The cytotoxic effect is more pronounced in cells exposed to CBD-HPβCD at all concentrations and time points. RMβCD and HPβCD at the highest concentration of 4 mg/ml also exerted antitumor action per se, manifesting cell growth inhibition. The results of our study could form the basis of research on the use of natural products and their inclusion complexes as anticancer agents and on the shift to targeted therapy with higher efficacy and limited toxicity. Acknowledgments: The research is partly funded by ΙΚΥ (State Scholarships Foundation) - Post-doc Scholarships-Partnership Agreement 2014-2020.
Keywords: cannabidiol, cyclodextrins, glioblastoma, hydroxypropyl-β-cyclodextrin, randomly-methylated-β-cyclodextrin
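A sketch of how Alamar Blue (resazurin) readings are commonly normalized to percent viability, blank-subtracted treated signal relative to the untreated control; the fluorescence values below are invented illustrations, not the study's raw data:

```python
# Hedged sketch of a common Alamar Blue normalization (all readings are
# made-up numbers): viability (%) = 100 * (treated - blank) / (control - blank).
def viability_percent(treated, control, blank):
    """Percent viability of a treated well relative to the untreated control."""
    return 100.0 * (treated - blank) / (control - blank)

blank, control = 500.0, 20500.0               # hypothetical fluorescence units
doses_mg_ml = [0.03125, 0.125, 0.5, 2.0, 4.0]  # spans the abstract's range
signals = [19500.0, 17500.0, 12500.0, 6500.0, 2500.0]  # invented readings
curve = [viability_percent(s, control, blank) for s in signals]
```

A monotonically falling curve like this one is what the dose- and time-dependent cytotoxicity described above would look like at a single time point.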
Procedia PDF Downloads 185
897 Particle Size Characteristics of Aerosol Jets Produced by a Low Powered E-Cigarette
Authors: Mohammad Shajid Rahman, Tarik Kaya, Edgar Matida
Abstract:
Electronic cigarettes, also known as e-cigarettes, may have become a tool to improve smoking cessation due to their ability to provide nicotine at a selected rate. Unlike traditional cigarettes, which produce toxic elements from tobacco combustion, e-cigarettes generate aerosols by heating a liquid solution (commonly a mixture of propylene glycol, vegetable glycerin, nicotine and some flavoring agents). However, caution still needs to be taken when using e-cigarettes due to the presence of addictive nicotine and some harmful substances produced by the heating process. The particle size distribution (PSD) and the associated velocities generated by e-cigarettes have a significant influence on aerosol deposition in different regions of the human respiratory tract. On another note, low actuation power is beneficial in aerosol-generating devices since it exhibits a reduced emission of toxic chemicals. In the case of e-cigarettes, low heating powers can be considered powers below 10 W, compared to the wide range of powers (0.6 to 70.0 W) studied in the literature. Given its importance for inhalation risk reduction, a deeper understanding of the particle size characteristics of e-cigarettes demands thorough investigation. However, a comprehensive study of the PSD and velocities of e-cigarettes under standard testing conditions at relatively low heating powers is still lacking. The present study aims to measure the particle number count and size distribution of the undiluted aerosols of a recent fourth-generation e-cigarette at low powers, up to 6.5 W, using a real-time particle counter (time-of-flight method). Also, the temporal and spatial evolution of the particle size and velocity distributions of the aerosol jets are examined using the phase Doppler anemometry (PDA) technique. To the authors' best knowledge, the application of PDA to e-cigarette aerosol measurement is rarely reported.
In the present study, preliminary results on the particle number count of undiluted aerosols measured by the time-of-flight method showed that an increase in heating power from 3.5 W to 6.5 W resulted in an enhanced asymmetry in the PSD, deviating from a log-normal distribution. This can be considered an artifact of the rapid vaporization, condensation and coagulation processes acting on the aerosols at higher heating power. A novel mathematical expression, combining exponential, Gaussian and polynomial (EGP) distributions, was proposed and successfully describes the asymmetric PSD. The count median aerodynamic diameter and the geometric standard deviation lay within ranges of about 0.67 µm to 0.73 µm and 1.32 to 1.43, respectively, as the power varied from 3.5 W to 6.5 W. Laser Doppler velocimetry (LDV) and PDA measurements suggested a typical centerline streamwise mean velocity decay of the aerosol jet along with a reduction in particle sizes. In the final submission, a thorough literature review, a detailed description of the experimental procedure and a discussion of the results will be provided. The particle size and turbulent characteristics of the aerosol jets will be further examined by analyzing the arithmetic mean diameter, volumetric mean diameter, volume-based mean diameter, streamwise mean velocity and turbulence intensity. The present study has potential implications for PSD simulation and the validation of aerosol dosimetry models, leading to improvements in related aerosol-generating devices.
Keywords: e-cigarette aerosol, laser Doppler velocimetry, particle size distribution, particle velocity, phase Doppler anemometry
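The two summary statistics quoted above can be computed directly from a particle size sample; a sketch on a synthetic log-normal sample with a count median diameter of 0.7 µm and a geometric standard deviation of 1.4, values chosen to sit inside the reported ranges:

```python
# Sketch of the two PSD summary statistics reported above, computed from a
# synthetic (not measured) log-normally distributed particle sample:
# the count median diameter (CMD) and the geometric standard deviation (GSD).
import numpy as np

rng = np.random.default_rng(2)
# Synthetic sample: CMD = 0.7 um, GSD = 1.4 (illustrative parameters).
d_um = rng.lognormal(mean=np.log(0.7), sigma=np.log(1.4), size=100_000)

cmd = np.median(d_um)                  # count median diameter, um
gsd = np.exp(np.std(np.log(d_um)))     # geometric standard deviation
```

For a perfect log-normal PSD these two numbers fully describe the distribution; the asymmetry observed at 6.5 W is what motivates the EGP expression instead.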
Procedia PDF Downloads 54
896 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model
Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini
Abstract:
The correct allocation of improvement programs has attracted growing interest in recent years. Due to their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be most effective and survive the strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity-constrained resources. This is the research gap studied in depth in this work. The purpose of this work is to identify the best strategy for allocating improvement programs in a flow shop with two capacity-constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units on average per month. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair. Lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed towards the two capacity-constrained resources, (x) time between failures improvement directed towards the two capacity-constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed and hybrid.
Several comparisons of the effects of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity-constrained resources, companies should use hybrid approaches. An important contribution to academia is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strong capacity-constrained resources (more than 95% utilization) is an important contribution to the literature. Another important contribution is the allocation problem with two CCRs and the possibility of having floating capacity-constrained resources. The results provided the best improvement strategies considering the different strategies for allocating improvement programs and the different positions of the capacity-constrained resources. Finally, both hybrid strategies, hybrid time to repair improvement and hybrid time between failures improvement, delivered better results than the respective distributed strategies. The main limitations of this study concern the flow shop analyzed. Future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity-constrained resources.
Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures
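A toy availability calculation, with invented numbers and far simpler than the System Dynamics-Factory Physics model, illustrates how a focused versus a distributed allocation of the same MTTR improvement budget can play out on two CCRs, assuming an unbuffered serial line where station availabilities multiply:

```python
# Toy sketch (invented MTBF/MTTR values, not the paper's simulation):
# availability = MTBF / (MTBF + MTTR); in an unbuffered serial line the
# stations' availabilities multiply, so a higher product means more
# effective capacity and, via Factory Physics relations, a shorter lead time.
def availability(mtbf, mttr):
    return mtbf / (mtbf + mttr)

def line_availability(stations):
    """Product of station availabilities for an unbuffered serial line."""
    av = 1.0
    for mtbf, mttr in stations:
        av *= availability(mtbf, mttr)
    return av

base        = [(100.0, 10.0), (100.0, 10.0)]   # (MTBF h, MTTR h) per CCR
focused     = [(100.0, 5.0),  (100.0, 10.0)]   # whole budget halves one MTTR
distributed = [(100.0, 7.5),  (100.0, 7.5)]    # same budget split 25%/25%

results = {
    name: line_availability(s)
    for name, s in [("base", base), ("focused", focused),
                    ("distributed", distributed)]
}
```

In this toy case the focused cut edges out the distributed one, consistent in direction with the result reported above, but the margin is tiny; the paper's simulation captures queueing and variability dynamics this sketch ignores.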
Procedia PDF Downloads 126
895 The Influence of Thermal Radiation and Chemical Reaction on MHD Micropolar Fluid in the Presence of Heat Generation/Absorption
Authors: Binyam Teferi
Abstract:
Numerical and theoretical analysis of the mixed convection flow of a magnetohydrodynamic micropolar fluid over a stretching capillary in the presence of thermal radiation, chemical reaction, viscous dissipation, and heat generation/absorption has been carried out. The non-linear partial differential equations of momentum, angular velocity, energy, and concentration are converted into ordinary differential equations using similarity transformations, so that they can be solved numerically. The dimensionless governing equations are solved using the fourth- and fifth-order Runge-Kutta method along with the shooting technique. The effect of the physical parameters, viz. the micropolar parameter, unsteadiness parameter, thermal buoyancy parameter, concentration buoyancy parameter, Hartmann number, spin gradient viscosity parameter, microinertial density parameter, thermal radiation parameter, Prandtl number, Eckert number, heat generation or absorption parameter, Schmidt number, and chemical reaction parameter, on the flow variables, viz. the velocity of the micropolar fluid, microrotation, temperature, and concentration, has been analyzed and discussed graphically. MATLAB code is used to analyze the numerical and theoretical results. From the simulation study, it can be concluded that an increment of the micropolar parameter, Hartmann number, unsteadiness parameter, and thermal and concentration buoyancy parameters results in a decrement of the velocity of the micropolar fluid; the microrotation of the micropolar fluid decreases with an increment of the micropolar parameter, unsteadiness parameter, microinertial density parameter, and spin gradient viscosity parameter; the temperature profile of the micropolar fluid decreases with an increment of the thermal radiation parameter, Prandtl number, micropolar parameter, unsteadiness parameter, heat absorption, and viscous dissipation parameter; and the concentration of the micropolar fluid decreases as the unsteadiness parameter, Schmidt number, and chemical reaction parameter increase.
Furthermore, computational values of the local skin friction coefficient, local wall couple coefficient, local Nusselt number, and local Sherwood number for different values of the parameters have been investigated. In this paper, the following important results are obtained: an increment of the micropolar parameter and Hartmann number results in a decrement of the velocity of the micropolar fluid; microrotation decreases with an increment of the microinertial density parameter; temperature decreases with increasing values of the thermal radiation parameter and viscous dissipation parameter; concentration decreases as the values of the Schmidt number and chemical reaction parameter increase; the coefficient of local skin friction is enhanced with an increase in the values of both the unsteadiness parameter and the micropolar parameter; increasing values of the unsteadiness parameter and micropolar parameter result in an increment of the local couple stress; an increment of the values of the unsteadiness parameter and thermal radiation parameter results in an increment of the rate of heat transfer; and as the values of the Schmidt number and unsteadiness parameter increase, the Sherwood number decreases.
Keywords: thermal radiation, chemical reaction, viscous dissipation, heat absorption/generation, similarity transformation
Procedia PDF Downloads 134