Search results for: single African air transport market
1096 Pharmacovigilance in Hospitals: Retrospective Study at the Pharmacovigilance Service of UHE-Oran, Algeria
Authors: Nadjet Mekaouche, Hanane Zitouni, Fatma Boudia, Habiba Fetati, A. Saleh, A. Lardjam, H. Geniaux, A. Coubret, H. Toumi
Abstract:
Medicines have undeniably played a major role in prolonging life and improving its quality. While the efficacy of a drug remains a lever for innovation, its benefit/risk balance is not always assured and it does not always have the expected effects. Prior to marketing, knowledge about adverse drug reactions is incomplete. Once on the market, phase IV drug studies begin: for years afterwards, the drug is prescribed with less oversight to a large number of very heterogeneous patients, often in combination with other drugs. It is at this point that previously unknown adverse effects may appear, hence the need for a pharmacovigilance system. Pharmacovigilance comprises all methods for detecting, evaluating, reporting and preventing the risks of adverse drug reactions. The most severe adverse events occur frequently in hospital, and a significant proportion of adverse events result in hospitalizations. In addition, the consequences of hospital adverse events in terms of length of stay, mortality and costs are considerable. It therefore appears necessary to develop ‘hospital pharmacovigilance’ aimed at reducing the incidence of adverse reactions in hospitals. The most widely used monitoring method in pharmacovigilance is spontaneous notification. However, underreporting of adverse drug reactions is common in many countries and is a major obstacle to pharmacovigilance assessment. It is in this context that this study describes the experience of the pharmacovigilance service at the University Hospital of Oran (EHUO). This retrospective study, extending from 2011 to 2017, was carried out on archived records of declarations collected by the EHUO Pharmacovigilance Department. Reports were collected by two methods: ‘spontaneous notification’ and ‘active pharmacovigilance’ targeting certain clinical services. We counted 217 reports, involving 56% female patients and 46% male patients. Age ranged from 5 to 78 years, with an average of 46 years. The most common adverse reaction was drug-induced toxidermia. The drugs in question were, according to the ATC classification, essentially anti-infectives followed by anticancer drugs. As regards the evolution of declarations by year, a low rate of notification was noted in 2011. For this reason, we set up an active approach in some services, where a designated resident attended staff meetings every week. This resulted in an increase in the number of reports, which came essentially from the services where the active approach was installed. This highlights the need for ongoing communication between all relevant health actors to stimulate reporting and secure drug treatments.
Keywords: adverse drug reactions, hospital, pharmacovigilance, spontaneous notification
Procedia PDF Downloads 172
1095 Profile of Programmed Death Ligand-1 (PD-L1) Expression and PD-L1 Gene Amplification in Indonesian Colorectal Cancer Patients
Authors: Akterono Budiyati, Gita Kusumo, Teguh Putra, Fritzie Rexana, Antonius Kurniawan, Aru Sudoyo, Ahmad Utomo, Andi Utama
Abstract:
The presence of programmed death ligand-1 (PD-L1) has been used in multiple clinical trials and approved as a biomarker for selecting patients more likely to respond to immune checkpoint inhibitors. However, the expression of PD-L1 is regulated in different ways, which leads to a different significance of its presence. Positive PD-L1 within tumors may result from two mechanisms: induced PD-L1 expression due to T-cell presence, or a genetic mechanism that leads to constitutive PD-L1 expression. Amplification of the PD-L1 gene is one genetic mechanism that causes an increase in PD-L1 expression. In the case of colorectal cancer (CRC), immune checkpoint inhibitors have been recommended for patients with microsatellite instability (MSI). Although the correlation between PD-L1 expression and MSI status has been widely studied, the precise mechanism of PD-L1 gene activation in CRC patients, particularly in the MSI population, has yet to be clarified. In the present study we profiled 61 archived formalin-fixed paraffin-embedded CRC specimens from patients admitted to Medistra Hospital, Jakarta, in 2010-2016. Immunohistochemistry was performed to measure expression of PD-L1 in tumor cells as well as MSI status, using antibodies against PD-L1 and the MMR proteins (MLH1, MSH2, PMS2 and MSH6), respectively. PD-L1 expression was measured on tumor cells with a cut-off of 1%, whereas loss of nuclear MMR protein expression in tumor cells, but not in normal or stromal cells, indicated presence of MSI. The subset of PD-L1-positive patients was then assessed for copy number variations (CNVs) using single-tube TaqMan Copy Number Assays for the PD-L1 gene (CD274). We also assessed KRAS mutations to profile possible genetic mechanisms leading to the presence or absence of PD-L1 expression. Analysis of the 61 CRC patients revealed that 15 (24%) expressed PD-L1 on their tumor cell membranes. The prevalence of surface membrane PD-L1 was significantly higher in patients with MSI (87%; 7/8) than in patients with microsatellite-stable (MSS) tumors (15%; 8/53) (P=0.001). Although amplification of the PD-L1 gene was not found among PD-L1-positive patients, low-level amplification of the PD-L1 gene was more commonly observed in MSS patients (75%; 6/8) than in MSI patients (43%; 3/7). Additionally, we found that 26% of CRC patients harbored KRAS mutations (16/61); however, KRAS status did not correlate with PD-L1 expression. Our data suggest that amplification of PD-L1 is not the genetic mechanism underlying upregulation of PD-L1 expression in CRC patients. However, further studies are warranted to confirm these results.
Keywords: colorectal cancer, gene amplification, microsatellite instable, programmed death ligand-1
Procedia PDF Downloads 221
1094 Buoyant Gas Dispersion in a Small Fuel Cell Enclosure: A Comparison Study Using Plain and Pressed Louvre Vent Passive Ventilation Schemes
Authors: T. Ghatauray, J. Ingram, P. Holborn
Abstract:
The transition from a ‘carbon rich’, fossil-fuel-dependent society to a ‘sustainable’ and ‘renewable’ hydrogen-based one will see the deployment of hydrogen fuel cells (HFCs) in transport applications and in the generation of heat and power for buildings, as part of a decentralised power network. Many deployments will be low-power HFCs for domestic combined heat and power (CHP) and commercial ‘transportable’ HFCs for environmental situations, such as lighting and telephone towers. For broad commercialisation of small fuel cells to be achieved, there needs to be significant confidence in their safety in both domestic and environmental applications. Low-power HFCs are housed in protective steel enclosures. Standard enclosures have plain rectangular ventilation openings intended for thermal management of electronics, not the dispersion of a buoyant gas. Degradation of the HFC or supply pipework in use could lead to a low-level leak and a build-up of hydrogen gas in the enclosure. Hydrogen’s wide flammable range (4-75%) is a significant safety concern: ineffective enclosure ventilation can allow flammable mixtures to develop, with the risk of explosion. Mechanical ventilation is effective at managing enclosure hydrogen concentrations, but drains HFC power and is vulnerable to failure. This is undesirable in low-power and remote installations, so reliable passive ventilation systems are preferred. Passive ventilation depends upon buoyancy-driven flow, with the size, shape and position of ventilation openings critical for producing predictable flows and maintaining low buoyant gas concentrations. For environmentally sited enclosures, ventilation openings with pressed horizontal and angled louvres are preferred, to protect the HFC and electronics inside. Adding louvres carries an economic cost, but also a safety concern: a question arises over whether pressed louvre vents impair enclosure passive ventilation performance compared to plain vents of the same opening area. Comparative tests of same-opening-area pressed louvre and plain vents were undertaken on a small enclosure (0.144 m³). A displacement ventilation arrangement was incorporated into the enclosure, with opposing upper and lower ventilation openings, and a range of vent areas was tested. Helium (used as a safe analogue for hydrogen) was released from a 4 mm nozzle at the base of the enclosure to simulate a hydrogen leak, at leak rates from 1 to 10 lpm. Helium sensors recorded concentrations at eight heights in the enclosure, which was otherwise empty. These tests determined that pressed and angled louvre ventilation openings impaired the passive ventilation flow and increased helium concentrations in the enclosure. High-level stratified buoyant gas layers were also deeper than with plain vent openings and were within the flammable range. The presence of gas within the flammable range is of concern, particularly as adding the fuel cell and electronics to the enclosure would further reduce the available volume and increase concentrations. The opening area of louvre vents would need to be greater than that of equivalent plain vents to achieve comparable ventilation flows, or alternative schemes would need to be considered.
Keywords: enclosure, fuel cell, helium, hydrogen safety, louvre vent, passive ventilation
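As a rough sanity check on such leak scenarios, a simple well-mixed mass balance relates the steady-state gas fraction to the leak and ventilation flow rates. This is a deliberate simplification (the experiments above show that the gas actually stratifies rather than mixing uniformly), and the ventilation flow figure used below is invented for illustration.

```python
# Hydrogen flammable range in air, by volume fraction (4-75%).
H2_LFL, H2_UFL = 0.04, 0.75

def steady_state_fraction(leak_lpm, vent_lpm):
    """Steady-state gas volume fraction in a well-mixed enclosure:
    leaked gas divided by the total outflow (leak + ventilation)."""
    return leak_lpm / (leak_lpm + vent_lpm)

def is_flammable(fraction):
    """True when the mixture falls inside hydrogen's flammable range."""
    return H2_LFL <= fraction <= H2_UFL

# A 10 lpm leak against 240 lpm of passive ventilation (invented figure)
# sits exactly at the 4% lower flammability limit.
frac = steady_state_fraction(10, 240)
```

Under this model, any impairment of the passive ventilation flow (as observed with the louvred vents) directly raises the steady-state fraction toward and past the lower flammability limit.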
Procedia PDF Downloads 271
1093 Cultural Awareness, Intercultural Communication Competence and Academic Performance of Foreign Students Towards an Education ASEAN Integration in Global Education
Authors: Rizalito B. Javier
Abstract:
Research has shown that foreign students with higher levels of cultural awareness and intercultural communication competence tend to have better academic performance. This study aimed to determine the cultural awareness, intercultural communication competence, and academic performance of foreign students, and the relationships among these variables. The methods used were a descriptive-comparative and correlational research design with a quota purposive sampling technique, while frequency counts and percentages, means and standard deviations, t- and F-tests, and chi-square were used to analyze the data. The results revealed that the majority of the respondents were in the 21-25 age bracket, mostly male, all single, and mostly citizens of Papua New Guinea, Angola, Vanuatu, Tanzania, Nigeria, Korea, Rwanda, and Myanmar. The most common language spoken was English, many respondents were born-again Christians, the majority took the BS Business Management degree program, their studies were mainly supported by their parents, they had stayed in the Philippines for 3-4 years, most had attended five to six cultural awareness/competence workshop-seminars, the majority of their parents’ occupations were family-owned businesses, and family monthly income was P61,000 and above. The respondents were highly aware of their culture in terms of clients’ issues; in intercultural communication competence, they were only slightly aware in terms of intercultural awareness, while the foreign students earned good marks in their average academic performance. The respondents’ profiles in terms of age, gender, civil status, nationality, course/degree program taken, support for their studies, length of stay, workshops attended, and parents’ occupation showed significant differences in academic performance, but type of family, language spoken, religion, and family monthly income did not. Moreover, cultural awareness was significantly related to intercultural communication competence, but neither was related to academic performance. It is recommended that foreign students be provided with cultural orientation programs, offered language support services, and given intercultural exchange activities and inclusive teaching practices, to allow them to effectively navigate and interact with people from different cultural backgrounds, fostering a more inclusive and collaborative learning environment.
Keywords: cultural competence, communication competence, intercultural competence, culture-academic performance
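The chi-square analysis named in the methods can be sketched with Pearson's statistic for a 2x2 contingency table, for instance gender versus above/below-average academic performance. The counts below are invented for illustration and are not the study's data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 table of observed counts: sum over cells of (O - E)^2 / E,
    where E = row_total * column_total / grand_total."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Invented counts: rows = male/female, columns = above/below average grades.
stat = chi_square_2x2([[10, 20], [20, 10]])
# With 1 degree of freedom, stat > 3.84 indicates significance at the 5% level.
```

A statistic above the 3.84 critical value would, for example, support the reported finding that gender is associated with differences in academic performance.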
Procedia PDF Downloads 18
1092 Movie and Theater Marketing Using the Potentials of Social Networks
Authors: Seyed Reza Naghibulsadat
Abstract:
The nature of communication includes various forms of media production, among them film and theater. Since social networks emerged, they have brought their own communication capabilities: speed, public access, content production outside media organizations, and the development of critical thinking. They have also expanded access to all kinds of media productions, including movies and theater shows, although this works differently in different conditions and communities. In terms of scale, film has a more general audience while theater has a specialized one. The film industry is built on more modern technologies, whereas theater, based on older forms of communication, carries more intimate and emotional aspects. In general, however, the main focus is the development of access to movies and theater shows, which those involved in this field emphasize because of the capabilities of social networks. This research examines these two areas and the relevant components of each through social networks, as well as the points common to both types of media production. The main goal is to identify the strengths and weaknesses of using social networks for the marketing of movies and theater shows, while also considering the opportunities and threats in this field. With the emergence of social networks, the attractions of these two types of media production can provide the opportunity to become media with greater reach and higher profitability; but the main consideration is the opinions about these capabilities and the ability to use them for film and theater marketing. The main questions of the research are: what are the marketing components for movies and theaters using social media capabilities? What are their strengths and weaknesses? And what opportunities and threats face this market? The research was done with two methods, SWOT analysis and meta-analysis, using non-probability purposive sampling. The results show that the preferred approach is one based on eliminating threats and weaknesses, emphasizing strengths, and exploiting opportunities to develop film and theater marketing based on the capabilities of social networks, within the framework of local cultural values, while presenting achievements on an international or universal scale. This would introduce authentic Iranian culture to foreign enthusiasts through movies and theater art. Accordingly, the model for using the capabilities of social networks for movie or theater marketing, according to the respondents, is one based on SO (strengths-opportunities) strategies, in other words offensive strategies, which take advantage of internal strengths and make maximum use of external situations and opportunities to develop the use of movies and theater performances.
Keywords: marketing, movies, theatrical show, social network potentials
Procedia PDF Downloads 76
1091 Analyzing the Crisis of Liberal Democracy by Investigating Connections Between Deliberative Democratic Theory, Criticism of Neoliberalism and Contemporary Marxist Political Economy
Authors: Inka Maria Vilhelmiina Hiltunen
Abstract:
The crisis of liberal democracy has been recognized across political literature; scholars of Marxist critical political economy and deliberative democracy, as well as critics of neoliberalism, have become concerned about how the rise of populism and authoritarianism, institutional decline, or an overarching economic rationality erode democratic political citizenship in favor of economic technocracy or conservative protectionism. However, even though these bodies of literature recognize the generalized crisis that haunts Western democracies, dialogue between them has been very limited. Drawing from contemporary Marxist perspectives, this article therefore aims at bridging the gap between the criticism of neoliberalism and theories of deliberative democracy. The first section outlines what is meant by neoliberalism, liberal democracy, and the crisis of liberal democracy. The next section explores how contemporary capitalism acts upon society and transforms it. It introduces Jürgen Habermas’ thesis of the ‘colonization of the lifeworld’, Wendy Brown’s analysis of neoliberal rationality, and Étienne Balibar’s concepts of ‘absolute capitalism’ and ‘total subsumption’, which the essay connects in the last section. The third section is concerned with deliberative democratic theory and practice, highlighting the qualitative socio-political impacts of deliberation, as predicted by theorists and shown by empirical studies. The last section draws from contemporary Marxist perspectives to examine whether deliberative democratic theories and practices can resolve the crisis of liberal democracy in the current financially driven era of neoliberal capitalism. By asking this question, the essay considers what is required to reverse the current global trend of rising inequality. If liberal democracy has declined towards commodified and reactionary forms of politics, and if ‘market rationality’ has shaped social agency to the extent that politicians and the public struggle to imagine any alternatives, the most urgent political task is to bring to life a new political imagination based on the democratic ideals of equality, inclusivity, reciprocity, and solidarity, and thereby enable the revision of the transnational institutional design. This part focuses on the hegemonic role of finance and money. The essay concludes that the implementation of substantive global democracy must start from the dissolution of the hegemony of finance, centered on the U.S., and from the remaking of the conditions of socioeconomic reproduction worldwide. However, given the still-present overarching neoliberal status quo, the essay is skeptical of the ideological feasibility of this remaking.
Keywords: deliberative democracy, criticism of neoliberalism, Marxist political economy, crisis of liberal democracy
Procedia PDF Downloads 109
1090 Power Asymmetry and Major Corporate Social Responsibility Projects in Mhondoro-Ngezi District, Zimbabwe
Authors: A. T. Muruviwa
Abstract:
Empirical studies of the current CSR agenda have been dominated by literature from the Global North, at the expense of the nations of the Global South where most TNC operations are located. Owing to the limitations of the current discourse, which is dominated by Western ideas such as voluntarism, philanthropy, the business case and economic gains, scholars have been calling for a new CSR agenda that is South-centred and addresses the needs of developing nations. The development theme has dominated recent literature as scholars concerned with the relationship between business and society have tried to understand its relationship with CSR. Despite a plethora of literature on the roles of corporations in local communities and the impact of CSR initiatives, there is a lack of adequate empirical evidence to help us understand the nexus between CSR and development. For all the claims made about the positive and negative consequences of CSR, there is surprisingly little information about the outcomes it delivers. This study is a response to those claims about the developmental aspect of CSR in developing countries. It offers an empirical basis for assessing the major CSR projects undertaken by a major mining company, Zimplats, in Mhondoro-Ngezi, Zimbabwe. The neo-liberal idea of capitalism and market domination has empowered TNCs to stamp their authority in developing countries. TNCs have made their mark in developing nations by stamping their global private authority, rivalling or implicitly challenging the state in many functions. This dominance of corporate power raises great concerns over tendencies of abuse in terms of environmental, social and human rights issues, as well as over how to make TNCs increasingly accountable. The hegemonic power of TNCs in developing countries has had a tremendous impact on overall CSR practices. While TNCs are key drivers of globalization, they may act responsibly mainly in their Global Northern home countries, where legal mechanisms combine with the fear of the civil society activism associated with corporate scandals. Using a triangulated approach, in which both qualitative and quantitative methods were used, the study found that most CSR projects in Zimbabwe are dominated and directed by Zimplats because of the power it possesses. Most of the major CSR projects are beneficial to the mining company, as they serve its business plans. What was deduced from the study is that the infrastructural development initiatives by Zimplats confirm that CSR is a tool to advance business obligations. This shows that although proponents of CSR might claim that business has a mandate for social obligations to society, we must not forget the dominant idea that the primary function of CSR is to enhance the firm’s profitability.
Keywords: hegemonic power, projects, reciprocity, stakeholders
Procedia PDF Downloads 252
1089 Public Procurement Development Stages in Georgia
Authors: Giorgi Gaprindashvili
Abstract:
Among post-Soviet countries, the reforms carried out in Georgia are one of the best examples of the evolution of public procurement, bringing it close to international procurement standards. In Georgia, public procurement legislation came into force in 1998, and the reform has passed through several stages to take its present form. It should also be noted that countries with economies in transition, including Georgia, implemented their public procurement reforms based on the recommendations and support of the World Bank, the United Nations and other international organizations. The first law on public procurement in Georgia was adopted on December 9, 1998. It aimed to regulate the procurement process of budget organizations and to create a transparent and competitive environment for private companies to access state funds legally. The priorities were identified quite clearly in the wording of the law, but the law could not function at the intended level, for both objective and subjective reasons. The high level of corruption at all levels of governance can be considered the main obstacle; naturally, it had a direct impact on the procurement process, as well as on transparency and the rational use of state funds. These circumstances were the reason that reforms in this sphere continued in order to improve the procurement process; the first wave of reforms began in 2001. The Public Procurement Agency carried out the reform with the World Bank, with the main purpose of streamlining the procurement legislation and harmonizing it with international treaties and agreements. Also with the support of the World Bank, various activities were carried out to raise awareness among participants in the procurement system. Further major changes to the legislation were adopted in May 2005, also directed towards improving and streamlining the procurement process. The third wave of the reform began in 2010; it more or less guaranteed the transparency of the procurement process, which later became the basis for the rational spending of state funds. The reform completely changed the procurement procedures and introduced a new electronic tendering system, which improved the transparency of the process and became the basis for the further development of a competitive environment, itself a prerequisite for rational state spending. The increased number of supplier organizations participating in the procurement process reduced actual costs against estimated costs by 20% to 40%, quite a large saving for the procuring organizations, allowing them to use the freed-up funds for their other needs. From an assessment of the reforms in Georgia in the field of public procurement, it can be concluded that proper regulation of the sector and relevant policy can lead to rational and transparent spending of the budget by the country’s state institutions. The business sector also gains the opportunity to work in competitive market conditions and to carry out preliminary analysis, a prerequisite for future strategy and development.
Keywords: public administration, public procurement, reforms, transparency
Procedia PDF Downloads 366
1088 Factors of Self-Sustainability in Social Entrepreneurship: Case Studies of ACT Group Čakovec and Friskis and Svettis Stockholm
Authors: Filip Majetić, Dražen Šimleša, Jelena Puđak, Anita Bušljeta Tonković, Svitlana Pinchuk
Abstract:
This paper focuses on the self-sustainability aspect of social entrepreneurship (SE). We define SE as a form of entrepreneurship that is social/ecological mission oriented: SE organizations start and run businesses and use them to accomplish their social/ecological missions, i.e. to solve social/ecological problems or fulfill social/ecological needs. Self-sustainability is defined as the capability of an SE organization to operate by relying on the money earned through trading its products in the free market. For various reasons, achieving self-sustainability represents a fundamental business challenge for many SE organizations. Those that are not able to operate on the money made through commercial activities rely, in order to remain active, on alternative, non-commercial streams of income such as grants, donations, and public subsidies. Starting from this widespread challenge, we are interested in exploring elements that could influence self-sustainability in SE organizations. The research goal is therefore to empirically investigate some of the self-sustainability factors of two notable SE organizations from different socio-economic contexts. Qualitative research using a multiple case study approach was conducted. ACT Group Čakovec (ACT) from Croatia was selected for the first case because it is one of the leading and most self-sustainable SE organizations in the region (in 2015, 55% of the organization’s budget came from commercial activities); Friskis&Svettis Stockholm (F&S) from Sweden was selected for the second case because it is a rare example of a completely self-sustainable SE organization in Europe (100% of the organization’s budget comes from commercial activities). The data collection primarily consists of in-depth interviews; additionally, the content of some of the organizations' official materials (e.g. business reports, marketing materials) is analyzed. The interviewees were selected purposively and include six highly ranked F&S members, representing five different levels in the hierarchy of their organization, and five highly ranked ACT members, representing three different levels in the hierarchy of theirs. All of the interviews cover five themes: a) social values of the organization, b) organization of work, c) non-commercial income sources, d) marketing/collaborations, and e) familiarity with industry characteristics and trends. The gathered data is thematically analyzed through a coding process using the Atlas.ti software for qualitative data analysis; open coding is used to create the thematic categories (codes). The research results intend to provide new theoretical insights on factors of SE self-sustainability and, preferably, to encourage practical improvements in the field.
Keywords: Friskis&Svettis, self-sustainability factors, social entrepreneurship, Stockholm
Procedia PDF Downloads 217
1087 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale
Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal
Abstract:
Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009 and, given the current nature of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as the evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline and Bend-Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production between 2008 and 2015, with 1,835 wells from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend-Arch Basin and 724 wells from the Fort Worth Syncline. The data was analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The range of EURs for each basin was loaded into the Palisade @RISK software and a log-normal distribution, typical of Barnett shale wells, was fitted to the dataset. Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50 and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e. P10, P50 and P90. The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (10% discount rate/year), and to determine which scenarios satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of drilling and completion costs) were £1 million, £2 million and £4 million, while the gas price was varied from $2/MCF to $13/MCF, based on Henry Hub spot prices from 2008-2015. Major findings were that wells in the Bend-Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of Barnett shale wells were not economic at any of the finding and development costs, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic at different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery
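The workflow described above (fitting a log-normal EUR distribution, running Monte Carlo iterations, extracting percentile cases, and discounting cash flows at 10% per year) can be sketched in a few lines. The numerical parameters below are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_eur(median_bcf, sigma, n=1000):
    """Monte Carlo sample of EUR from a log-normal distribution
    (illustrative median/sigma, not the paper's fitted parameters)."""
    return rng.lognormal(mean=np.log(median_bcf), sigma=sigma, size=n)

def npv(cash_flows, rate=0.10):
    """Net present value at a yearly discount rate; cash_flows[0] is year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

eur = simulate_eur(median_bcf=2.0, sigma=0.8)
# Percentiles of the simulated distribution. Note that in petroleum
# convention "P10" often denotes the value exceeded 90% of the time
# (the optimistic case); numpy's percentile is the plain quantile.
p10, p50, p90 = np.percentile(eur, [10, 50, 90])
```

A scenario is then economic when its NPV is positive and the 20% rate-of-return and 60-month payback hurdles are met; for example, `npv([-2.0, 0.9, 0.9, 0.9])` discounts three years of net revenue against an up-front finding and development cost.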
Procedia PDF Downloads 299
1086 Surge in U.S. Citizens' Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale
Authors: Marco Sewald
Abstract:
Comparing present to past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to the number of immigrants, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many renunciants not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. Since there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project the question arose as to why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. Since it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the process of expatriation, the chosen methodology is Structural Equation Modeling (SEM), in the first step by re-using current surveys conducted by different researchers among the population of U.S. citizens residing abroad in recent years: surveys questioning the personal situation in the context of tax, compliance, citizenship and likelihood to repatriate to the U.S. In general, SEM allows: (1) representing, estimating and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; (4) modeling measurement error: the degree to which observable variables describe latent variables.
Moreover, SEM is appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered a high correlation, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one of the desired results. Since every SEM comprises two parts, (1) the measurement model (outer model) and (2) the structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and obtain the desired results.
Keywords: expatriation of U.S. citizens, SEM, structural equation modeling, validating
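The outer-model problem described above (highly correlated indicators make the distinct effect of each indicator on the latent variable hard to identify) can be reproduced in a toy simulation. The loadings, noise level and sample size below are illustrative assumptions, not values from the surveys.

```python
import random

def simulate_indicators(n=500, loadings=(0.9, 0.8, 0.7), noise=0.3, seed=1):
    """Toy measurement (outer) model: one latent variable causes each
    observed survey item, item = loading * latent + measurement error."""
    rng = random.Random(seed)
    latent = [rng.gauss(0, 1) for _ in range(n)]
    items = [[l * z + rng.gauss(0, noise) for z in latent] for l in loadings]
    return latent, items

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

latent, items = simulate_indicators()
r12 = pearson(items[0], items[1])  # high inter-item correlation: the items
                                   # carry little distinct signal of their own
```

Because all items are driven by the same latent factor, their pairwise correlation stays high regardless of the individual loadings, which is exactly the identifiability problem the abstract reports.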
Procedia PDF Downloads 219
1085 Time Estimation of Return to Sports Based on Classification of Health Levels of Anterior Cruciate Ligament Using a Convolutional Neural Network after Reconstruction Surgery
Authors: Zeinab Jafari A., Ali Sharifnezhad B., Mohammad Razi C., Mohammad Haghpanahi D., Arash Maghsoudi
Abstract:
Background and Objective: Sports-related rupture of the anterior cruciate ligament (ACL) and the injuries that follow have been associated with various disorders, such as long-lasting changes in muscle activation patterns in athletes, which may persist after ACL reconstruction (ACLR). The rupture of the ACL might result in abnormal patterns of movement execution, extending the treatment period and delaying athletes' return to sports (RTS). As ACL injury is especially prevalent among athletes, the lengthy treatment process and athletes' absence from sports are of great concern to athletes and coaches. Thus, estimating the safe time of RTS is of crucial importance. Therefore, using a deep neural network (DNN) to classify the health levels of the ACL in injured athletes, this study aimed to estimate the safe time for athletes to return to competition. Methods: Ten athletes with ACLR and fourteen healthy controls participated in this study. Three health levels of the ACL were defined: healthy, six months post-ACLR surgery and nine months post-ACLR surgery. Athletes with ACLR were tested six and nine months after the ACLR surgery. During the course of this study, surface electromyography (sEMG) signals were recorded from five knee muscles, namely Rectus Femoris (RF), Vastus Lateralis (VL), Vastus Medialis (VM), Biceps Femoris (BF) and Semitendinosus (ST), during single-leg drop landing (SLDL) and single-leg forward hopping (SLFH) tasks. The Pseudo-Wigner-Ville distribution (PWVD) was used to produce three-dimensional (3-D) images of the energy distribution patterns of the sEMG signals. These 3-D images were then converted to two-dimensional (2-D) images using the heat mapping technique and fed to a deep convolutional neural network (DCNN). Results: In this study, we estimated the safe time of RTS by designing a DCNN classifier with an accuracy of 90%, which could classify the ACL into the three health levels.
Discussion: The findings of this study demonstrate the potential of the DCNN classification technique using sEMG signals in estimating RTS time, which will assist in evaluating the recovery process of ACLR in athletes.
Keywords: anterior cruciate ligament reconstruction, return to sports, surface electromyography, deep convolutional neural network
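The conversion from the PWVD's 3-D energy surface (time × frequency × energy) to a 2-D image fed to the DCNN can be sketched as a simple min-max normalisation to 8-bit pixel values. The study does not specify its preprocessing beyond "heat mapping", so this is an assumed minimal version.

```python
def energy_to_heatmap(tf_energy):
    """Collapse a time-frequency energy distribution into a 2-D 8-bit
    heat-map image by min-max normalising energy to the 0-255 range."""
    flat = [v for row in tf_energy for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # guard against a flat surface
    return [[round(255 * (v - lo) / span) for v in row] for row in tf_energy]

# Tiny synthetic example: a burst of sEMG energy in the middle cell
tfe = [[0.0, 0.1, 0.0],
       [0.2, 1.0, 0.3],
       [0.0, 0.1, 0.0]]
img = energy_to_heatmap(tfe)  # img[1][1] == 255, corners == 0
```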
Procedia PDF Downloads 77
1084 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock
Authors: Vahid Bairami Rad
Abstract:
Making agricultural fields intelligent allows the temperature, humidity and other variables affecting the growth of agricultural products to be monitored and controlled online from a mobile phone or computer. Smart fields and gardens are among the most effective ways to optimize agricultural equipment and have a direct effect on the growth of plants, agricultural products and farms. Smart farms, built on the Internet of Things (IoT) and artificial intelligence, are the topic of this paper. Agriculture is becoming smarter every day. From large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data, and modern farmers have more tools to collect such data than in previous years. Data on soil chemistry allow people to make informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers optimize irrigation while reducing the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on: almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things is at the center of this great transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors for people's needs. These sensors are embedded in battery-powered IoT devices that can operate for years and have access to low-power, cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now securely manage existing devices at scale.
IoT cloud services also provide a set of application enablement services that can easily be used by developers to build application business logic. These developments have created powerful new applications in the field of the Internet of Things, and these applications can be used in various industries, such as agriculture and building smart farms. But the question is, what makes today's farms truly smart farms? Let us put this question another way: when will the technologies associated with smart farms reach the point where the intelligence they provide exceeds that of experienced, professional farmers?
Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, Arduino Uno
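As a concrete example of the sensor-driven automation described above, a soil-moisture irrigation controller can be sketched as a simple hysteresis loop. The thresholds are illustrative assumptions, since real values depend on soil type and crop.

```python
def irrigation_decision(moisture_pct, low=30.0, high=45.0, valve_open=False):
    """Hysteresis control of an irrigation valve from a soil-moisture
    reading: open below `low`, close above `high`, otherwise hold the
    current state (prevents rapid valve chatter near one threshold)."""
    if moisture_pct < low:
        return True
    if moisture_pct > high:
        return False
    return valve_open

state = False
log = []
for reading in [50.2, 41.0, 28.5, 33.0, 47.8]:  # simulated sensor samples
    state = irrigation_decision(reading, valve_open=state)
    log.append(state)
# log → [False, False, True, True, False]
```

On a real device this loop would run on the microcontroller, with the readings and valve state also reported to an IoT device-management platform.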
Procedia PDF Downloads 55
1083 Using the Structural Equation Model to Explain the Effect of Supervisory Practices on Regulatory Density
Authors: Jill Round
Abstract:
In the economic system, the financial sector plays a crucial role as an intermediary between market participants, other financial institutions and customers. Financial institutions such as banks have to make decisions that satisfy the demands of all participants while keeping abreast of regulatory change. In recent years, progress has been made on frameworks and the development of rules, standards and processes to manage risks in the banking sector. The increasing focus of regulators and policymakers on risk management, corporate governance and the organization's culture is of special interest, as it requires a well-resourced risk controlling function, compliance function and internal audit function. In past years, the relevance of these functions, which make up the so-called Three Lines of Defense, has moved from the backroom to the boardroom. The approach to the model can vary based on organizational characteristics. Due to intense regulatory requirements, organizations operating in the financial sector have more mature models; in less regulated industries there is more cloudiness about which tasks are allocated where. All parties strive to achieve their objectives through the effective management of risks and serve the same stakeholders. Today, the Three Lines of Defense model is used throughout the world. This research looks at trends and emerging issues in the professions of the Three Lines of Defense within the banking sector. The answers are believed to help explain the increasing regulatory requirements for the banking sector: as the number of supervisory practices increases, risk management requirements intensify and demand more regulatory compliance at the same time. Structural Equation Modeling (SEM) is applied, making use of surveys conducted in the research field.
It aims to describe (i) the theoretical model regarding the applicable linear relationships, (ii) the causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous), (iii) the unobservable latent variables and (iv) the measurement errors. The surveys conducted in the research field suggest that the observable variables are caused by various latent variables. The SEM consists of (1) the measurement model and (2) the structural model. There is a detectable correlation in the cause-effect relationship between the performed supervisory practices and the increasing scope of regulation: supervisory practices reinforce regulatory density. In the past, controls were put in place after supervisory practices were conducted or incidents occurred. In further research, it is of interest to examine whether risk management is proactive, reactive to incidents and supervisory practices, or both at the same time.
Keywords: risk management, structural equation model, supervisory practice, three lines of defense
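The structural (inner) model claim above, that supervisory practices reinforce regulatory density, reduces in the single-predictor case to estimating one path coefficient, which ordinary least squares recovers. The coefficient 0.6 and the noise level below are invented for illustration, not estimates from the surveys.

```python
import random

def ols_slope(x, y):
    """Least-squares slope of y on x: the path estimate in a
    one-predictor structural (inner) model."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

rng = random.Random(7)
practices = [rng.gauss(0, 1) for _ in range(1000)]           # exogenous score
density = [0.6 * p + rng.gauss(0, 0.5) for p in practices]   # endogenous score
beta_hat = ols_slope(practices, density)                      # ≈ 0.6
```

A full SEM additionally estimates the measurement loadings linking each survey item to the two latent scores; this sketch shows only the structural path.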
Procedia PDF Downloads 223
1082 Cytokine Profiling in Cultured Endometrial Cells after Hormonal Treatment
Authors: Mark Gavriel, Ariel J. Jaffa, Dan Grisaru, David Elad
Abstract:
The human endometrium-myometrium interface (EMI) is the uterine inner barrier without a separating layer. It is composed of endometrial epithelial cells (EEC) and endometrial stromal cells (ESC) in the endometrium and myometrial smooth muscle cells (MSMC) in the myometrium. The EMI undergoes structural remodeling during the menstrual cycle, which is essential for human reproduction. Recently, we co-cultured a layer-by-layer in vitro model of EEC, ESC and MSMC on a synthetic membrane for mechanobiology experiments. We also treated the model with progesterone and β-estradiol in order to mimic the in vivo receptive uterus. In the present study, we analyzed the cytokine profile in a single layer of EEC from the hormonally treated in vitro model of the EMI. The methodology of this research relies on simple tissue engineering. First, we cultured commercial EEC (RL95-2, ATCC® CRL-1671™) in a 24-well plate. Then, we applied a hormonal stimulation protocol with 17-β-estradiol and progesterone in time-dependent concentrations that mimic the human menstrual cycle. We collected cell supernatant samples for the control, pre-ovulation, ovulation and post-ovulation periods for analysis of the secreted proteins and cytokines. The cytokine profiling was performed using the Proteome Profiler Human XL Cytokine Array Kit (R&D Systems, Inc., USA), which can detect 105 human soluble cytokines. The relative quantification of all the cytokines will be analyzed using xMAP (Luminex). We conducted a fishing expedition with the four Proteome Profiler membranes, processed the images, quantified the spot intensities and normalized these values by the negative control and reference spots on each membrane. Analyses of the relative quantities that changed by more than 5% relative to the kit's control points revealed significant changes in cytokine levels in the inflammation and angiogenesis pathways.
Analysis of tissue-engineered models of the uterine wall will enable deeper investigation of molecular and biomechanical aspects of early reproductive stages (e.g. the window of implantation) or the development of pathologies.
Keywords: tissue engineering, hormonal stimuli, reproduction, multi-layer uterine model, progesterone, β-estradiol, receptive uterine model, fertility
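The spot-processing step described above (normalise by the membrane's negative-control and reference spots, then flag cytokines changing by more than 5% of control) might be sketched as follows. The intensity values and cytokine names are invented placeholders, not measurements from the study.

```python
def normalize_spots(raw, reference, negative):
    """Scale raw spot intensities to the 0-1 range set by a membrane's
    reference and negative-control spots."""
    scale = reference - negative
    return {name: (v - negative) / scale for name, v in raw.items()}

def changed_cytokines(treated, control, threshold=0.05):
    """Names whose normalised level moved by more than `threshold`
    (5% of the control level, as in the screening criterion above)."""
    return {name for name in treated
            if abs(treated[name] - control[name]) / control[name] > threshold}

# Invented intensities; negative control = 60, reference spot = 2060
control = normalize_spots({"IL-6": 860, "VEGF": 660, "IL-8": 1060}, 2060, 60)
ovulation = normalize_spots({"IL-6": 1160, "VEGF": 680, "IL-8": 460}, 2060, 60)
hits = changed_cytokines(ovulation, control)  # → {"IL-6", "IL-8"}
```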
Procedia PDF Downloads 130
1081 Increased Stability of Rubber-Modified Asphalt Mixtures to Swelling, Expansion and Rebound Effect during Post-Compaction
Authors: Fernando Martinez Soto, Gaetano Di Mino
Abstract:
The application of rubber in bituminous mixtures requires attention and care during mixing and compaction. Rubber modifies the properties of the mixture because it reacts within the internal structure of bitumen at high temperatures, changing the performance of the mixture (the interaction process of solvents with the binder-rubber aggregate). The main change is an increase in the viscosity and elasticity of the binder due to the larger rubber particle sizes in the dry process, but this positive effect is counteracted by short mixing times compared to the wet technology, and by the transport processes, curing time and post-compaction of the mixtures. Therefore, negative effects such as swelling of rubber particles, rebounding of the specimens and thermal changes due to differential expansion of the structure inside the mixtures can change the mechanical properties of the rubberized blends. Based on the dry technology, different asphalt-rubber binders using devulcanized or natural rubber (truck and bus tread rubber) served to demonstrate these effects, and how to solve them, in two dense gap-graded rubber-modified asphalt concrete mixes (RUMAC), enhancing the stability, workability and durability of samples compacted by the Superpave gyratory compactor method. This paper describes the procedures developed in the Department of Civil Engineering of the University of Palermo between September 2016 and March 2017 for characterizing the post-compaction and mix stability of one conventional mixture (hot mix asphalt without rubber) and two gap-graded rubberized asphalt mixes, graded for rail sub-ballast layers with a nominal aggregate size of Ø22.4 mm according to the European standard.
Thus, the main purpose of this laboratory research is the application of ambient ground rubber from scrap tires, processed at conventional temperature (20ºC), inside hot bituminous mixtures (160-220ºC) as a substitute for 1.5%, 2% and 3% by weight of the total aggregates (3.2%, 4.2% and 6.2%, respectively, by volume of the limestone aggregates of bulk density 2.81 g/cm³), considered as part of the aggregates and not as part of the asphalt binder. The reference bituminous mixture was designed with 4% binder and ±3% air voids, manufactured with a conventional B50/70 bitumen at 160ºC-145ºC mixing-compaction temperatures to guarantee the workability of the mixes. The rubber proportions proposed are 60-40% for the mixtures with 1.5% and 2% rubber and 20-80% for the mixture with 3% rubber (for example, 60% of Ø0.4-2 mm and 40% of Ø2-4 mm). The temperature of the asphalt cement is 160-180ºC for mixing and 145-160ºC for compaction, according to the optimal viscosity values from Brookfield viscometer measurements and ring-and-ball and penetration tests. These crumb rubber particles act as a rubber aggregate in the mixture, with sizes between 0.4 mm and 2 mm in the first fraction and 2-4 mm in the second. Ambient ground rubber with a specific gravity of 1.154 g/cm³ is used; the rubber is free of loose fabric, wire and other contaminants. Optimal results reducing the swelling effect were found in real beams and cylindrical specimens for each HMA mixture. Different factors affecting the interaction process, such as temperature, rubber particle size, number of compaction cycles and compaction pressure, are explained.
Keywords: crumb rubber, gyratory compactor, rebounding effect, Superpave mix design, swelling, sub-ballast railway
Procedia PDF Downloads 243
1080 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow
Authors: Masood Otarod, Ronald M. Supkowski
Abstract:
This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be used to conduct kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of the CO adsorption step of the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of axial velocity, which is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. But ideal plug flow is impossible to achieve, and flow regimes approximating plug flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem is derived from the observation that a cross section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentrations of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function.
The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and suffer the same variability, but in the reverse order of the concentrations of the mobile-phase compounds. Factorability is a property of packed beds which transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentrations of the mobile-phase compounds and the mean cross-sectional concentrations of the adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa and Ωr, which are respectively denominated the convection coefficient cofactor, the axial dispersion coefficient cofactor and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation as compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa and Ωr are monotonically correlated with the Reynolds number, which is expected based on the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
Keywords: factorization, general dispersion model, isotope transient kinetics, partial differential equations
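For orientation, a generic second-order constant-coefficient form of the kind the reduction produces can be written as below. The exact terms, signs and rate expression of the paper's reduced model are not reproduced here; this is only an illustrative sketch in the cofactor notation.

```latex
% Illustrative sketch only: a constant-coefficient axial balance for the
% mean radial cup-mixing concentration \bar{C}(z), with the cofactors
% \Omega_a and \Omega_c adjusting the dispersion and convection terms.
\Omega_a D \frac{d^{2}\bar{C}}{dz^{2}}
  \;-\; \Omega_c \bar{u}\,\frac{d\bar{C}}{dz}
  \;-\; R\!\left(\bar{C},\bar{q}\right) \;=\; 0
```

Here D is the dispersion coefficient, ū the mean axial velocity, and R a rate term coupling C̄ to the mean cross-sectional adsorbed concentration q̄; all three, like the cofactors, would be fitted to the transient tracer data.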
Procedia PDF Downloads 268
1079 A Doctrinal Research and Review of Hashtag Trademarks
Authors: Hetvi Trivedi
Abstract:
Technological escalation cannot be negated, and the same is true for the benefits of technology. However, such escalation has interfered with the traditional theories of protection under intellectual property rights. Of the many trends that have disrupted the old-school understanding of intellectual property rights, one is hashtags. What began modestly in 2007 has now earned a remarkable status, and coupled with the unprecedented rise of social media, the hashtag culture has witnessed monstrous growth. A tiny symbol on the keypad of phones or computers is now a major trend, one that also serves companies as a critical investment measure in establishing their brand in the market. As a result, one section of intellectual property rights, trademarks, is undergoing a huge transformation, with hashtags like #icebucket, #tbt or #smilewithacoke getting trademark protection. So, as the traditional theories of IP take on the modern trends, it is necessary to understand the change and challenge at a theoretical and proportional level and, where need be, question the change. Traditionally, intellectual property rights serve the societal need for intellectual productions that ensure holistic development as well as cultural, economic, social and technological progress. In a two-pronged effort at ensuring continuity of creativity, IPRs recognize the investment of individual effort that goes into creation by offering protection. Commonly placed under two major theories, Utilitarian and Natural, IPRs aim to accord protection and recognition to an individual's creation or invention, which serves as an incentive for further creations or inventions, thus fully protecting the creative, inventive or commercial labour invested in the same. In return, the creator, by lending the public access to the creation, reaps various benefits. In this way, intellectual property rights form a 'social contract' between the author and society.
IPRs are similarly attached to a social function, whereby individual rights must be weighed against competing rights and, to the farthest limit possible, both sets of rights must be treated in a balanced manner. To put it differently, both society and the creator must be put on an equal footing, with neither party's rights subservient to the other's. A close look, through doctrinal research, at the recent trend of trademark protection makes the social function of IPRs seem to be moving far from this basic philosophy. Thus, where technology interferes with the philosophies of law, it is important to check and allow such growth only in moderation, for neither is superior to the other. Human expansionism may seek to count and protect as intellectual property everything under the sky that can be tweaked slightly, like a common parlance word transformed into a hashtag; however, IP, in order to survive on its philosophies, needs to strike a balance. A unanimous global decision on the judicious use of IPR recognition and protection is the need of the hour.
Keywords: hashtag trademarks, intellectual property, social function, technology
Procedia PDF Downloads 131
1078 Tip-Apex Distance as a Long-Term Risk Factor for Hospital Readmission Following Intramedullary Fixation of Intertrochanteric Fractures
Authors: Brandon Knopp, Matthew Harris
Abstract:
Purpose: Tip-apex distance (TAD) has long been discussed as a metric for determining the risk of failure in the fixation of peritrochanteric fractures. TAD measurements over 25 millimeters (mm) have been associated with higher rates of screw cut-out and other complications in the first several months after surgery. However, there is limited evidence for the efficacy of this measurement in predicting the long-term risk of negative outcomes following hip fixation surgery. The purpose of our study was to investigate risk factors, including TAD, for hospital readmission, loss of pre-injury ambulation and development of complications within one year after hip fixation surgery. Methods: A retrospective review of proximal hip fractures treated with single-screw intramedullary devices between 2016 and 2020 was performed at a 327-bed regional medical center. Patients included had a postoperative follow-up of at least 12 months or surgery-related complications developing within that time. Results: 44 of the 67 patients in this study met the inclusion criteria with adequate follow-up post-surgery. There were 10 males (22.7%) and 34 females (77.3%) meeting the inclusion criteria, with a mean age of 82.1 (±12.3) at the time of surgery. The average TAD in our study population was 19.57 mm and the overall one-year readmission rate was 15.9%. Three out of 6 patients (50%) with a TAD > 25 mm were readmitted within one year due to surgery-related complications. In contrast, 3 out of 38 patients (7.9%) with a TAD < 25 mm were readmitted within one year due to surgery-related complications (p=0.0254). Individual TAD measurements, averaging 22.05 mm in patients readmitted within one year of surgery and 19.18 mm in patients not readmitted, were not significantly different between the two groups (p=0.2113).
Conclusions: Our data indicate a significant improvement in hospital readmission rates up to one year after hip fixation surgery in patients with a TAD < 25 mm, with a decrease in readmissions of over 40% (50% vs 7.9%). This result builds upon past investigations by extending the follow-up time to one year after surgery and utilizing hospital readmission as a metric of surgical success. Given the well-documented physical and financial costs of hospital readmission after hip surgery, our study highlights keeping the TAD below 25 mm as an effective method of improving patient outcomes and reducing financial costs to patients and medical institutions. No relationship was found between TAD measurements and the secondary outcomes, loss of pre-injury ambulation and development of complications.
Keywords: hip fractures, hip reductions, readmission rates, open reduction internal fixation
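The reported p=0.0254 for the 2×2 readmission table (3 of 6 readmitted with TAD > 25 mm vs 3 of 38 with TAD < 25 mm) is consistent with a two-sided Fisher exact test, which can be computed directly from the counts. The choice of Fisher's test is an inference from the small cell counts, as the abstract does not name the statistical test used.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    total = comb(row1 + row2, col1)
    def p_of(k):  # probability of k 'successes' falling in row 1
        return comb(row1, k) * comb(row2, col1 - k) / total
    p_obs = p_of(a)
    return sum(p_of(k)
               for k in range(max(0, col1 - row2), min(col1, row1) + 1)
               if p_of(k) <= p_obs + 1e-12)

# Readmitted / not readmitted within 1 year, by TAD group (from the abstract)
p = fisher_exact_two_sided(3, 3,    # TAD > 25 mm: 3 of 6 readmitted
                           3, 35)   # TAD < 25 mm: 3 of 38 readmitted
# round(p, 4) → 0.0254, matching the reported p-value
```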
Procedia PDF Downloads 144
1077 Equivalences and Contrasts in the Morphological Formation of Echo Words in Two Indo-Aryan Languages: Bengali and Odia
Authors: Subhanan Mandal, Bidisha Hore
Abstract:
The linguistic process whereby all or part of a base word is repeated, with or without internal change, before or after the base itself, is known as reduplication. The reduplicated morphological construction carries a new grammatical category and meaning. Reduplication is a very frequent and abundant phenomenon in the eastern Indian languages of West Bengal and Odisha, i.e. Bengali and Odia respectively. Bengali, an Indo-Aryan language of the Indo-European family, is one of the most widely spoken languages in India and the national language of Bangladesh. Despite this classification, Bengali shows certain influences in vocabulary and grammar due to its geographical proximity to Tibeto-Burman- and Austro-Asiatic-speaking communities. Bengali and Odia once belonged to a single linguistic branch; with time and gradual linguistic change driven by various factors, Odia was the first to break away and develop as a separate, distinct language. However, fewer contrasts and more similarities still exist between these languages along linguistic lines, leaving aside the script. This paper deals with the procedure of echo word formation in Bengali and Odia. Morphological research on the two languages in the field of reduplication reveals several linguistic processes. The findings are based on information elicited from native speakers and on the analysis of echo words found in discourse and conversational patterns. For the analysis of partial reduplication, prefixed-class and suffixed-class word formations are considered, which show specific rule-based changes. For example, in the suffixed-class categorization, both consonant and vowel alterations are found, following the rules: i) CVX → tVX, ii) CVCV → CVCi.
Further classifications were also found in sentential studies of both languages, which revealed complete-reduplication complexities in forming echo words where the head word loses its original meaning, as well as complexities based on onomatopoetic (phonetic) imitation of natural phenomena that follow no rule-based pattern. Taking these aspects, prevalent in both languages, into consideration, inferences are drawn from the study which bring out many similarities between the two languages in this area, in spite of their branching away from each other many years ago.
Keywords: consonant alteration, onomatopoetic, partial reduplication and complete reduplication, reduplication, vowel alteration
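Rule (i) above (CVX → tVX) can be sketched as a toy echo-word generator. The choice of 't' as the replacement onset and the two examples reflect the common Bengali pattern, but this is an illustration, not a complete account of either language.

```python
def echo_word(base, replacement="t"):
    """Form an echo word by swapping the base-initial consonant cluster
    for `replacement` (the CVX -> tVX rule) and suffixing the result."""
    vowels = "aeiou"
    i = 0
    while i < len(base) and base[i] not in vowels:
        i += 1  # skip the initial consonant cluster
    return f"{base}-{replacement}{base[i:]}"

print(echo_word("boi"))   # → boi-toi  ('books and such')
print(echo_word("jama"))  # → jama-tama  ('clothes and the like')
```

Rule (ii), the CVCV → CVCi vowel alteration, would need a second transformation operating on the final vowel rather than the initial consonant.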
Procedia PDF Downloads 240
1076 Records of Lepidopteran Borers (Lepidoptera) on Stored Seeds of Indian Himalayan Conifers
Authors: Pawan Kumar, Pitamber Singh Negi
Abstract:
Many regeneration failures in conifers are often attributed to heavy insect attack and pathogens during the period of seed formation and under storage conditions. Conifer berry and seed insects occur throughout the known range of their hosts and limit the production of seed for nursery stock; on occasion, entire seed crops are lost to insect attack. The berries and seeds of both species studied here have been found to be infested with insects. Recently, heavy damage to the berries and seeds of Juniper and Chilgoza Pine was observed in the field as well as under storage conditions, reducing the viability of the seeds. Both species are under great threat, and regeneration is very low. Due to the lack of adequate literature, a study of the damage potential of seed insects was urgently required to establish the exact status of the insect pests attacking the seeds and berries of both pine species, so as to develop pest management practices against them. As both species are fighting for survival, the study is important for developing management practices for the insect pests of the seeds and berries of Juniper and Chilgoza pine and for evaluation in the nursery, as these species form the major vegetation of their distribution zones. A six-year study on the management of insect pests of Chilgoza seeds revealed that seeds of this species are prone to insect pests, mainly borers. During the present investigation, it was recorded that cones are heavily attacked in natural conditions only by Dioryctria abietella (Lepidoptera: Pyralidae), but the seeds, which are economically important, are heavily infested (sometimes up to 100% damage was recorded) by the borer Plodia interpunctella (Lepidoptera: Pyralidae), which, to the authors' best knowledge, is recorded here for the first time infesting stored Chilgoza seeds.
Similarly, Juniper berries and seeds were heavily attacked by a single borer, Homaloxestis cholopis (Lepidoptera: Lecithoceridae), recorded as a new report both in the natural habitat and under storage conditions. During the present investigation, details of insect-pest attack on Juniper and Chilgoza pine seeds and berries were recorded, and suitable management practices were developed to contain the attacks.
Keywords: borer, Chilgoza pine, cones, conifer, Lepidoptera, juniper, management, seed
Procedia PDF Downloads 146
1075 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs; some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges in uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the task is to refine knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, using the responses (outputs) of a given computational model. We approach the problem with two methodologies: in the first, we use sampling-based uncertainty propagation with first order error analysis; in the other, we place the emphasis on Percentile-Based Optimization (PBO). The NASA Langley MUQC subproblem A is constructed so that both aleatory and epistemic uncertainties must be managed. The challenge problem classifies each uncertain parameter as one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that may be aleatory, but for which sufficient data are not available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals.
This results in a distributional p-box: the physical parameter has an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainty. Each parameter of the random variable is an unknown element of a known interval, and this uncertainty is reducible. The study shows that, owing to practical limitations and computational expense, sampling in the sampling-based methodology is not exhaustive; the sampling-based methodology therefore has a high probability of underestimating the output bounds. An optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is thus necessary, and it is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
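The distributional p-box described above can be illustrated with a minimal double-loop Monte Carlo sketch (not the authors' implementation; the model function, interval bounds, and sample sizes below are hypothetical): the outer loop samples the epistemic, interval-valued mean, the inner loop propagates the aleatory normal variable, and the spread of the resulting quantiles is what a sampling-based method tries to bound.

```python
import random

def response(x):
    # Hypothetical stand-in for the challenge's black-box model.
    return x ** 2 + 1.0

def quantile_bounds(mu_lo, mu_hi, sigma, q=0.95,
                    n_outer=100, n_inner=1000, seed=0):
    """Double-loop Monte Carlo over a distributional p-box:
    the outer loop samples the epistemic (interval-valued) mean,
    the inner loop samples the aleatory normal variable."""
    rng = random.Random(seed)
    lo, hi = float("inf"), float("-inf")
    for _ in range(n_outer):
        mu = rng.uniform(mu_lo, mu_hi)              # epistemic realisation
        ys = sorted(response(rng.gauss(mu, sigma))  # aleatory propagation
                    for _ in range(n_inner))
        qv = ys[int(q * n_inner) - 1]               # empirical q-quantile
        lo, hi = min(lo, qv), max(hi, qv)
    return lo, hi                                   # bounds on the q-quantile

lo, hi = quantile_bounds(-1.0, 1.0, 0.5)
```

Because the outer loop is finite, the returned interval can only shrink relative to the true bounds, which is exactly the underestimation risk the abstract notes as motivation for PBO.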
Procedia PDF Downloads 238
1074 Monolithic Integrated GaN Resonant Tunneling Diode Pair with Picosecond Switching Time for High-Speed Multiple-Valued Logic Systems
Authors: Fang Liu, JiaJia Yao, GuanLin Wu, ZuMaoLi, XueYan Yang, HePeng Zhang, ZhiPeng Sun, JunShuai Xue
Abstract:
The explosively increasing needs of data processing and information storage strongly drive the advancement from the binary logic system to multiple-valued logic systems. An inherent negative differential resistance characteristic, ultra-high-speed switching, and robust anti-irradiation capability make the III-nitride resonant tunneling diode one of the most promising candidates for multiple-valued logic devices. Here we report the monolithic integration of GaN resonant tunneling diodes in series to realize multiple negative differential resistance regions, obtaining at least three stable operating states. A multiply-by-three circuit is achieved by this combination, increasing the frequency of an input triangular wave from f0 to 3f0. The resonant tunneling diodes are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates and comprise double barriers and a single quantum well, both controlled at the atomic level. A device with a peak current density of 183 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed, which is the best result reported for nitride-based resonant tunneling diodes. Microwave oscillation at room temperature was observed with a fundamental frequency of 0.31 GHz and an output power of 5.37 μW, verifying the high repeatability and robustness of our devices. Switching behavior measurements were successfully carried out, showing rise and fall times on the order of picoseconds, suitable for high-speed digital circuits. Limited by the measuring equipment and the layer structure, the switching time can be improved further.
In general, this article presents a novel nitride device with multiple negative differential resistance regions driven by the resonant tunneling mechanism, which can be used in the high-speed multiple-valued logic field with reduced circuit complexity, demonstrating a new route for nitride devices to break through the limitations of binary logic.
Keywords: GaN resonant tunneling diode, negative differential resistance, multiple-valued logic system, switching time, peak-to-valley current ratio
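The multiply-by-three behaviour can be sketched at the signal level (a conceptual illustration only, not a device model; the folded transfer curve below merely stands in for a characteristic with multiple negative differential resistance folds): driving a transfer characteristic with three folds over the input swing by a triangular wave at f0 yields an output at 3f0.

```python
def tri(u, period=2.0):
    """Unit triangle wave: ramps 0 -> 1 -> 0 over one period."""
    u = u % period
    return 1.0 - abs(u - period / 2.0) / (period / 2.0)

N = 120                                            # samples per input period
inp = [3.0 * tri(2.0 * i / N) for i in range(N)]   # triangular input at f0, swing 0..3
out = [tri(x) for x in inp]                        # folded transfer curve -> 3*f0
```

Because the transfer curve folds three times over the 0..3 input swing, the output repeats every N // 3 samples, i.e. at three times the input frequency, which is the frequency-multiplication effect the series-connected diode pair exploits.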
Procedia PDF Downloads 99
1073 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics
Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic
Abstract:
Lake Victoria is the second largest fresh water body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a Saint-Venant shallow water model of Lake Victoria developed in COMSOL Multiphysics, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with more recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, so Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Measured data on precipitation, evaporation, inflows, and outflow were applied in a fifty-year simulation. The water balance is dominated by rain and evaporation, and the model simulations were cross-validated between MATLAB and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except in a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The model can also evaluate the effect of the wind stress exerted on the lake surface, which influences the lake water level.
The model can further evaluate the effects of expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress
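The linear outflow control law noted above can be sketched in a toy annual water balance (the lake area matches the 68,800 km² quoted in the abstract, but all rates and the gain k below are hypothetical, chosen only so the level settles to a steady state):

```python
def simulate_level(years=50, area=6.88e10, k=3.0e10,
                   rain=3.6e10, evap=3.5e10, inflow=2.0e9, h0=0.0):
    """Toy annual water balance with the single outflow modelled by a
    linear control law Q = k * (h - h0) responding only to mean level h.
    Rates are in m^3/yr, area in m^2, level h in metres (all hypothetical)."""
    h, levels = h0, []
    for _ in range(years):
        q_out = max(0.0, k * (h - h0))              # linear outflow control law
        h += (rain + inflow - evap - q_out) / area  # explicit level update
        levels.append(h)
    return levels

levels = simulate_level()
```

With these numbers the level relaxes to the equilibrium (rain + inflow - evap) / k = 0.1 m above the reference, illustrating how a single feedback gain can reproduce a stable mean water level in a rain/evaporation-dominated balance.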
Procedia PDF Downloads 225
1072 Mechanical Properties of Carbon Fibre Reinforced Thermoplastic Composites Consisting of Recycled Carbon Fibres and Polyamide 6 Fibres
Authors: Mir Mohammad Badrul Hasan, Anwar Abdkader, Chokri Cherif
Abstract:
With the increasing demand for and use of carbon fibre reinforced composites (CFRC), the disposal of carbon fibres (CF) and end-of-life composite parts is gaining tremendous importance, especially with regard to sustainability. A number of processes (e.g., pyrolysis, solvolysis) are currently available to obtain recycled CF (rCF) from end-of-life CFRC. Since CF waste and rCF may neither be thermally degraded nor landfilled (EU Directive 1999/31/EC), profitable recycling and re-use concepts are urgently necessary. Currently, the market for materials based on rCF consists mainly of random mats (nonwovens) made from short fibres. The composite strengths achievable with injection-moulded components and with nonwovens lie between 200 and 404 MPa; such materials are characterized by low performance and are suitable for non-structural applications such as aircraft and vehicle interiors. In contrast, spinning rCF into yarn constructions offers good potential for higher CFRC material properties due to the high fibre orientation and compaction of the rCF. However, no investigation has yet been reported that directly compares the mechanical properties of thermoplastic CFRCs manufactured from virgin CF filament yarn and from spun yarns of staple rCF, and the level of composite performance achievable from hybrid yarns consisting of rCF and PA 6 fibres is not well understood. Against this backdrop, extensive research is being carried out at the Institute of Textile Machinery and High Performance Material Technology (ITM) on the development of new thermoplastic CFRCs from hybrid yarns consisting of rCF. For this purpose, a process chain has been developed at the ITM, from fibre preparation to the manufacture of hybrid yarns of staple rCF blended with thermoplastic fibres. The objective is to apply such hybrid yarns to the manufacture of load-bearing, textile-reinforced thermoplastic CFRCs.
In this paper, the development of innovative multi-component core-sheath hybrid yarn structures consisting of staple rCF and polyamide 6 (PA 6) on a DREF-3000 friction spinning machine is reported. Furthermore, unidirectional (UD) CFRCs are manufactured from the developed hybrid yarns, and the mechanical properties of the composites, such as tensile and flexural properties, are analyzed. The results show that the UD composites manufactured from the developed hybrid yarns of staple rCF possess approximately 80% of the tensile strength and elastic modulus of those produced from virgin CF filament yarn. The results demonstrate the considerable potential of the DREF-3000 friction spinning process for developing composites from rCF for high-performance applications.
Keywords: recycled carbon fibres, hybrid yarn, friction spinning, thermoplastic composite
Procedia PDF Downloads 253
1071 Effect of Automatic Self Transcending Meditation on Perceived Stress and Sleep Quality in Adults
Authors: Divya Kanchibhotla, Shashank Kulkarni, Shweta Singh
Abstract:
Chronic stress and poor sleep quality impair mental health and increase the risk of developing depression and anxiety. There is increasing evidence for the utility of meditation as an adjunct clinical intervention for conditions like depression and anxiety. The present study explores the impact of Sahaj Samadhi Meditation (SSM), a category of Automatic Self Transcending Meditation (ASTM), on perceived stress and sleep quality in adults. The study design was a single-group pre-post assessment. The Perceived Stress Scale (PSS) and the Pittsburgh Sleep Quality Index (PSQI) were used. Fifty-two participants completed the PSS and 60 participants completed the PSQI at the beginning of the program (day 0), after two weeks (day 16), and at two months (day 60). Significant pre-post differences in perceived stress between day 0 and day 16 (p < 0.01; Cohen's d = 0.46) and between day 0 and day 60 (p < 0.01; Cohen's d = 0.76) clearly demonstrated that by practicing SSM, participants experienced a reduction in perceived stress. The effect size observed on day 16 was small to medium, while on day 60 a medium to large effect size was observed. In addition, significant pre-post differences in sleep quality between day 0 and day 16 and between day 0 and day 60 (p < 0.05) clearly demonstrated that by practicing SSM, participants experienced improvement in sleep quality; compared with the day 0 assessment, participants showed significant improvement on day 16 and day 60. The effect size observed on day 16 was small, while on day 60 a small to medium effect size was observed.
In the current study, we found that after practicing SSM for two months, participants reported a reduction in perceived stress: they felt more confident about their ability to handle personal problems, were able to cope with all the things they had to do, felt on top of things, and felt less angered. Participants also reported that their overall sleep quality improved: they took less time to fall asleep and had fewer sleep disturbances and less daytime dysfunction due to sleep deprivation. The present study provides clear evidence of the efficacy and safety of non-pharmacological interventions such as SSM in reducing stress and improving sleep quality. Thus, ASTM may be considered a useful intervention to reduce psychological distress in healthy, non-clinical populations, and it can be an alternative remedy for treating poor sleep and decreasing the use of harmful sedatives.
Keywords: automatic self transcending meditation, Sahaj Samadhi meditation, sleep, stress
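The Cohen's d effect sizes quoted above are simple to compute; a sketch assuming the common pooled-standard-deviation convention (the abstract does not state which convention the authors used, and the scores below are invented, not the study's data):

```python
import math

def cohens_d(pre, post):
    """Cohen's d as the mean difference divided by the pooled standard
    deviation (one common convention for pre/post comparisons)."""
    n1, n2 = len(pre), len(post)
    m1, m2 = sum(pre) / n1, sum(post) / n2
    v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)    # sample variances
    v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Hypothetical PSS scores before and after the intervention:
d = cohens_d([20, 22, 24, 26, 28], [18, 20, 22, 24, 26])
```

By the usual rule of thumb, d around 0.2 is small, 0.5 medium, and 0.8 large, which is how the abstract's values of 0.46 and 0.76 map to "small to medium" and "medium to large".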
Procedia PDF Downloads 133
1070 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms
Authors: Dimitrios Kafetzopoulos
Abstract:
Nowadays, companies are increasingly concerned with adopting their own strategies for greater efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes, and organizations need to implement an advanced management philosophy that fosters changes to their operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization's competitiveness. Thus, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating an appropriate culture for change in products and processes helps companies gain a sustainable competitive advantage in the market. The purpose of this study is therefore to investigate the role of both incremental and radical changes in a company's operations, taking into consideration not only product changes but also process changes, and to measure the impact of these two types of change on the business efficiency and sustainability of Greek manufacturing companies. This discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. To achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all questionnaire items of the constructs (radical changes, incremental changes, efficiency, and sustainability).
The constructs of radical and incremental operational change, each treated as one variable, were subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normality, and outliers were checked, and the unidimensionality, reliability, and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. To test the research hypotheses, the SEM technique was applied (maximum likelihood method); the goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the findings, radical operational changes and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, radical operational changes, in both process and product, are the most significant contributors to firm efficiency, while their influence on sustainability is low, albeit statistically significant. Conversely, incremental operational changes influence sustainability more than firm efficiency. It is thus apparent that embodying change in a firm's products and processes has direct and positive consequences for both efficiency and sustainability.
Keywords: incremental operational changes, radical operational changes, efficiency, sustainability
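As one small illustration of the reliability checks mentioned above (the study itself worked within an SEM framework using dedicated software; Cronbach's alpha is shown here only as the standard internal-consistency formula, computed on invented 7-point Likert responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k questionnaire items answered by n respondents:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1.0 - sum(var(it) for it in items) / var(totals))

# Hypothetical Likert responses: 2 items, 5 respondents.
alpha = cronbach_alpha([[1, 2, 3, 4, 5], [1, 3, 2, 5, 4]])
```

Values close to 1 indicate that the items of a construct move together; perfectly duplicated items give alpha = 1 exactly, while uncorrelated items drive it toward 0.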
Procedia PDF Downloads 134
1069 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution
Authors: Dayane de Almeida
Abstract:
This work presents a study demonstrating the usefulness of categories of analysis from Discourse Semiotics (also known as Greimassian Semiotics) in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the 'grammar' of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of 'not on demand' written samples (texts differing in degree of formality, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows:
- The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the 'surface' of texts. If language is both expression and content, content would also have to be considered for more accurate results. Style is present in both planes.
- Semiotics postulates that the content plane is structured in a 'grammar' that underlies expression and that presents different levels of abstraction. This 'grammar' would be a style marker.
- Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors?
- The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance of the author choosing the same thing. If two authors recurrently choose the same options, differently from one another, each one's option has discriminatory power.
- Size is another issue for various attribution methods.
Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail; the analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Similarities and differences were then quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results confirmed the hypothesis; hence, the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
Keywords: authorship attribution, content plane, forensic linguistics, Greimassian semiotics, intra-speaker variation, style
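The Jaccard coefficient used for the quantitative comparison can be sketched as follows (the tag names below are invented placeholders, not the study's actual semiotic categories):

```python
def jaccard(tags_a, tags_b):
    """Jaccard coefficient |A intersect B| / |A union B| between two
    samples' tag sets: 0 means disjoint, 1 means identical."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical tag sets extracted from three text samples:
author1a = ["conjunction", "euphoria", "thematic-role"]
author1b = ["conjunction", "euphoria", "aspectualisation"]
author2  = ["disjunction", "dysphoria", "temporalisation"]

same_author  = jaccard(author1a, author1b)
cross_author = jaccard(author1a, author2)
```

Under the study's hypothesis, within-author pairs (such as Author1A vs. Author1B) should score consistently higher than cross-author pairs.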
Procedia PDF Downloads 240
1068 Identification of Igneous Intrusions in South Zallah Trough-Sirt Basin
Authors: Mohamed A. Saleem
Abstract:
Using mostly seismic data, this study presents examples of igneous intrusions found in parts of the Sirt Basin and explores the period of their emplacement as well as the interrelationships between these sills. The study area is located in the south of the Zallah Trough, south-west Sirt Basin, Libya, between longitudes 18.35°E and 19.35°E and latitudes 27.8°N and 28.0°N. Based on a variety of criteria usually used as marks of igneous intrusions, twelve intrusions (sills) have been detected and analysed using 3D seismic data. One or more of the following identification criteria were used: high-amplitude reflectors paired with abrupt reflector terminations, vertical offsets (or what is described as a dike-like connection), violation of stratigraphy, saucer form, and roughness. Because they lie between the hosting layers, the majority of these intrusions are classified as sills. Another distinguishing feature is the intersection geometry linking some of these sills. Each sill has been given a name, such as S-1, S-2, ..., S-12, simply to distinguish the sills from one another. To avoid repetitive description, the common characteristics and some statistics of these sills are shown in summary tables, while the specific characters noticed for each sill are described individually. Sills S-1, S-2, and S-3 are approximately parallel to one another, their shape being governed by the syncline structure of their host layers. The faults that dominate the pre-Upper Cretaceous strata have a significant impact on the sills, causing their discontinuity, while the upper layers have the shape of anticlines. S-1 and S-10 are the group's deepest and shallowest sills, respectively, with S-1 seated near the top of the basement and S-10 extending into the Upper Cretaceous sequence.
The dramatic escalation of sill S-4 can be seen in N-S profiles. The majority of the interpreted sills are influenced by a large number of normal faults that strike in various directions and propagate vertically from the surface to the top of the basement. This indicates that the sediment sequences had already been deposited before the sills intruded, and that the faults occurred more recently. The pre-Upper Cretaceous unit hosts sills S-1, S-2, ..., S-9, while sills S-10, S-11, and S-12 are hosted by the Cretaceous unit. Above sills S-1, S-2, and S-3, the deepest sills, the pre-Upper Cretaceous surface shows slight forced folding; this forced folding is also noticed above the right and left tips of sills S-8 and S-6, respectively. The absence of such marks in the overlying sequences supports the idea that the aforementioned sills were emplaced during the early Upper Cretaceous period.
Keywords: Sirt Basin, Zallah Trough, igneous intrusions, seismic data
Procedia PDF Downloads 112
1067 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era is interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. 'The Author as Producer' is an essay that Benjamin read in 1934 at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectic between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and thus directly related to the base. Through it, he discusses what it could mean to see the author as a producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production; he must do so in order to prepare his readers to become writers, and even make this possible for them by engineering an 'improved apparatus', working toward turning consumers into producers and collaborators. In today's world, it has become a leading business model within the Web 2.0 services of multinational Internet technology and culture industries such as Amazon, Apple, and Google to transform readers, spectators, consumers, or users into collaborators and co-producers through platforms such as Facebook, YouTube, and Amazon's CreateSpace and Kindle Direct Publishing print-on-demand, e-book, and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global market monopolies, it has become increasingly difficult to gain insight into how one's writing and collaboration are used, captured, and capitalized as a user of Facebook or Google.
Through the lens of this study, it could be argued that this criticism could very well be taken up by digital producers, and even by the mass of collaborators in contemporary social networking software. How do software and design incorporate users and their collaboration? Are users truly empowered? Are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means used against the producers? Thus, when using corporate systems like Google and Facebook, the iPhone and the Kindle, without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the 'authors' and the 'prodUsers' in ways that secure their monopolistic business models by limiting the potential of the technology.
Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 142