Search results for: real utopias
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5283

663 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Modern websites host complex applications, yet few methodologies exist to analyze how users navigate them and to determine whether a site is being used as intended. Web logs are typically consulted only after a major attack or malfunction, even though they record a wealth of information about user activity in the system. Analyzing them has become challenging because of the sheer log volume, and finding interesting patterns is difficult given the size and distribution of the data and the importance of minor details in each entry. As a result, valuable data about users and the site goes unused. Retrieving this information reveals what users need, allows users to be grouped according to their various needs, and supports improvements that make the site more effective and efficient. The model we built also supports anomaly detection, identifying attacks and malfunctions of the system. Logs become more complex as traffic volume and the size and complexity of the website grow. The solution is fully automated and relies on unsupervised techniques; expert knowledge is used only for validation. In our approach, the logs are first cleaned and brought to a common platform with a standard format and structure. The cleaning module is followed by a web session builder, which outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of accessed URLs and their indices, while the Web Sessions file lists the URL indices that make up each session. DBSCAN and EM algorithms are then applied iteratively and recursively to obtain the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance and the silhouette coefficient as evaluation measures, the algorithms self-evaluate and feed better parameter values into the next run. If a cluster is found to be too large, micro-clustering is applied. A Cluster Signature Module then annotates each cluster with a unique signature, or fingerprint: each cluster is fed to an Associative Rule Learning Module, and an access sequence with confidence and support equal to 1 is treated as a potential signature. The occurrences of that access sequence are then checked in the other clusters, and if it is unique to the cluster under consideration, the cluster is annotated with it. These signatures support anomaly detection, prevention of cyber attacks, real-time dashboards that visualize users accessing web pages, prediction of user actions, and various other applications for finance, university, and news and media websites.
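
To make the clustering-and-validation step concrete, the sketch below shows one slice of the approach described above: web sessions represented as vectors of indexed-URL counts, clustered with DBSCAN, with the silhouette coefficient used to pick the better parameter setting. It is a minimal illustration, not the authors' code; the EM stage, micro-clustering and the signature module are omitted, and the session data and parameter grid are invented for the example.

```python
# Minimal sketch (not the authors' implementation) of DBSCAN clustering of web
# sessions with silhouette-based self-evaluation of the parameter choice.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

def cluster_sessions(session_vectors, eps_grid=(0.5, 1.0, 2.0), min_samples=5):
    """Try several DBSCAN radii and keep the clustering with the best silhouette."""
    best = (None, -1.0)
    for eps in eps_grid:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(session_vectors)
        # silhouette needs at least 2 clusters (noise points are labelled -1)
        if len(set(labels) - {-1}) < 2:
            continue
        score = silhouette_score(session_vectors, labels)
        if score > best[1]:
            best = (labels, score)
    return best  # (labels, silhouette) of the preferred parameter setting

# Example: 6 sessions described by counts of 4 indexed URLs (toy data)
sessions = np.array([[3, 0, 1, 0], [2, 0, 1, 0], [0, 4, 0, 1],
                     [0, 5, 0, 2], [3, 1, 1, 0], [0, 4, 1, 1]], dtype=float)
labels, score = cluster_sessions(sessions, min_samples=2)
print(labels, round(score, 3))
```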

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 288
662 The Essential but Uncertain Role of the Vietnamese Association of Cities of Vietnam in Promoting Community-Based Housing Upgrading

Authors: T. Nguyen, H. Rennie, S. Vallance, M. Mackay

Abstract:

Municipal associations, also called unions, leagues or federations of municipalities, have been established worldwide to represent the interests and needs of urban governments in the face of increasing urban issues. In 2008, the Association of Cities of Vietnam (ACVN) joined the Asian Coalition of Community Action Program (ACCA program) and introduced the community-based upgrading approach to help Vietnamese cities address urban upgrading issues. While this community-based upgrading approach has been implemented in only a small number of Vietnamese cities and its replication has faced certain challenges, it is worthwhile to explore how the Association of Cities of Vietnam played its role in implementing some reportedly successful projects. This paper responds to this inquiry and presents results extracted from the author's PhD study, whose general objective is to critically examine how social capital dimensions (i.e., bonding, bridging and linking) were formed, mobilized and maintained in a local collective and community-based upgrading process. Methodologically, the study used the general categorization of bonding, bridging and linking capital to explore and confirm how social capital operated in the real context of a community-based upgrading process, particularly in Vietnam. To do this, the study conducted two exploratory, qualitative case studies of housing projects in the Friendship neighbourhood (Vinh city) and the Binh Dong neighbourhood (Tan An city). This paper presents the findings of the Friendship neighbourhood case study, focusing on the role of the Vietnamese municipal association in forming, mobilizing and maintaining bonding, bridging and linking capital for a community-based upgrading process. The findings highlight the essential but uncertain role of ACVN - an organization with a hybrid legitimacy status - in such a process. The results improve our understanding both practically and theoretically. Practically, they offer insights into the performance of a municipal association operating in the transitioning socio-political context of Vietnam. Theoretically, the paper questions the necessity of categorizing social capital dimensions (i.e., bonding, bridging and linking) by suggesting a holistic approach to social capital for urban governance issues within the Vietnamese context and perhaps elsewhere.

Keywords: bonding capital, bridging capital, municipal association, linking capital, social capital, housing upgrading

Procedia PDF Downloads 149
661 The Impact of Undisturbed Flow Speed on the Correlation of Aerodynamic Coefficients as a Function of the Angle of Attack for the Gyroplane Body

Authors: Zbigniew Czyz, Krzysztof Skiba, Miroslaw Wendeker

Abstract:

This paper discusses the results of an aerodynamic investigation of the Tajfun gyroplane body designed by the Polish company Aviation Artur Trendak. The gyroplane was studied as a 1:8 scale model. Scaling objects for aerodynamic investigation is an inherent part of any design process, and when scaling, the criteria of similarity need to be satisfied. The basic criteria of similarity are geometric, kinematic and dynamic. Although the results of aerodynamic research are often reduced to aerodynamic coefficients, attention should be paid to how the values of these coefficients behave when certain criteria are to be satisfied. To satisfy the dynamic criterion, for example, the Reynolds number, the ratio of inertial to viscous forces, must be considered. Since its numerator is the product of the flow speed and the specific dimension (with a constant kinematic viscosity coefficient), the flow speed in a wind tunnel test should be increased by the same factor by which the object is scaled down. The aerodynamic coefficients reported in this research depend on the real forces acting on the object, its specific dimension, the flow speed and variations in air density. Rapid prototyping with a 3D printer was used to create the research object. The research was performed in the T-1 low-speed wind tunnel (measurement-section diameter 1.5 m) with a six-component internal aerodynamic balance, WDP1, at the Institute of Aviation in Warsaw. The T-1 is a continuous-operation, low-speed tunnel with an open test section. The research covered selected speeds of undisturbed flow, V = 20, 30 and 40 m/s, corresponding to Reynolds numbers (referred to 1 m) Re = 1.31∙10⁶, 1.96∙10⁶ and 2.62∙10⁶, for angles of attack in the range -15° ≤ α ≤ 20°. The research yielded basic aerodynamic characteristics and revealed the impact of the undisturbed flow speed on the correlation of aerodynamic coefficients as a function of the angle of attack of the gyroplane body. When the speed of undisturbed flow in the wind tunnel changes, the aerodynamic coefficients are significantly affected. Between 20 m/s and 30 m/s, the drag coefficient, Cx, changes by 2.4% up to 9.9%, whereas the lift coefficient, Cz, changes by -25.5% up to 15.7% if the angle of attack of 0° is excluded, or by -25.5% up to 236.9% if it is included. Within the same speed range, the pitching moment coefficient, Cmy, changes by -21.1% up to 7.3% if the angles of attack of -15° and -10° are excluded, or by -142.8% up to 618.4% if they are included. These discrepancies in the aerodynamic force coefficients definitely need to be considered while designing the aircraft; for example, if the load on certain aircraft surfaces is calculated, additional correction factors need to be applied. This study allows us to estimate the discrepancies in the aerodynamic forces that arise when scaling the aircraft. This work has been financed by the Polish Ministry of Science and Higher Education.
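
The short sketch below illustrates the quantities quoted in the abstract: the Reynolds number for the three tunnel speeds and the standard definitions of the force and moment coefficients obtained from balance readings. The air density, kinematic viscosity and reference area are assumed typical values, not the authors' measured data.

```python
# Illustrative only (not the authors' data pipeline): Reynolds number for the
# quoted tunnel speeds and the standard non-dimensional force/moment coefficients.
RHO = 1.225        # air density, kg/m^3 (sea-level standard, assumed)
NU = 1.53e-5       # kinematic viscosity of air, m^2/s (assumed)
L_REF = 1.0        # reference length used in the abstract, m

def reynolds(v, l=L_REF, nu=NU):
    return v * l / nu

def force_coefficient(force, v, s_ref):
    """C = F / (0.5 * rho * V^2 * S_ref); the same form is used for Cx and Cz."""
    q = 0.5 * RHO * v**2          # dynamic pressure, Pa
    return force / (q * s_ref)

def moment_coefficient(moment, v, s_ref, l_ref=L_REF):
    """Cmy = M / (0.5 * rho * V^2 * S_ref * l_ref)."""
    return moment / (0.5 * RHO * v**2 * s_ref * l_ref)

for v in (20.0, 30.0, 40.0):
    print(f"V = {v:4.1f} m/s  ->  Re = {reynolds(v):.2e}")
# Re comes out near 1.31e6, 1.96e6 and 2.62e6, matching the values in the text.
```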

Keywords: aerodynamics, criteria of similarity, gyroplane, research tunnel

Procedia PDF Downloads 396
660 The Role of People and Data in Complex Spatial-Related Long-Term Decisions: A Case Study of Capital Project Management Groups

Authors: Peter Boyes, Sarah Sharples, Paul Tennent, Gary Priestnall, Jeremy Morley

Abstract:

Significant long-term investment projects can involve complex decisions. These are often described as capital projects, and the factors that contribute to their complexity include budgets, motivating reasons for investment, stakeholder involvement, interdependent projects, and the delivery phases required. The complexity of these projects often requires management groups to be established involving stakeholder representatives; these teams are inherently multidisciplinary. This study uses two university campus capital projects as case studies for this type of management group. Because the projects interact with wider campus infrastructure and users, decisions are made at varying spatial granularity throughout the project lifespan. This spatial-related context brings complexity to the group decisions. Sensemaking is the process used to achieve group situational awareness of a complex situation, enabling the team to arrive at a consensus and make a decision. The purpose of this study is to understand the role of people and data in these complex spatial-related long-term decision and sensemaking processes, and the paper aims to identify and present issues experienced in practical settings of this type of decision. A series of exploratory semi-structured interviews with members of the two projects elicited an understanding of their operation. From two stages of thematic analysis, inductive and deductive, emergent themes were identified around group structure, data usage, and decision making within these groups. When data were made available to the group, there were commonly issues with the perceived veracity and validity of the data presented; this impacted the ability of the group to reach consensus and, therefore, for decisions to be made. Similarly, there were different responses to forecasted or modelled data, shaped by the experience and occupation of the individuals within the multidisciplinary management group. This paper provides an understanding of the further support required for team sensemaking and decision making in complex capital projects. It also discusses the barriers to effective decision making found in this setting and suggests opportunities to develop decision support systems for this team strategic decision-making process. Recommendations are made for further research into the sensemaking and decision-making processes of this complex spatial-related setting.

Keywords: decision making, decisions under uncertainty, real decisions, sensemaking, spatial, team decision making

Procedia PDF Downloads 132
659 Visual Aid and Imagery Ramification on Decision Making: An Exploratory Study Applicable in Emergency Situations

Authors: Priyanka Bharti

Abstract:

Decades ago, designs were based on common sense and tradition, but advances in visualization technology and research now allow us to understand the cognitive processes involved in decoding visual information. However, many aspects of visuals still need intense research before the underlying processes can be explained efficiently. Visuals are a mode of representing information through images, symbols and graphics. They play an impactful role in decision making by facilitating quick recognition, comprehension and analysis of a situation, and they enhance problem-solving capabilities by enabling more data to be processed without overloading the decision maker. Research suggests that visuals offer an improved learning environment, by a factor of 400, compared to textual information. Visual information engages learners at a cognitive level and triggers the imagination, enabling users to process information faster (visuals are reported to be processed 60,000 times faster in the brain than text). Appropriate information, visualization, and its presentation are known to aid and intensify the decision-making process. However, most of the literature discusses the role of visual aids in comprehension and decision making during normal conditions alone. Unlike emergencies, in normal situations (e.g., our day-to-day lives) users are neither exposed to stringent time constraints nor faced with the anxiety of survival, and they have sufficient time to evaluate alternatives before making a decision. An emergency is an unexpected, possibly fatal real-life situation that may inflict serious harm on both human life and material possessions unless corrective measures are taken instantly. The situation demands that the exposed user negotiate a dynamic and unstable scenario with little or no preparation and still take swift, appropriate decisions to save lives or possessions. The resulting stress and anxiety restrict cue sampling, decrease vigilance, reduce the capacity of working memory, cause premature closure in evaluating alternative options, and result in task shedding. Limited time, uncertainty, high stakes and vague goals negatively affect the cognitive abilities needed to take appropriate decisions. Moreover, the naturalistic decision making of experts has been understood in far more depth than that of an ordinary user. Therefore, in this study, the author aims to understand the role of visual aids in supporting rapid comprehension and appropriate decisions during an emergency situation.

Keywords: cognition, visual, decision making, graphics, recognition

Procedia PDF Downloads 269
658 Retrospective Cartography of Tbilisi and Surrounding Area

Authors: Dali Nikolaishvili, Nino Khareba, Mariam Tsitsagi

Abstract:

Tbilisi has been the capital of Georgia since the 5ᵗʰ century. In the historical past the city area was covered by forest, but the situation has since changed dramatically. Dozens of problems are caused by the damage and destruction of green cover. At first glance the solution seems uncomplicated (planting trees and creating green quarters), yet, given the increasing tendency to build up open areas, the problem remains unsolved. Finding ways to overcome such obstacles is important, not least for protecting public health. The main aim of the research was retrospective cartography of the forested area of Tbilisi using GIS technology and remote sensing. The research on the dynamics of forest cover in Tbilisi and its surroundings included the following steps. The survey was based mainly on the retrospective mapping method. The next step was studying, comparing and identifying the narrative sources using GIS technology, and the last was an analysis of the changes from the 1980s to the present day based on the decoding of remotely sensed images. After a unified cartographic basis was created, maps and plans from different periods were linked to this geodatabase. Data on green parks, individual old plants in private yards and information from respondents (gathered with a questionnaire created in advance) were added to the basic database, along with the general plan of Tbilisi and scientific works. On the basis of an analysis of historical sources, including cartographic ones, forest-cover maps for different periods were produced. In addition, a catalogue of individual green parks (location, area, typical composition, name and so on) was compiled, which served as the basis for several thematic maps. Areas with a high rate of green-area degradation were identified, and several maps depicting the dynamics of Tbilisi's forest cover were created and analyzed. Methods of linking the data of old cartographic sources to a modern basis were also developed, and the results may be used in the urban planning of Tbilisi. Understanding, perceiving and analyzing the real condition of green cover in Tbilisi and its problems will, in turn, help in taking appropriate measures for the maintenance of old plants, in developing forests, and in properly planning parks, squares and recreational sites, because a healthy environment is the main condition for human health and for the rational development of the city.
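
As a hypothetical illustration of one step in such a workflow, the sketch below compares two classified forest/non-forest rasters (for example, one derived from 1980s imagery and one from present-day imagery, already co-registered to a shared geodatabase) and quantifies forest loss. The file names, class coding and pixel size are assumptions for the example, not the authors' data.

```python
# Hypothetical sketch: forest-cover change from two co-registered classified rasters.
import numpy as np
import rasterio

PIXEL_AREA_HA = 0.09  # e.g. 30 m x 30 m pixels = 0.09 ha (assumed)

def forest_area_ha(path):
    with rasterio.open(path) as src:
        forest_mask = src.read(1) == 1      # class 1 = forest (assumed coding)
    return forest_mask, forest_mask.sum() * PIXEL_AREA_HA

old_mask, old_ha = forest_area_ha("tbilisi_forest_1987.tif")   # assumed file name
new_mask, new_ha = forest_area_ha("tbilisi_forest_2018.tif")   # assumed file name
lost_ha = np.logical_and(old_mask, ~new_mask).sum() * PIXEL_AREA_HA

print(f"forest 1987: {old_ha:.0f} ha, forest now: {new_ha:.0f} ha, lost: {lost_ha:.0f} ha")
```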

Keywords: catalogue of green area, GIS, historical cartography, cartography, remote sensing, Tbilisi

Procedia PDF Downloads 137
657 Imbalance on the Croatian Housing Market in the Aftermath of an Economic Crisis

Authors: Tamara Slišković, Tomislav Sekur

Abstract:

This manuscript examines factors that affect demand and supply on the housing market in Croatia. The period from the beginning of this century until 2008 was characterized by a strong expansion of construction, housing and the real estate market in general. Demand for residential units was expanding, supported by the favorable lending conditions of banks. Indicators on the supply side, such as the number of newly built houses and the construction volume index, were also increasing. Rapid growth of demand, along with somewhat slower supply growth, led to a situation in which new apartments were sold before the completion of residential buildings. This resulted in a rise in housing prices, indicating a clear link between housing prices and supply and demand on the housing market. After 2008, however, general economic conditions in Croatia worsened and demand for housing fell dramatically, while supply descended at a much slower pace. Given the gap between supply and demand, it can be concluded that the housing market in Croatia is in imbalance. This trend is accompanied by a relatively small decrease in housing prices, and the final result is a large number of unsold housing units at relatively high price levels. For this reason, it can be argued that housing prices are sticky and that, consequently, the price level in the aftermath of the crisis does not correspond to the discrepancy between supply and demand on the Croatian housing market. The degree of rigidity of housing prices can be determined by including the housing price as an explanatory variable in the housing demand function. The other independent variables are a demographic variable (e.g., the number of households), the interest rate on housing loans, households' disposable income and rent. The equilibrium price is reached when the demand for housing equals its supply, and the speed of adjustment of actual prices to equilibrium prices reveals the extent to which prices are rigid. The latter requires including the housing price with a time lag as an independent variable when estimating the demand function. We also observe the supply side of the housing market in order to explain to what extent housing prices, and other variables that describe supply, explain the movement of new construction activity. In this context, we test whether new construction on the Croatian market depends on current prices or on prices with a time lag. The number of dwellings is used to approximate new construction (a flow variable), while housing prices (current or lagged), the stock of dwellings in the previous period (a stock variable) and a series of costs related to new construction are the independent variables. We conclude that the key reason for the imbalance on the Croatian housing market should be sought in the relative relationship between the price elasticities of supply and demand.
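
The sketch below shows one way the partial-adjustment idea described above could be set up: the lagged housing price enters the demand equation, and the coefficient on the lagged price indicates how slowly actual prices move toward the equilibrium implied by the fundamentals. The specification and column names are assumptions for illustration, not the authors' estimated model.

```python
# A minimal sketch (assumed specification, not the authors' model) of a housing
# demand equation with a lagged price term used to gauge price stickiness.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_price_adjustment(df: pd.DataFrame):
    """df is assumed to hold time series columns: price, households,
    interest_rate, income and rent (names are illustrative)."""
    df = df.copy()
    df["price_lag"] = df["price"].shift(1)
    model = smf.ols(
        "price ~ price_lag + households + interest_rate + income + rent",
        data=df.dropna(),
    ).fit()
    # Speed of adjustment: lambda = 1 - coefficient on the lagged price.
    # lambda near 0 means very sticky prices; near 1 means fast adjustment.
    return model, 1.0 - model.params["price_lag"]
```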

Keywords: Croatian housing market, economic crisis, housing prices, supply imbalance, demand imbalance

Procedia PDF Downloads 276
656 Rapid Detection of Cocaine Using Aggregation-Induced Emission and Aptamer Combined Fluorescent Probe

Authors: Jianuo Sun, Jinghan Wang, Sirui Zhang, Chenhan Xu, Hongxia Hao, Hong Zhou

Abstract:

In recent years, the diversification and industrialization of drug-related crime have posed significant threats to public health and safety globally. The spread of drug use to increasingly younger demographics and the persistence of drug-impaired driving underscore the urgency of this issue. Drug detection, a specialized forensic activity, is pivotal in identifying and analyzing substances involved in drug crimes; it relies on pharmacological and chemical knowledge and employs analytical chemistry and modern detection techniques. However, current drug detection methods are limited by their inability to perform semi-quantitative, real-time field analyses: they require extensive and complex laboratory-based preprocessing, expensive equipment and specialized personnel, and are hindered by long processing times. This study introduces an alternative approach using nucleic acid aptamers and aggregation-induced emission (AIE) technology. Nucleic acid aptamers, selected artificially for their specific binding to target molecules and their stable spatial structures, represent a new generation of biosensors following antibodies. Rapid advances in AIE technology, particularly in tetraphenylethene-based luminogens, offer simplicity in synthesis and versatility in modification, making them ideal for fluorescence analysis. In this work an AIE molecule was synthesized, isolated and purified, and a probe comprising the AIE molecule, a nucleic acid aptamer and an exonuclease was constructed for cocaine detection. The probe demonstrated significant changes in relative fluorescence intensity and selectivity towards cocaine over other drugs. Using 4-butoxytriethylammonium bromide tetraphenylethene (TPE-TTA) as the fluorescent probe, the aptamer as the recognition unit and Exo I as an auxiliary, the system achieved rapid detection of cocaine within 5 min in aqueous solution and in urine, with detection limits of 1.0 and 5.0 µmol/L, respectively. The probe maintained stability and interference resistance in urine, enabling quantitative cocaine detection within a certain concentration range. This fluorescent sensor significantly reduces sample preprocessing time, offers a basis for rapid on-site cocaine detection, and shows potential for miniaturized testing setups.

Keywords: drug detection, aggregation-induced emission (AIE), nucleic acid aptamer, exonuclease, cocaine

Procedia PDF Downloads 64
655 Dynamic Characterization of Shallow Aquifer Groundwater: A Lab-Scale Approach

Authors: Anthony Credoz, Nathalie Nief, Remy Hedacq, Salvador Jordana, Laurent Cazes

Abstract:

Groundwater monitoring is classically performed in a network of piezometers on industrial sites. Groundwater flow parameters, such as direction, sense and velocity, are deduced from indirect measurements between two or more piezometers. Groundwater sampling is generally done over the whole column of water inside each borehole to provide concentration values for each piezometer location. These flow and concentration values give a global 'static' image of how a potential plume of contaminants evolves in the shallow aquifer, with large uncertainties in time and space scales and in mass discharge dynamics. The TOTAL R&D Subsurface Environmental team is challenging this classical approach with an innovative, dynamic way of characterizing shallow aquifer groundwater. The current study aims at optimizing tools and methodologies for (i) direct, multilevel measurement of groundwater velocities in each piezometer and (ii) calculation of the potential flux of dissolved contaminant in the shallow aquifer. Lab-scale experiments were designed to test commercial and R&D tools in a controlled sandbox. Multiphysics modeling was performed, taking into account the Darcy equation in the porous medium and the Navier-Stokes equations in the borehole. The first step of the study focused on groundwater flow at the porous medium/piezometer interface. Large uncertainties between direct flow-rate measurements in the borehole and the Darcy flow rate in the porous medium were characterized during the experiments and modeling. The structure and location of the tools in the borehole also affected the results and the uncertainties of the velocity measurement. In parallel, a direct-push tool was tested and gave more accurate results. The second step of the study focused on the mass flux of dissolved contaminant in groundwater. Several active and passive commercial and R&D tools were tested in the sandbox, and reactive transport modeling was performed to validate the experiments at the lab scale. Selected tools will be deployed in field assays to better assess the mass discharge of dissolved contaminants at an industrial site. The long-term subsurface environmental strategy targets in-situ, real-time, remote and cost-effective monitoring of groundwater.
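
The sketch below works through the two quantities the lab-scale tools are meant to constrain: the Darcy flux in the porous medium and the corresponding dissolved-contaminant mass flux. The hydraulic conductivity, gradient, porosity and concentration are illustrative values only, not the experimental results.

```python
# Illustrative numbers only (not the experimental values): Darcy flux in a sandy
# porous medium and the corresponding dissolved-contaminant mass flux.
def darcy_flux(hydraulic_conductivity, hydraulic_gradient):
    """q = K * i  [m/s], the specific discharge in the porous medium."""
    return hydraulic_conductivity * hydraulic_gradient

def mass_flux(q, concentration, porosity=0.3):
    """Mass discharge per unit cross-section: J = q * C  [kg/(m^2 s)].
    The seepage (pore) velocity, if needed, is v = q / porosity."""
    return q * concentration, q / porosity

K = 1e-4        # m/s, medium sand (assumed)
i = 0.01        # dimensionless head gradient (assumed)
C = 0.005       # kg/m^3 dissolved contaminant (assumed)

q = darcy_flux(K, i)
J, v = mass_flux(q, C)
print(f"q = {q:.1e} m/s, pore velocity = {v:.1e} m/s, mass flux = {J:.1e} kg/m2/s")
```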

Keywords: dynamic characterization, groundwater flow, lab-scale, mass flux

Procedia PDF Downloads 167
654 Quercetin and INT3 Inhibits Endocrine Therapy Resistance and Epithelial to Mesenchymal Transition in MCF7 Breast Cancer Cells

Authors: S. Pradhan, D. Pradhan, G. Tripathy

Abstract:

Resistance to anti-estrogen treatment is a noteworthy cause of disease relapse and mortality in estrogen receptor alpha (ERα)-positive breast cancers. Tamoxifen or estrogen withdrawal increases the dependence of breast cancer cells on INT3 signaling. Here, we investigated the contribution of Quercetin and INT3 signaling in endocrine-resistant breast cancer cells. Methods: We used two models of endocrine-therapy-resistant (ETR) breast cancer: tamoxifen-resistant (TamR) and long-term estrogen-deprived (LTED) MCF7 cells. We assessed the migratory and invasive capacity of these cells by Transwell assay. Expression of epithelial-to-mesenchymal transition (EMT) regulators as well as INT3 receptors and targets was assessed by real-time PCR and western blot analysis. In addition, we tested in vitro anti-Quercetin monoclonal antibodies (mAbs) and gamma-secretase inhibitors (GSIs) as potential EMT-reversing therapeutic agents. Finally, we generated stable Quercetin-overexpressing MCF7 cells and assessed their EMT features and response to tamoxifen. Results: We found that ETR cells acquired an epithelial-to-mesenchymal transition (EMT) phenotype and showed increased levels of Quercetin and INT3 targets. Interestingly, we detected higher levels of INT3 but lower levels of INT31 and INT32, suggesting a switch to signaling through different INT3 receptors after acquisition of resistance. Anti-Quercetin monoclonal antibodies and the GSI PF03084014 were effective in blocking the Quercetin/INT3 axis and partly inhibiting the EMT process. As a consequence, cell migration and invasion were weakened and the stem-cell-like population was considerably decreased. Genetic silencing of Quercetin and INT3 produced comparable effects. Finally, stable overexpression of Quercetin was sufficient to make MCF7 cells unresponsive to tamoxifen through INT3 activation. Conclusions: ETR cells express abnormal levels of Quercetin and INT3, whose activation ultimately drives invasive behavior. Anti-Quercetin mAbs and the GSI PF03084014 reduce the expression of EMT molecules, decreasing cellular invasiveness. Quercetin overexpression induces tamoxifen resistance linked to acquisition of an EMT phenotype. Our findings suggest that targeting Quercetin and/or INT3 warrants further clinical assessment as a viable therapeutic strategy in endocrine-resistant breast cancer.

Keywords: quercetin, INT3, mesenchymal transition, MCF7 breast cancer cells

Procedia PDF Downloads 311
653 Relation of Consumer Satisfaction on Organization by Focusing on the Different Aspects of Buying Behavior

Authors: I. Gupta, N. Setia

Abstract:

Introduction: Buyer behavior is a series of practices or patterns that consumers follow before making a purchase. It begins when the consumer becomes aware of a need or desire for an item and concludes with the purchase transaction. Entrepreneurs cannot simply shake hands with their target audience and get to know them; research is often necessary, so every organization is primarily involved in continuous research to understand and satisfy consumer need patterns. Aims and Objectives: The aim of the present study is to examine the different behaviors of the consumer, including pre-purchase, purchase, and post-purchase behavior. Materials and Methods: In order to obtain results, face-to-face interviews were held with 80 people, the larger part of whom were female and of upper- or middle-class status. The prime source of data collection was primary; however, the study has also used the theoretical contributions of many researchers in their respective fields. Results: The majority of the respondents were females (70%) in the age group of 20-50. The collected data were analyzed with hypothesis-testing statistical techniques such as correlation analysis, single regression analysis, and ANOVA, which rejected the null hypothesis that there is no relation between researching consumer behavior at different stages and organizational performance. The main finding of this study is that simply focusing on the buying stage is not enough to gain profits and recognition; understanding the pre-purchase, purchase and post-purchase behavior of consumers plays a large role in organizational success. The outcomes demonstrate that an organization which addresses all three phases of purchasing behavior is able to establish a stronger brand image than its competitors, and such enterprises can observe customer behavior in a considerably more proficient manner. Conclusion: The analysis of consumer behavior presented in this study is an attempt to understand the factors affecting consumer purchasing behavior. The study reveals that corporations which work on understanding buying behavior, instead of focusing only on selling products, are more successful; as a result, such organizations perform well and grow rapidly, because consumers are the ones who can make or break a company. The face-to-face interviews clearly revealed that organizations reach the top when consumers are satisfied not just with the product but also with the services of the company. The study does not target a particular class of audience; its benefits apply broadly, and in particular to business organizations.
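
The sketch below illustrates the kind of hypothesis tests the abstract names (correlation, single regression and one-way ANOVA) on scores from 80 respondents. The data here are synthetic placeholders generated for the example, not the study's interview records.

```python
# Mock data and tests only, to illustrate correlation, simple regression and
# one-way ANOVA across the three behaviour stages (pre-purchase, purchase,
# post-purchase) for 80 respondents.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
research_score = rng.normal(3.5, 0.8, 80)            # how much the firm studies buyers
performance = 1.2 * research_score + rng.normal(0, 0.5, 80)

r, p_corr = stats.pearsonr(research_score, performance)
slope, intercept, r_val, p_reg, se = stats.linregress(research_score, performance)

# One-way ANOVA across the three behaviour stages scored by respondents
pre, buy, post = rng.normal(3.2, 0.7, 27), rng.normal(3.6, 0.7, 27), rng.normal(3.9, 0.7, 26)
f_stat, p_anova = stats.f_oneway(pre, buy, post)

print(f"r = {r:.2f} (p = {p_corr:.3g}), slope = {slope:.2f}, ANOVA p = {p_anova:.3g}")
```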

Keywords: consumer behavior, pre purchase, post purchase, consumer satisfaction

Procedia PDF Downloads 112
652 Hybrid Model of Strategic and Contextual Leadership in Pluralistic Organizations - A Qualitative Multiple Case Study

Authors: Ergham Al Bachir

Abstract:

This study adopts strategic leadership (Upper Echelons) as the core theory and contextual leadership theory as the research lens. This research asks how the external context impacts strategic leadership effectiveness to achieve the outcomes in pluralistic organizations (PO). The study explores how the context influences the selection of CEOs, top management teams (TMT), and their leadership effectiveness. POs are characterized by the multiple objectives of their top management teams, divergent objectives, multiple strategies, and multiple governing authorities. The research question is explored by means of a qualitative multiple-case study focusing on healthcare, real estate, and financial services organizations. The data sources are semi-structured interviews, documents, and direct observations. The data analysis strategy is inductive and deploys thematic analysis and cross-case synthesis. The findings differentiate between national and international CEOs' delegation of authority and relationship with the Board of Directors. The findings identify the elements of the dynamic context that influence TMT and PO outcomes. The emergent hybrid strategic and contextual leadership framework shows how the different contextual factors influence strategic direction, PO context, selection of CEOs and TMT, and the outcomes in four pluralistic organizations. The study offers seven theoretical contributions to Upper Echelons, strategic leadership, and contextual leadership research. (1) The integration of two theories revealed how CEO’s impact on the organization is complementary to the contextual impact. (2) Conducting this study in the Middle East contributes to strategic leadership and contextual leadership research. (3) The demonstration of the significant contextual effects on the selection of CEOs. (4 and 5) Two contributions revealed new links between the context, the Board role, internal versus external CEOs, and national versus international CEOs. (6 and 7) This study offered two definitions: what accounts for CEO leadership effectiveness and organizational outcomes. Two methodological contributions were also identified: (1) Previous strategic leadership and Upper Echelons research are mainly quantitative, while this study adopts qualitative multiple-case research with face-to-face interviews. (2) The extrication of the CEO from the TMT advanced the data analysis in strategic leadership research. Four contributions are offered to practice: (1) The CEO's leadership effectiveness inside and outside the organization. (2) Rapid turnover of predecessor CEOs signifies the need for a strategic and contextual approach to CEOs' succession. (3) TMT composition and education impact on TMT-CEO and TMT-TMT interface. (4) Multilevel strategic contextual leadership development framework.

Keywords: strategic leadership, contextual leadership, upper echelons, pluralistic organizations, cross-cultural leadership

Procedia PDF Downloads 95
651 Modeling of in 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions

Authors: M. Tarik Boyraz, M. Bilge Imer

Abstract:

Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in the manufacture of industrial gas turbine blades. With a carefully designed microstructure and suitable alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate the mechanical properties of IN 738 LC alloy solely on the basis of simulations for projected heat treatment or service conditions. The microstructure of IN 738 LC (the size, fraction and frequency of the gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and the grain size) needs to be optimized through heat treatment to improve the high-temperature mechanical properties. This process can be performed at different soaking temperatures, times and cooling rates. In this work, microstructural evolution studies were performed experimentally under various heat treatment conditions, and these findings were used as input for further simulation studies. The operation time, soaking temperature and cooling rate of the experimental heat treatment procedures were used as microstructural simulation inputs. The simulation results were compared with the size, fraction and frequency of the γ′ and carbide phases and the grain size obtained by SEM (EDS module and mapping), EPMA (WDS module) and optical microscopy before and after heat treatment. After iterative comparison of the experimental findings and simulations, an offset was determined to reconcile the experimental and theoretical results, making it possible to estimate the final microstructure without carrying out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was then used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of the precipitation, solid solution and grain boundary strengthening contributions in the microstructure. Creep rate was calculated as a function of stress, temperature and microstructural factors such as dislocation density, precipitate size and the inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. In this way, the ability to determine the heat treatment conditions that best achieve the desired microstructural and mechanical properties was developed for IN 738 LC based entirely on simulations.
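
The sketch below shows only the general structure of such a property model: a yield stress built from additive strengthening contributions and a power-law (Norton-type) creep rate driven by stress, temperature and a microstructural length. All coefficients are placeholders chosen for illustration and are not calibrated to IN 738 LC.

```python
# Schematic form only (placeholder coefficients, not calibrated to IN 738 LC):
# yield stress as a sum of strengthening terms and a Norton-type creep law
# with an extra microstructural factor.
import math

R = 8.314  # universal gas constant, J/(mol K)

def yield_stress(f_gamma_prime, d_gamma_prime_nm, grain_size_um,
                 sigma_ss=300.0, k_ppt=900.0, k_hp=750.0):
    """sigma_y ~ solid solution + precipitate + Hall-Petch grain boundary terms (MPa)."""
    precipitation = k_ppt * math.sqrt(f_gamma_prime / d_gamma_prime_nm)
    grain_boundary = k_hp / math.sqrt(grain_size_um)
    return sigma_ss + precipitation + grain_boundary

def creep_rate(stress_mpa, temp_k, particle_spacing_nm,
               A=1e-6, n=5.0, Q=450e3):
    """Power-law creep modulated by the inter-particle spacing of precipitates."""
    return A * (stress_mpa ** n) * math.exp(-Q / (R * temp_k)) / particle_spacing_nm

print(f"sigma_y ~ {yield_stress(0.45, 80.0, 500.0):.0f} MPa")
print(f"creep rate ~ {creep_rate(400.0, 1123.0, 120.0):.2e} 1/s")
```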

Keywords: heat treatment, IN738LC, simulations, super-alloys

Procedia PDF Downloads 248
650 Transition towards a Market Society: Commodification of Public Health in India and Pakistan

Authors: Mayank Mishra

Abstract:

A market economy can be broadly defined as an economic system in which supply and demand regulate the economy, and in which decisions pertaining to production, consumption, allocation of resources, price and competition are made through the collective actions of individuals or organisations with limited government intervention. A market society, on the other hand, is one in which, instead of the economy being embedded in social relations, social relations are embedded in the economy. A market economy becomes a market society when land, labour and capital are all commodified. This transition also affects people's attitudes and values, and it begins to reach the non-material aspects of life such as public education and public health. The introduction of neoliberal policies into non-market norms has altered the nature of social goods like public health, raising the following questions: What impact does the transition to a market society have on people's access to public health? Is healthcare a commodity that can be subjected to a competitive marketplace? What kinds of private investments are being made in public health, and how do private investments alter the nature of a public good like healthcare? This research problem will employ an empirical-analytical approach, including deductive reasoning that uses the existing concepts of market economy and market society as the foundation for the analytical framework and the hypotheses to be examined. The research also intends to incorporate the naturalistic elements of qualitative methodology, which refers to studying real-world situations as they unfold. The research will analyse the existing literature on the subject; concomitantly, it will draw on primary literature, including reports from the World Bank, the World Health Organisation (WHO) and the relevant ministries of the respective countries. This paper endeavours to highlight how the commodification of public health leads to a continual increase in its inaccessibility, producing a stratification of healthcare services in which the quality of service one can obtain depends on one's ability to pay. Since the fundamental maxim of private investment is to generate profit, such trends have a detrimental effect on society at large, perpetuating the gap between the haves and the have-nots. Increasing private investment, both domestic and foreign, in the public health sector is making public health services ever less accessible, and despite the increase in various public health schemes, the quality and impact of government public health services are in continuous decline.

Keywords: commodity, India and Pakistan, market society, public health

Procedia PDF Downloads 314
649 Policies to Reduce the Demand and Supply of Illicit Drugs in the Latin America: 2004 to 2016

Authors: Ana Caroline Ibrahim Lino, Denise Bomtempo Birche de Carvalho

Abstract:

The background of this research is the international process of control and monitoring of illicit psychoactive substances that commenced in the early 20th century. This process was intensified by the UN Single Convention on Narcotic Drugs of 1961 and culminated in the 1970s with the "War on Drugs", a doctrine undertaken by the United States of America. Since then, the phenomenon of drug prohibition has driven debates about alternative public policies to confront its consequences at the global level and in the specific context of Latin America. Previous research has answered the following key questions: a) With what characteristics and models was the international illicit drug control system consolidated in Latin America through the creation of the Organization of American States (OAS) and the Inter-American Drug Abuse Control Commission (CICAD)? b) What drug policies and programs were set as guidelines for the member states by the OAS and CICAD? The present paper mainly addresses the analysis of the drug strategies developed by the OAS/CICAD for the Americas from 2004 to 2016. The primary sources were extracted from OAS/CICAD documents and reports listed on the websites of these organizations. Secondary sources refer to bibliographic research on the subject with the following descriptors: illicit drugs, public policies, international organizations, OAS, CICAD, and reducing the demand and supply of illicit drugs. The "content analysis" technique was used to organize the collected material and to choose the axes of analysis. The results show that the policies, strategies and action plans for Latin America were focused on anti-drug actions from the creation of the Commission until 2010. The discourses and policies on reducing drug demand and supply were presented as central to solving the problem; however, the real focus was on eliminating the substances by controlling the production, marketing and distribution of illicit drugs, and little attention was given to users and their families. The research is of great relevance to Social Work: the guidelines and parameters of the social worker's profession are in line with the need for the social, ethical and political strengthening of any dimension that guarantees the rights of users of psychoactive substances. In addition, it contributes to an understanding of the political, economic, social and cultural factors that structure prohibitionism, whose matrix anchors the deprivation of rights and violence.

Keywords: illicit drug policies, international organizations, latin America, prohibitionism, reduce the demand and supply of illicit drugs

Procedia PDF Downloads 163
648 The First Import of Yellow Fever Cases in China and Its Revealing Suggestions for the Control and Prevention of Imported Emerging Diseases

Authors: Chao Li, Lei Zhou, Ruiqi Ren, Dan Li, Yali Wang, Daxin Ni, Zijian Feng, Qun Li

Abstract:

Background: In 2016, yellow fever was detected in China for the first time, soon after the yellow fever epidemic occurred in Angola. Following the discovery, China promptly issued a national protocol for control and prevention and strengthened surveillance of passengers and vectors. In this study, a descriptive analysis was conducted to summarize China's experience in responding to this imported epidemic, in the hope of informing the prevention and control of yellow fever and other similarly imported infectious diseases in the future. Methods: The imported cases were discovered and reported by the General Administration of Quality Supervision, Inspection and Quarantine (AQSIQ) and several hospitals. Each clinically diagnosed yellow fever case was confirmed by real-time reverse transcriptase polymerase chain reaction (RT-PCR). Data on the imported yellow fever cases were collected by local Centers for Disease Control and Prevention (CDC) through field investigations conducted soon after the reports were received. Results: A total of 11 cases imported from Angola were reported in China during Angola's yellow fever outbreak. Six cases were discovered by the AQSIQ, of which two with mild symptoms were self-declared at the time of entry. Except for one death, the remaining 10 cases recovered after timely and proper treatment. All cases were Chinese nationals living in Luanda, the capital of Angola; 73% (8/11) were retailers from Fuqing city in Fujian province, and the other three were laborers sent by companies. Ten cases had sought medical treatment in Luanda after onset, of whom eight visited the same local Chinese medicine hospital (China Railway Four Bureau Hospital). Among the 11 cases, only one had been effectively vaccinated. Emergency surveillance of mosquito density showed that only 14 water containers were found positive around the residences of three cases, and the Breteau Index was 15. Conclusions: An effective response was mounted to control and prevent an outbreak of yellow fever in China after the imported cases were discovered. However, although the shared origin of the Chinese community in Angola provided easy access for disease detection, information sharing, health education and vaccination against yellow fever, these opportunities were overlooked in previous prevention efforts. In addition, the fact that only one case had an effective vaccination reveals the inadequate capacity of the immunization service in China. These findings provide suggestions for improving China's capacity to deal not only with yellow fever but also with other similarly imported diseases.

Keywords: yellow fever, first import, China, suggestion

Procedia PDF Downloads 191
647 Dose Profiler: A Tracking Device for Online Range Monitoring in Particle Therapy

Authors: G. Battistoni, F. Collamati, E. De Lucia, R. Faccini, C. Mancini-Terracciano, M. Marafini, I. Mattei, S. Muraro, V. Patera, A. Sarti, A. Sciubba, E. Solfaroli Camillocci, M. Toppi, G. Traini, S. M. Valle, C. Voena

Abstract:

Accelerated charged particles, mainly protons and carbon ions, are presently used in Particle Therapy (PT) to treat solid tumors. The precision of PT exploiting the charged particle high localized dose deposition in tissues and biological effectiveness in killing cancer cells demands for an online dose monitoring technique, crucial to improve the quality assurance of treatments: possible patient mis-positionings and biological changes with respect to the CT scan could negatively affect the therapy outcome. In PT the beam range confined in the irradiated target can be monitored thanks to the secondary radiation produced by the interaction of the projectiles with the patient tissue. The Dose Profiler (DP) is a novel device designed to track charged secondary particles and reconstruct their longitudinal emission distribution, correlated to the Bragg peak position. The feasibility of this approach has been demonstrated by dedicated experimental measurements. The DP has been developed in the framework of the INSIDE project, MIUR, INFN and Centro Fermi, Museo Storico della Fisica e Centro Studi e Ricerche 'E. Fermi', Roma, Italy and will be tested at the Proton Therapy center of Trento (Italy) within the end of 2017. The DP combines a tracker, made of six layers of two-view scintillating fibers with square cross section (0.5 x 0.5 mm2) with two layers of two-view scintillating bars (section 12.0 x 0.6 mm2). The electronic readout is performed by silicon photomultipliers. The sensitive area of the tracking planes is 20 x 20 cm2. To optimize the detector layout, a Monte Carlo (MC) simulation based on the FLUKA code has been developed. The complete DP geometry and the track reconstruction code have been fully implemented in the MC. In this contribution, the DP hardware will be described. The expected detector performance computed using a dedicated simulation of a 220 MeV/u carbon ion beam impinging on a PMMA target will be presented, and the result will be discussed in the standard clinical application framework. A possible procedure for real-time beam range monitoring is proposed, following the expectations in actual clinical operation.

Keywords: online range monitoring, particle therapy, quality assurance, tracking detector

Procedia PDF Downloads 240
646 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials

Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov

Abstract:

Modern commercials presented on billboards, on TV and on the Internet contain a great deal of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus-group studies often cannot reveal important features of how the text that has been read is interpreted and understood. In addition, there is no reliable method to determine the degree to which the information contained in a text is understood: the mere fact of viewing a text does not mean that the consumer has perceived and grasped its meaning. At the same time, tools based on marketing analysis allow the process of reading and understanding a text to be estimated only indirectly. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time to assess both the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during reading can form the basis for such an objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using correlation analysis. We used eye-tracking to record eye-movement parameters as a measure of visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin galvanic reaction, SGR) that reflect the emotional state of the respondent during reading. We revealed reliable interrelations between perception of the information and the dynamics of psychophysiological parameters while reading the text in commercials. Eye-movement parameters reflected the difficulties respondents experienced in perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial. EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: Our methodology allows a multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model that estimates comprehension of the commercial text on a percentage scale based on all of the markers noted.
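
A minimal illustration of the correlation step described above is sketched below: a per-respondent comprehension score is related to eye-movement, EEG alpha-band and SGR markers. The arrays are synthetic and the marker names are illustrative; the real study uses recorded signals.

```python
# Mock example of correlating a comprehension score with multimodal markers.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 40
comprehension = rng.uniform(0, 100, n)                 # percent-scale score
markers = {
    "mean_fixation_ms": 220 - 0.4 * comprehension + rng.normal(0, 15, n),
    "alpha_power_rel": 0.30 + 0.002 * comprehension + rng.normal(0, 0.03, n),
    "sgr_peaks_per_min": 6 - 0.03 * comprehension + rng.normal(0, 0.8, n),
}

for name, series in markers.items():
    r, p = pearsonr(series, comprehension)
    print(f"{name:>20}: r = {r:+.2f}, p = {p:.3g}")
```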

Keywords: reading, commercials, eye movements, EEG, polygraphic indicators

Procedia PDF Downloads 166
645 Experimental Investigation of Hydrogen Addition in the Intake Air of Compressed Engines Running on Biodiesel Blend

Authors: Hendrick Maxil Zárate Rocha, Ricardo da Silva Pereira, Manoel Fernandes Martins Nogueira, Carlos R. Pereira Belchior, Maria Emilia de Lima Tostes

Abstract:

This study investigates experimentally the effects of hydrogen addition in the intake manifold of a diesel generator operating with a 7% biodiesel-diesel oil blend (B7). An experimental setup was used to conduct performance and emissions tests on a single-cylinder, air-cooled diesel engine. The setup consisted of a generator set connected to a wire-wound resistor load bank that was used to vary the engine load. In addition, a flowmeter was used to determine the hydrogen volumetric flow rate, and a digital anemometer coupled with an air box measured the air flow rate. A digital precision electronic scale was used to measure engine fuel consumption, and a gas analyzer determined the exhaust gas composition and exhaust gas temperature. A thermocouple was installed near the exhaust collector to measure cylinder temperature. In-cylinder pressure was measured using an AVL Indumicro data acquisition system with a piezoelectric pressure sensor, and an AVL optical encoder installed on the crankshaft was synchronized with the in-cylinder pressure in real time. The experimental procedure consisted of injecting hydrogen into the engine intake manifold at mass concentrations of 2, 6, 8 and 10% of the total fuel mass (B7 + hydrogen), which represented energy fractions of 5, 15, 20 and 24% of the total fuel energy, respectively. Because of the hydrogen addition, the total amount of fuel energy introduced increased, and the generator's fuel injection governor prevented any increase in engine speed. Several conclusions can be drawn from the test results. A reduction in specific fuel consumption was noted as the hydrogen concentration increased. Likewise, emissions of carbon dioxide (CO2), carbon monoxide (CO) and unburned hydrocarbons (HC) decreased as the hydrogen concentration increased. On the other hand, nitrogen oxide (NOx) emissions increased because the average temperatures inside the cylinder were higher. There was also an increase in peak cylinder pressure and in the heat release rate inside the cylinder, since the ignition delay was shorter at higher hydrogen contents. All of this indicates that hydrogen promotes faster combustion and higher heat release rates and can be an important additive for all kinds of fuels used in diesel generators.
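
As a quick check of the mass-to-energy conversion quoted above (2, 6, 8 and 10% hydrogen by mass corresponding to roughly 5, 15, 20 and 24% of the fuel energy), the sketch below uses typical lower heating values as assumptions; the LHV figures are not stated in the abstract.

```python
# Quick check of the quoted mass-to-energy conversion, using assumed lower
# heating values for hydrogen and the B7 blend.
LHV_H2 = 120.0   # MJ/kg, hydrogen (assumed)
LHV_B7 = 42.5    # MJ/kg, B7 diesel blend (assumed)

def energy_fraction(mass_fraction_h2):
    """Share of total fuel energy supplied by hydrogen for a given mass share."""
    e_h2 = mass_fraction_h2 * LHV_H2
    e_b7 = (1.0 - mass_fraction_h2) * LHV_B7
    return e_h2 / (e_h2 + e_b7)

for m in (0.02, 0.06, 0.08, 0.10):
    print(f"{m:.0%} H2 by mass  ->  {energy_fraction(m):.0%} of fuel energy")
# This reproduces roughly the 5, 15, 20 and 24% energy fractions given above.
```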

Keywords: diesel engine, hydrogen, dual fuel, combustion analysis, performance, emissions

Procedia PDF Downloads 351
644 An Experimental Study on the Coupled Heat Source and Heat Sink Effects on Solid Rockets

Authors: Vinayak Malhotra, Samanyu Raina, Ajinkya Vajurkar

Abstract:

Enhancing rocket efficiency by controlling external factors in solid rocket motors has been an active area of research for most terrestrial and extra-terrestrial system operations. Appreciable work has been done, but the complexity of the problem, with its heterogeneous heat and mass transfer, has prevented a thorough understanding. Severe incidents are on record, amounting to irreplaceable loss of human life, instruments and facilities, despite the huge amounts of money invested every year. The coupled effect of an external heat source and an external heat sink is an aspect of combustion yet to be articulated. A better understanding of this coupled phenomenon will lead to higher safety standards, more efficient missions and reduced hazard risks, with better design, validation and testing. The experiment will help in understanding the coupled effect of an external heat sink and heat source on the burning process, contributing to better combustion and fire safety, both of which are very important for efficient and safer rocket flights and space missions. Safety is the most prevalent issue in rockets; combined with poor combustion efficiency, it drives research efforts to evolve superior rockets. The work has real engineering, scientific and practical relevance for systems and applications, one of which is the Solid Rocket Motor (S.R.M.). The study may help in: (i) understanding the effect on the efficiency of core engines due to the primary boosters when considered as a source, (ii) choosing suitable heat sink materials for space missions so as to vary the efficiency of the solid rocket depending on the mission, and (iii) giving an idea of how preheating of a successive stage, with the previous stage acting as a source, may affect the mission. The present work tracks the resultant temperature, and thus the heat transfer, which is expected to be non-linear because of the heterogeneous heat and mass transfer. The study will deepen the understanding of controlled inter-energy conversions and of the coupled effect of external source(s) and sink(s) surrounding the burning fuel, eventually leading to better combustion and thus better propulsion. The work is motivated by the need for enhanced fire safety and better rocket efficiency. The specific objective of the work is to understand the coupled effect of an external heat source and sink on propellant burning and to investigate the role of the key controlling parameters. Results so far indicate that a singularity exists in the coupled effect, and that the relative dominance of the external heat sink and heat source decides the relative rocket flight performance in Solid Rocket Motors (S.R.M.).

Keywords: coupled effect, heat transfer, sink, solid rocket motors, source

Procedia PDF Downloads 223
643 Questioning the Predominant Feminism in Ahalya, a Short Film by Sujoy Ghosh

Authors: Somya Sharma

Abstract:

Ahalya, the critically acclaimed short film, is known for demolishing the gender constructs of the age-old myth of Ahalya. The paper tries to crack the overt meaning of the short film by reading between the dialogues and deconstructing the idea of pseudo-feminism in the short film Ahalya by Sujoy Ghosh. The film subverts the role of the male character by making it seem submissive compared to the female character's, but this appears to be only a surface-level reading of the text. It seems that Sujoy Ghosh has played not just with changing the paradigm but has also attempted to alter history in doing so. The age-old myth of placing Ahalya among the five virgins (panchkanya) of Hindu mythology is explored in the paper. God's manoeuvre cannot be questioned, and the two male characters again shape the deeds and the life of the female character, Ahalya. It is important to note that even in the 21st century, progressive actors like Radhika Apte fail to acknowledge the politics of altering history in ways that are not progressive. On a first watch, the film blinds the viewer into falling for the female strength and ownership of her sexuality, which is reflected in the opening scene itself, where she opens the gate for the policeman Indra Sen (representing the god Indra who seduced her), who is charmed by her white dress. White, in Hindu mythology, stands for mourning, and this can be read as a hint towards the prophecy of what is about to come. Ahalya, bold, strong and confident in this scene, seems to be in total ownership of her sexual identity. As the film progresses, Ahalya's control over her acts becomes even more dominant. In the myth of Ahalya, Gautama Maharishi, her husband, who wins her by Brahma's courtesy, curses her for her infidelity. She is then turned into a stone because of the curse and is redeemed when Lord Rama's foot brushes the stone. In the film, it is with Ahalya's help that Goutam Sadhu turns Indra Sen into a stone doll. Ahalya is seen as a seductress who bewitches Indra Sen, and because the latter falls for the trap laid by the husband-wife duo, he is turned into a doll. The attempt made by the paper is to read Ahalya as the character of the stand-in wife who is yet again a pawn in the play of Goutama's revenge on Indra (who in the myth is able to escape any curse or punishment for the act). The paper therefore inverts the idea that the film has so far signified and attempts to study the feminism this film appropriates. It is essential to break down the structure formed by such overtly transgressive films in order to provide a real outlook on how feminism is twisted and moulded according to a man's wishes.

Keywords: deconstructing, Hindu mythology, Panchkanya, predominant feminism, seductress, stone doll

Procedia PDF Downloads 254
642 Photovoltaic Modules Fault Diagnosis Using Low-Cost Integrated Sensors

Authors: Marjila Burhanzoi, Kenta Onohara, Tomoaki Ikegami

Abstract:

Faults in photovoltaic (PV) modules should be detected as early and as comprehensively as possible. For that purpose, conventional fault detection methods such as electrical characterization, visual inspection, infrared (IR) imaging, ultraviolet fluorescence and electroluminescence (EL) imaging are used, but they either fail to identify the location or category of the fault, or they require expensive equipment and are not convenient for on-site application. Hence, these methods are not convenient for monitoring small-scale PV systems. Therefore, low-cost and efficient inspection techniques suitable for on-site application are indispensable for PV modules. In this study, in order to establish an efficient inspection technique, the correlation between faults and the magnetic flux density on the surface of crystalline PV modules is investigated. The magnetic flux on the surface of normal and faulted PV modules is measured under short-circuit and illuminated conditions using two different sensor devices. One device is made of small integrated sensors, namely a 9-axis motion-tracking sensor with an embedded 3-axis electronic compass, an IR temperature sensor, an optical laser position sensor and a microcontroller. This device measures the X, Y and Z components of the magnetic flux density (Bx, By and Bz) a few mm above the surface of a PV module and outputs the data as line graphs in a LabVIEW program. The second device is made of a laser optical sensor and two magnetic line sensor modules, each consisting of 16 magnetic sensors. This device scans the magnetic field on the surface of a PV module and outputs the data as a 3D surface plot of the magnetic flux intensity in a LabVIEW program. A PC equipped with LabVIEW software is used for data acquisition and analysis for both devices. To show the effectiveness of the method, measured results are compared to those of a normal reference module and to their EL images. The experiments confirmed that the magnetic field in the faulted areas has distinct profiles that can be clearly identified in the measured plots. The measurement results showed a perfect correlation with the EL images, and using the position sensors the exact location of faults was identified. The method was applied to different modules, and various faults were detected with it. The proposed method offers on-site measurement and real-time diagnosis. Since simple sensors are used to build the devices, the method is low cost and convenient for use by owners of small-scale or residential PV systems.
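
A minimal sketch of the comparison logic implied above is given below: a scanned flux map is compared against that of a normal reference module and strongly deviating regions are flagged. The 3-sigma rule, the synthetic scan grid and the injected anomaly are assumptions for illustration, not the authors' LabVIEW implementation.

```python
import numpy as np

# Minimal sketch (not the authors' LabVIEW implementation): flag faulted regions by
# comparing a module's scanned flux magnitude map against a normal reference module.

def flag_fault_regions(b_test, b_ref, n_sigma=3.0):
    """Boolean map marking scan points whose flux deviates strongly from the reference."""
    deviation = b_test - b_ref                        # difference of |B| maps on the same grid
    threshold = n_sigma * deviation.std()             # assumed 3-sigma criterion
    return np.abs(deviation - deviation.mean()) > threshold

rng = np.random.default_rng(0)
b_ref = 50 + rng.normal(0, 0.5, (16, 50))              # reference scan: 16 line sensors x 50 positions
b_test = b_ref + rng.normal(0, 0.5, (16, 50))          # module under test, with measurement noise
b_test[6:9, 20:25] += 8.0                              # injected local anomaly (e.g. a faulted cell area)
print(np.argwhere(flag_fault_regions(b_test, b_ref)))  # indices of the flagged region
```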

Keywords: fault diagnosis, fault location, integrated sensors, PV modules

Procedia PDF Downloads 224
641 Evaluation of Sequential Polymer Flooding in Multi-Layered Heterogeneous Reservoir

Authors: Panupong Lohrattanarungrot, Falan Srisuriyachai

Abstract:

Polymer flooding is a well-known technique for controlling the mobility ratio in heterogeneous reservoirs, leading to improved sweep efficiency as well as a better wellbore profile. However, the low injectivity of a viscous polymer solution attenuates the oil recovery rate and consequently adds extra operating cost. This study attempts to improve the injectivity of the polymer solution while maintaining the recovery factor, enhancing the effectiveness of the polymer flooding method. The study is performed with a reservoir simulation program, modifying the conventional single polymer slug into sequential polymer flooding, with emphasis on increased injectivity and reduced polymer consumption. Operating conditions for the single polymer slug, including pre-injected water, polymer concentration and polymer slug size, are first selected for a layered heterogeneous reservoir with a Lorenz coefficient (Lk) of 0.32. The selected single-slug polymer flooding scheme is then modified into sequential polymer flooding with reduction of polymer concentration in two different modes: constant polymer mass and reduced polymer mass. The effect of the Residual Resistance Factor (RRF) is also evaluated. From the simulation results, it is observed that the first polymer slug, with the highest concentration, mainly acts as a buffer between the displacing phase and the reservoir oil. Moreover, part of the polymer from this slug is sacrificed to adsorption. Reducing the polymer concentration in the following slugs prevents bypassing due to an unfavorable mobility ratio. At the same time, the following slugs, with lower viscosity, can be injected more easily through the formation, improving the injectivity of the whole process. Sequential polymer flooding with reduction of polymer mass shows a great benefit by reducing total production time and the amount of polymer consumed by up to 10% without any downside effect. The only advantage of using constant polymer mass is a slight increment in recovery factor (up to 1.4%), while total production time is almost the same. Increasing the residual resistance factor of the polymer solution benefits mobility control by reducing the effective permeability to water. Nevertheless, higher adsorption results in lower injectivity, extending total production time. Modifying a single polymer slug into a sequence of slugs with decreasing polymer concentration yields major benefits in reducing production time as well as polymer mass. With a suitable design of the polymer flooding scheme, the recovery factor can even be further increased. This study shows that sequential polymer flooding can certainly be applied to reservoirs with a high degree of heterogeneity, since it requires nothing complex for real implementation, just a proper design of polymer slug size and concentration.
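
A minimal sketch of the mobility-ratio reasoning behind the slug design follows. The endpoint-mobility form of the ratio, the way the RRF is folded into the water mobility, and all numerical values are illustrative assumptions, not inputs or outputs of the simulation study.

```python
# Minimal sketch of the mobility-ratio reasoning behind decreasing-concentration slugs.
# Assumed relation: M = (k_rw / (RRF * mu_water)) / (k_ro / mu_oil); M <= 1 is favorable.

def mobility_ratio(k_rw, k_ro, mu_w, mu_o, rrf=1.0):
    """Displacing/displaced mobility ratio for a water or polymer drive."""
    lambda_water = k_rw / (rrf * mu_w)   # polymer raises mu_w; RRF lowers effective water mobility
    lambda_oil = k_ro / mu_o
    return lambda_water / lambda_oil

# Plain waterflood (unfavorable M) vs. sequential slugs of decreasing viscosity (all values in cP):
print("water:", mobility_ratio(k_rw=0.3, k_ro=0.8, mu_w=0.5, mu_o=5.0))
for mu_p in (20.0, 10.0, 5.0):     # later slugs are less viscous, hence easier to inject
    print(f"polymer {mu_p} cP:", mobility_ratio(k_rw=0.3, k_ro=0.8, mu_w=mu_p, mu_o=5.0, rrf=1.5))
```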

Keywords: polymer flooding, sequential, heterogeneous reservoir, residual resistance factor

Procedia PDF Downloads 478
640 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization

Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford

Abstract:

The Urban Heat Island (UHI) is characterized by increased air temperatures in urban areas compared to the undeveloped rural surrounding environment. With urbanization and densification, the intensity of the UHI increases, bringing negative impacts on livability, health and the economy. In order to reduce those effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and the area available for development, non-trivial decisions regarding the buildings' dimensions and their spatial distribution are required. We develop a framework for the optimization of urban design that jointly minimizes UHI intensity and the buildings' energy consumption. First, the design constraints are defined according to spatial and population limits in order to establish realistic boundaries that would be applicable to real-life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate outputs of UHI intensity and total building energy consumption, respectively. Those outputs change with a set of variable inputs related to urban morphology, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast in which the utility function quantifies the performance of each design candidate (e.g. minimizing a linear combination of UHI intensity and energy consumption), and a set of constraints to be met is defined. Solving this optimization problem is difficult, since there is no simple analytic form representing the UWG and EnergyPlus models. We therefore cannot use direct optimization techniques but instead develop an indirect 'black box' optimization algorithm. To this end, we develop a solution based on a stochastic optimization method known as the Cross-Entropy method (CEM). The CEM translates the deterministic optimization problem into an associated stochastic optimization problem that is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Due to fast growth in population and built area, and the land availability generated by land reclamation, urban planning decisions are of the utmost importance for the country. Furthermore, the hot and humid climate raises concern about the impact of the UHI. The problem presented is highly relevant to early urban design stages, and the objective of the framework is to guide decision-makers and assist them in including and evaluating urban microclimate and energy aspects in the urban planning process.
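
To make the black-box optimization step concrete, a minimal Cross-Entropy method sketch follows. The objective function is a toy placeholder standing in for a UWG/EnergyPlus evaluation of a design candidate (here two assumed variables, building height and canyon width); sample size, elite fraction and iteration count are likewise illustrative.

```python
import numpy as np

# Minimal Cross-Entropy method (CEM) sketch for black-box minimization.
# black_box() is a placeholder for the coupled UWG/EnergyPlus evaluation of a design.

def black_box(x):
    # hypothetical combined cost: surrogate for weighted UHI intensity + energy consumption
    height, canyon_width = x
    return (height - 30.0) ** 2 / 100 + (canyon_width - 20.0) ** 2 / 50

def cem_minimize(objective, mean, std, n_samples=100, n_elite=10, n_iter=30):
    mean, std = np.asarray(mean, float), np.asarray(std, float)
    for _ in range(n_iter):
        samples = np.random.normal(mean, std, size=(n_samples, mean.size))
        scores = np.array([objective(s) for s in samples])
        elite = samples[np.argsort(scores)[:n_elite]]              # keep the best candidates
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6   # refit the sampling distribution
    return mean

print(cem_minimize(black_box, mean=[50.0, 40.0], std=[20.0, 20.0]))  # converges near (30, 20)
```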

Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator

Procedia PDF Downloads 134
639 Standardized Testing of Filter Systems regarding Their Separation Efficiency in Terms of Allergenic Particles and Airborne Germs

Authors: Johannes Mertl

Abstract:

Our surrounding air contains various particles. Besides typical representatives of inorganic dust, such as soot and ash, particles originating from animals, microorganisms or plants, so-called bioaerosols, also float through the air. The group of bioaerosols consists of a broad spectrum of particles of different sizes, including fungi, bacteria, viruses, spores, and tree, flower and grass pollen that are of high relevance for allergy sufferers. Depending on the environmental climate and the season, these allergenic particles can be found in enormous numbers in the air and are inhaled by humans via the respiratory tract, with a potential for inflammatory diseases of the airways, such as asthma or allergic rhinitis. As a consequence, air filter systems of ventilation and air conditioning devices are required to meet very high standards to prevent, or at least lower, the number of allergens and airborne germs entering the indoor air. Still, filter systems are merely classified by their separation rates using well-defined mineral test dust, while no sufficiently standardized test methods for bioaerosols exist. However, separation rates determined for mineral test particles of a certain size cannot simply be transferred to bioaerosols, as the separation efficiency of particularly fine and respirable particles (< 10 microns) depends not only on their shape and particle diameter but also on their density and physicochemical properties. For this reason, the OFI developed a test method that directly enables testing of filters and filter media for their separation rates on bioaerosols, as well as a classification of filters. Besides allergens from intact or fractured tree or grass pollen, allergenic proteins bound to particulates, as well as allergenic fungal spores (e.g. Cladosporium cladosporioides) or bacteria, can be used to classify filters with regard to their separation rates. Allergens passing through the filter can then be detected by highly sensitive immunological assays (ELISA) or, in the case of fungal spores, by microbiological methods, which allow the detection of even a single spore passing the filter. The test procedure, which is carried out at laboratory scale, was furthermore validated for its ability to cover real-life situations by upscaling with air conditioning devices, showing great conformity in terms of separation rates. Additionally, a clinical study with allergy sufferers was performed to verify the analytical results. Several different air conditioning filters from the car industry have been tested, showing significant differences in their separation rates.
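
For illustration, a minimal sketch of the separation-rate calculation underlying such a classification is given below: the rate is derived from the bioaerosol quantity challenging the filter versus the quantity detected downstream (by ELISA or spore counting). The class boundaries in the sketch are invented placeholders, not the thresholds of the OFI method.

```python
# Minimal sketch of the separation-rate calculation implied above.
# Class boundaries are invented placeholders for illustration only.

def separation_rate(upstream, downstream):
    """Fraction of the challenge bioaerosol retained by the filter."""
    return 1.0 - downstream / upstream

def classify(rate, boundaries=(0.99, 0.95, 0.90)):   # hypothetical class limits
    for grade, limit in zip("ABC", boundaries):
        if rate >= limit:
            return grade
    return "D"

rate = separation_rate(upstream=10_000, downstream=120)   # e.g. spores per test run
print(f"{rate:.1%} ->", classify(rate))                    # 98.8% -> B
```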

Keywords: airborne germs, allergens, classification of filters, fine dust

Procedia PDF Downloads 256
638 The Biological Function and Clinical Significance of Long Non-coding RNA LINC AC008063 in Head and Neck Squamous Carcinoma

Authors: Maierhaba Mijiti

Abstract:

Objective: The aim is to understand the relationship between the expression level of the long non-coding RNA LINC AC008063 and the clinicopathological parameters of patients with head and neck squamous cell carcinoma (HNSCC), and to clarify the biological function of LINC AC008063 in HNSCC cells. The study also aims to provide a potential biomarker for the diagnosis, treatment and prognosis evaluation of HNSCC. Methods: The expression level of LINC AC008063 in HNSCC was analyzed using transcriptome sequencing data from the TCGA (The Cancer Genome Atlas) database. The expression levels of LINC AC008063 in human embryonic lung diploid cells 2BS, human immortalized keratinocytes HACAT and the HNSCC cell lines CAL-27, Detroit562, AMC-HN-8, FD-LSC-1, FaDu and WSU-HN30 were determined by real-time quantitative PCR (qPCR). RNAi (RNA interference) was used to knock down LINC AC008063 in HNSCC cell lines, the localization and abundance of LINC AC008063 were determined by RT-qPCR, and the biological functions were examined by CCK-8, colony formation, flow cytometry, transwell invasion and migration assays, and Seahorse assay. Results: LINC AC008063 was upregulated in HNSCC tissue (P<0.001), which was verified by qPCR in HNSCC cell lines. The survival analysis revealed that the overall survival (OS) of patients in the high LINC AC008063 expression group was significantly lower than that of patients in the low LINC AC008063 expression group; the median survival times for the two groups were 33.10 months and 61.27 months, respectively (P=0.002). The clinical correlation analysis revealed that its expression was positively correlated with the age of patients with HNSCC (P<0.001) and with pathological T stage (T3+T4 > T1+T2, P=0.03). The RT-qPCR results showed that LINC AC008063 was mainly enriched in the cytoplasm (P=0.01). Knockdown of LINC AC008063 inhibited proliferation, colony formation, migration and invasion, and the glycolytic capacity was significantly decreased in HNSCC cell lines (P<0.05). Conclusion: A high level of LINC AC008063 was associated with the malignant progression of HNSCC and promoted the key biological functions of proliferation, colony formation, migration and invasion; in particular, glycolytic capacity decreased upon its knockdown in HNSCC cells. Therefore, LINC AC008063 may serve as a potential biomarker for HNSCC and a distinct molecular target for inhibiting glycolysis.
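
A sketch of the kind of survival comparison described above (high- versus low-expression groups) is given below; the data are synthetic placeholders and the lifelines package is used only for illustration, whereas the study itself worked with TCGA transcriptome and clinical data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Illustration only: Kaplan-Meier estimates and a log-rank test comparing high vs.
# low expression groups. Survival times and event indicators below are synthetic.
rng = np.random.default_rng(1)
t_high = rng.exponential(35, 80)     # survival times (months), high-expression group
t_low = rng.exponential(60, 80)      # survival times (months), low-expression group
e_high = rng.random(80) < 0.7        # event indicators (True = death observed)
e_low = rng.random(80) < 0.7

km = KaplanMeierFitter()
km.fit(t_high, event_observed=e_high, label="high expression")
print("median OS, high:", km.median_survival_time_)
km.fit(t_low, event_observed=e_low, label="low expression")
print("median OS, low:", km.median_survival_time_)

result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print("log-rank p-value:", result.p_value)
```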

Keywords: head and neck squamous cell carcinoma, oncogene, long non-coding RNA, LINC AC008063, invasion and metastasis

Procedia PDF Downloads 16
637 Measuring Systems Interoperability: A Focal Point for Standardized Assessment of Regional Disaster Resilience

Authors: Joel Thomas, Alexa Squirini

Abstract:

The key argument of this research is that every element of systems interoperability is an enabler of regional disaster resilience and, arguably, should become a focal point for the standardized measurement of communities' ability to work together. Few resilience research efforts have focused on the development and application of solutions that measurably improve communities' ability to work together at a regional level, yet a majority of the most devastating and disruptive disasters are those that have had a regional impact. The key findings of the research include a unique theoretical, mathematical and operational approach to tangibly and defensibly measure and assess the systems interoperability required to support crisis information management activities performed by governments, the private sector and humanitarian organizations. One of the most effective ways for communities to measurably improve regional disaster resilience is through deliberately executed disaster preparedness activities. Developing interoperable crisis information management capabilities is a crosscutting preparedness activity that greatly affects a community's readiness and ability to work together in times of crisis. Thus, improving communities' human and technical posture to work together in advance of a crisis, with the ultimate goal of enabling information sharing to support coordination and the careful management of available resources, is a primary means by which communities may improve regional disaster resilience. The resulting model describes how systems interoperability can be qualitatively and quantitatively assessed when characterized as five forms of capital: governance; standard operating procedures; technology; training and exercises; and usage. The unique measurement framework presented defines the relationships between systems interoperability, information sharing and safeguarding, operational coordination, community preparedness and regional disaster resilience, and offers a means to implement real-world solutions and measure progress over the course of a multi-year program. The model is being developed and piloted in partnership with the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and the North Atlantic Treaty Organization (NATO) Advanced Regional Civil Emergency Coordination Pilot (ARCECP) with twenty-three organizations in Bosnia and Herzegovina, Croatia, Macedonia, and Montenegro. The intended effect of the model implementation is to enable communities to answer two key questions: 'Have we measurably improved crisis information management capabilities as a result of this effort?' and, 'As a result, are we more resilient?'
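
Purely as a hypothetical illustration of how the five capitals might be rolled into a single comparable score and tracked across a multi-year program, a minimal sketch follows; the paper defines its own mathematical approach, which is not reproduced here, and all weights and sub-scores below are invented.

```python
# Hypothetical illustration: aggregate per-capital interoperability sub-scores
# (each on a 0-1 scale) into one weighted score to track progress year over year.
CAPITALS = ("governance", "standard operating procedures", "technology",
            "training and exercises", "usage")

def interoperability_score(scores, weights=None):
    """Weighted average of the five capital sub-scores."""
    weights = weights or {c: 1.0 for c in CAPITALS}
    total = sum(weights[c] for c in CAPITALS)
    return sum(weights[c] * scores[c] for c in CAPITALS) / total

baseline = {"governance": 0.4, "standard operating procedures": 0.3, "technology": 0.6,
            "training and exercises": 0.2, "usage": 0.3}
after_year_one = {**baseline, "training and exercises": 0.5, "usage": 0.45}   # e.g. after exercises
print(interoperability_score(baseline), "->", interoperability_score(after_year_one))
```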

Keywords: disaster, interoperability, measurement, resilience

Procedia PDF Downloads 143
636 Static Charge Control Plan for High-Density Electronics Centers

Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda

Abstract:

Ensuring a safe environment for sensitive electronics boards in places with severe size limitations poses two major difficulties: the control of charge accumulation in floating floors and the prevention of excess charge generation due to air cooling flows. In this paper, we discuss these mechanisms and possible solutions to prevent them. An experiment was performed in the control room of a Cherenkov telescope, where six racks of 2x1x1 m size with independent cooling units are located. The room is 10x4x2.5 m, and the electronics include high-speed digitizers, trigger circuits, etc. The floor used in this room was antistatic, but it was a raised floor mounted in a floating design to facilitate cable handling and maintenance. The tests were made by measuring the contact voltage acquired by a person walking around the room with footwear of different qualities. In addition, we measured the voltage accumulated on a person in other situations, such as running or sitting down on and standing up from an office chair. The voltages were recorded in real time with an electrostatic voltmeter and dedicated control software. Peak voltages as high as 5 kV were measured at ambient humidity above 30%, which is within the range of class 3A according to the HBM standard. To complete the results, we repeated the experiment in different spaces with alternative floor types, such as synthetic and earthenware floors, obtaining peak voltages much lower than those measured on the floating synthetic floor. The grounding quality achieved with this kind of floor can hardly match that typically encountered in standard floors glued directly onto a solid substrate. On the other hand, the air ventilation used to prevent overheating of the boards probably contributed significantly to the charge accumulated in the room. During the assessment of the quality of static charge control, it is necessary to guarantee that the tests are made under repeatable conditions. One of the major difficulties encountered during these assessments is that electrostatic voltmeters might provide different values depending on the humidity conditions and the quality of the ground resistance. In addition, the use of certified antistatic footwear might mask deficiencies in the charge control. In this paper, we show how we defined protocols to guarantee that electrostatic readings are reliable. We believe this can be helpful not only for qualifying static charge control in a laboratory but also for assessing any procedure oriented to minimizing the risk of electrostatic discharge events.
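
To put the measured 5 kV in context, a minimal sketch of the first-order human body model discharge is shown below, using the standard 100 pF / 1.5 kOhm HBM equivalent circuit; this is a textbook approximation for illustration, not part of the measurement protocol described in the paper.

```python
import math

# Sketch: ideal first-order HBM discharge for a 5 kV body voltage,
# using the standard 100 pF / 1.5 kOhm equivalent circuit.
C = 100e-12     # F, HBM body capacitance
R = 1500.0      # Ohm, HBM discharge resistance
V0 = 5000.0     # V, peak body voltage measured on the floating floor

def discharge_current(t):
    """i(t) = (V0 / R) * exp(-t / (R * C)) for an ideal HBM discharge."""
    return (V0 / R) * math.exp(-t / (R * C))

print(f"peak current      ~ {discharge_current(0):.2f} A")      # ~3.3 A at 5 kV
print(f"time constant     ~ {R * C * 1e9:.0f} ns")               # ~150 ns
print(f"current at 500 ns ~ {discharge_current(500e-9):.2f} A")
print(f"stored energy     ~ {0.5 * C * V0**2 * 1e3:.2f} mJ")      # ~1.25 mJ
```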

Keywords: electrostatics, ESD protocols, HBM, static charge control

Procedia PDF Downloads 131
635 Simulation of Solar Assisted Absorption Cooling and Electricity Generation along with Thermal Storage

Authors: Faezeh Mosallat, Eric L. Bibeau, Tarek El Mekkawy

Abstract:

The availability of a wide variety of renewable resources in Canada, such as large reserves of hydro, biomass, solar and wind, provides significant potential to improve the sustainability of energy use. As buildings represent a considerable portion of energy use in Canada, the application of distributed solar energy systems for heating and cooling may increase the share of renewable energy. Parabolic solar trough systems have seen limited deployment in cold northern climates, as they are more suitable for electricity production at southern latitudes. Heat production by concentrating solar rays using parabolic troughs can overcome the poor efficiencies of flat panels and evacuated tubes in cold climates. A numerical dynamic model is developed to simulate an installed parabolic solar trough facility in Winnipeg. The results of the numerical model are validated using experimental data obtained from this system. The model is developed in Simulink and is used to simulate a tri-generation system for heating, cooling and electricity generation in remote northern communities. The main objective of this simulation is to obtain operational data on solar troughs in cold climates, as such data are lacking in the literature. In this paper, the validated Simulink model is applied to simulate a solar-assisted absorption cooling system along with electricity generation using an organic Rankine cycle (ORC) and thermal storage. A control strategy is employed to distribute the heated oil from the solar collectors among the three systems above, considering their temperature requirements. The modeling provides dynamic performance results using minute-resolution meteorological data collected in real time at the same location where the solar system is installed. This is a significant step ahead of current models, as it accurately calculates the available solar energy at each time step, accounting for solar radiation fluctuations due to passing clouds. The solar absorption cooling is modeled to use the heat generated by the solar trough system and provide cooling in summer for a greenhouse located next to the solar field. A natural gas water heater provides the required excess heat for the absorption cooling during periods of low or no solar radiation. The results of the simulation are presented for a summer month in Winnipeg and include the amount of electric power generated by the ORC and the contribution of solar energy to the provision of the cooling load.
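
As a hypothetical illustration of the rule-based dispatch described above, a minimal sketch follows; the temperature thresholds, priorities and the helper function itself are assumptions for illustration, not the study's actual control parameters.

```python
# Hypothetical sketch of a per-time-step dispatch rule: solar-heated oil is routed by
# temperature requirement, with a natural gas heater backing up the absorption chiller.
# All thresholds and priorities are assumed for illustration.

def dispatch(oil_temp_c, cooling_demand_kw, storage_full):
    """Return the subsystem(s) the solar-heated oil is routed to at this time step."""
    route = []
    if oil_temp_c >= 160 and cooling_demand_kw > 0:
        route.append("absorption chiller")          # highest assumed temperature requirement
    if oil_temp_c >= 120 and not storage_full:
        route.append("thermal storage")
    if oil_temp_c >= 100:
        route.append("ORC generator")
    if cooling_demand_kw > 0 and "absorption chiller" not in route:
        route.append("natural gas backup heater")   # keep the chiller supplied on cloudy minutes
    return route or ["recirculate to collectors"]

print(dispatch(oil_temp_c=175, cooling_demand_kw=40, storage_full=False))
print(dispatch(oil_temp_c=90,  cooling_demand_kw=40, storage_full=True))
```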

Keywords: absorption cooling, parabolic solar trough, remote community, validated model

Procedia PDF Downloads 216
634 How Virtualization, Decentralization, and Network-Building Change the Manufacturing Landscape: An Industry 4.0 Perspective

Authors: Malte Brettel, Niklas Friederichsen, Michael Keller, Marius Rosenberg

Abstract:

The German manufacturing industry has to withstand increasing global competition on product quality and production costs. As labor costs are high, several industries have suffered severely from the relocation of production facilities towards aspiring countries, which have managed to close the productivity and quality gap substantially. Established manufacturing companies have recognized that customers are not willing to pay large price premiums for incremental quality improvements. As a consequence, many companies from the German manufacturing industry adjust their production, focusing on customized products and fast time to market. Leveraging the advantages of novel production strategies such as Agile Manufacturing and Mass Customization, manufacturing companies transform into integrated networks in which companies unite their core competencies. Here, virtualization of the process and supply chain ensures smooth inter-company operations, providing real-time access to relevant product and production information for all participating entities. Company boundaries dissolve as autonomous systems exchange data gathered by embedded systems throughout the entire value chain. By including Cyber-Physical Systems, advanced communication between machines becomes tantamount to their dialogue with humans. The increasing use of information and communication technology allows digital engineering of products and production processes alike. Modular simulation and modeling techniques allow decentralized units to flexibly alter products and thereby enable rapid product innovation. The present article describes the development of Industry 4.0 within the literature and reviews the associated research streams. To this end, we analyze eight scientific journals with regard to the following research fields: individualized production, end-to-end engineering in a virtual process chain, and production networks. We employ cluster analysis to assign sub-topics to the respective research fields. To assess the practical implications, we conducted face-to-face interviews with managers from industry as well as from the consulting business, using a structured interview guideline. The results reveal reasons for the adoption and rejection of Industry 4.0 practices from a managerial point of view. Our findings contribute to the emerging research stream on Industry 4.0 and support decision-makers in assessing their need for transformation towards Industry 4.0 practices.
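
Since the paper does not specify which clustering algorithm it uses, the sketch below shows one common way such a sub-topic assignment could be done (TF-IDF features with k-means); the article titles are invented placeholders, not items from the reviewed journals.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Illustrative sketch only: clustering article sub-topics into candidate research
# fields. The titles below are invented placeholders.
titles = [
    "Mass customization through modular product configuration",
    "Individualized production with additive manufacturing",
    "Virtual commissioning along the digital process chain",
    "End-to-end engineering with digital twins",
    "Collaborative production networks for SMEs",
    "Inter-company data exchange in supply networks",
]

X = TfidfVectorizer(stop_words="english").fit_transform(titles)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for title, label in zip(titles, labels):
    print(label, title)   # articles sharing a label form one candidate research field
```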

Keywords: Industry 4.0, mass customization, production networks, virtual process-chain

Procedia PDF Downloads 279