Search results for: open heart procedure

360 A Strengths, Weaknesses, Opportunities and Threats Analysis of Socialisation, Externalisation, Combination and Internalisation Modes in Knowledge Management Practice: A Systematic Review of Literature

Authors: Aderonke Olaitan Adesina

Abstract:

Background: The paradigm shift to knowledge, as the key to organizational innovation and competitive advantage, has made the management of knowledge resources in organizations a mandate. A key component of the knowledge management (KM) cycle is knowledge creation, which research shows to result from the interaction between explicit and tacit knowledge. An effective knowledge creation process requires the use of the right model. The SECI (Socialisation, Externalisation, Combination, and Internalisation) model, proposed in 1995, is widely regarded as the model of choice for knowledge creation activities. The model has, however, been criticized by researchers, who raise concerns especially about its sequential nature. Therefore, this paper reviews extant literature on the practical application of each mode of the SECI model, from 1995 to date, with a view to ascertaining its relevance in modern-day KM practice. The study will establish the trends of use, with regard to the location and industry of use, and the interconnectedness of the modes. The main research question is: for organizational knowledge creation activities, is the SECI model indeed linear and sequential? In other words, does the model need to be reviewed in today’s KM practice? The review will generate a compendium of the usage of the SECI modes and propose a framework of use, based on the strengths, weaknesses, opportunities, and threats (SWOT) findings of the study. Method: This study will employ the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate the usage and SWOT of the modes, in order to ascertain the success, or otherwise, of the sequential application of the modes in practice from 1995 to 2019. To achieve this purpose, four databases will be explored to search for open access, peer-reviewed articles from 1995 to 2019. The year 1995 is chosen as the baseline because it was the year the first paper on the SECI model was published. The study will appraise relevant peer-reviewed articles under the search terms SECI (or its synonym, knowledge creation theory), socialization, externalization, combination, and internalization in the title, abstract, or keywords list. This review will include only empirical studies of knowledge management initiatives in which the SECI model and its modes were used. Findings: It is expected that the study will highlight the practical relevance of each mode of the SECI model, whether or not the model is linear, and the SWOT of each mode. Concluding Statement: Organisations can, from the analysis, determine the modes to emphasise in their knowledge creation activities. It is expected that the study will support decision making in the choice of the SECI model as a strategy for the management of organizational knowledge resources, and in appropriating the SECI model, or its remodeled version, as a theoretical framework in future KM research.

Keywords: combination, externalisation, internalisation, knowledge management, SECI model, socialisation

Procedia PDF Downloads 316
359 Research on the Updating Strategy of Public Space in Small Towns in Zhejiang Province under the Background of New-Style Urbanization

Authors: Chen Yao, Wang Ke

Abstract:

Small towns are the most basic administrative units in China, connecting cities and rural areas. They play an important role in promoting local urban and rural economic development, providing the main public services, and maintaining social stability in social governance. With the vigorous development of small towns and the transformation of industrial structure, changes in social structure, spatial structure, and lifestyle are lagging behind, leaving a spatial form and landscape style that belong to neither city nor countryside and seriously affecting the quality of people’s living space and environment. The rural economy in Zhejiang Province has taken off, and its society and population are developing in relative stability. In September 2016, Zhejiang Province issued the 'Technical Guidelines for Comprehensive Environmental Remediation of Small Towns in Zhejiang Province' to implement comprehensive environmental remediation of small towns, centred on strengthening planning and design leadership and regulating environmental sanitation, urban order, and town appearance. In November 2016, Huzhou City started the comprehensive environmental improvement of its small towns, aiming to significantly improve 115 small towns within three years and to create a number of high-quality, distinctive, and beautiful towns that are 'clean and livable, rationally laid out, industrially developed, and picturesque'. This paper takes Meixi Town, Zhangwu Town, and Sanchuan Village in Huzhou City as empirical cases and analyzes small-town public space by applying the relevant theories of actor-network and space syntax. It analyzes the spatial composition of actor and social structure elements and explores the relationship between actors’ spatial practice and public open space through actor-network theory. The paper introduces the relevant theories and methods of space syntax and carries out quantitative analysis of small-town spaces for both research and design planning. It then proposes effective updating strategies for the existing problems in public space. Through planning and design at the building level, the dissonance that arises during small-town development between spatial combinations and between landscape design and urban texture can be resolved, inhabitants’ quality of life improved, and the vitality of town development increased.

Keywords: small towns, urbanization, public space, updating

Procedia PDF Downloads 204
358 Modern Detection and Description Methods for Natural Plants Recognition

Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert

Abstract:

Earth, sometimes called the green planet, is a terrestrial planet and the fifth largest planet of the solar system. Plants do not have a constant and steady distribution around the world, and even the variation of plant species is not the same within a single region. The presence of plants is not limited to one field like botany; they appear in fields such as literature and mythology, and they hold useful and inestimable historical records. No one can imagine the world without oxygen, which is produced mostly by plants. Their influence is all the more evident because no other living species could exist on earth without plants, which also form the basic food staples. Regulation of the water cycle and oxygen production are further roles of plants, and these roles affect environment and climate. Plants are the main components of agricultural activities, from which many countries benefit. Therefore, plants have an impact on the political and economic situation and the future of countries. Due to the importance of plants and their roles, the study of plants is essential in various fields, and consideration of their different applications leads to a focus on their details as well. Automatic recognition of plants is a novel field that can contribute to other research and to future studies. Moreover, plants survive in different places and regions by means of adaptations; adaptations are thus the special factors that help them through hard conditions. Weather conditions are among the parameters that affect plant life and presence in an area. Recognition of plants under different weather conditions opens a new window of research in the field. Only natural images are usable if weather conditions are to be considered as new factors; the result is a generalized and useful system. In order to have a general system, distance from the camera to the plants is considered as another factor, as is the change of light intensity in the environment over the course of the day. Adding these factors makes it a substantial challenge to build an accurate and robust system. Development of an efficient plant recognition system is therefore essential and effective. One important component of a plant is the leaf, which can be used to implement automatic plant recognition systems without any human interface or interaction. Due to the nature of the images used, a characteristic investigation of plants is carried out, and leaves are selected first as the most reliable parts. Four different plant species are specified with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and implemented system, the image dataset, and the results. The procedure of the algorithm and classification is explained in detail. The first steps, feature detection and description of visual information, are performed using the scale-invariant feature transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed. In addition to this comparison, the robustness and efficiency of the results under different conditions are investigated and explained.
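
As a rough, illustrative sketch of the detection and description step named above (not the authors' implementation), the following Python/OpenCV snippet extracts plain SIFT, HARRIS-SIFT, and FAST-SIFT keypoints and descriptors from a single leaf image; the file name and the Harris threshold are hypothetical placeholders.

```python
# Illustrative sketch: detect keypoints with SIFT, Harris, and FAST,
# then describe all of them with SIFT descriptors.
import cv2
import numpy as np

gray = cv2.imread("leaf_sample.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
assert gray is not None, "replace leaf_sample.jpg with a real image path"

sift = cv2.SIFT_create()
fast = cv2.FastFeatureDetector_create()

# Plain SIFT: detection and description in one step
kp_sift, des_sift = sift.detectAndCompute(gray, None)

# HARRIS-SIFT: Harris corner response, thresholded into keypoints, described by SIFT
harris = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
ys, xs = np.where(harris > 0.01 * harris.max())
kp_harris = [cv2.KeyPoint(float(x), float(y), 3.0) for x, y in zip(xs, ys)]
kp_harris, des_harris = sift.compute(gray, kp_harris)

# FAST-SIFT: FAST keypoints described by SIFT
kp_fast = fast.detect(gray, None)
kp_fast, des_fast = sift.compute(gray, kp_fast)

print(len(kp_sift), len(kp_harris), len(kp_fast))
```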

Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT

Procedia PDF Downloads 247
357 Determinants of Corporate Social Responsibility Adoption: Evidence from China

Authors: Jing (Claire) LI

Abstract:

The two decades of economic reform from 2000 to 2020 brought China unprecedented economic growth. There is an urgent call for research on corporate social responsibility (CSR) in the context of China because, while China continues to develop into a global trading market, it suffers from various serious problems relating to CSR. This study analyses the factors affecting the adoption of CSR practices by Chinese listed companies. The author proposes a new framework of the factors of CSR adoption. Following common organisational and external factors in the literature (including organisational support, company size, shareholder pressures, and government support), this study introduces two additional factors, dynamic capability and regional culture. A survey questionnaire on CSR adoption was administered to Chinese companies listed on the Shenzhen and Shanghai indices from December 2019 to March 2020, collecting data on the factors that affect the adoption of CSR. After data collection, this study performed factor analysis to reduce the number of measurement items to several main factors; this procedure confirms the proposed framework and identifies the significant factors. Through the analysis, this study identifies four grouped factors as determinants of CSR adoption. The first factor loading includes dynamic capability and organisational support. The study finds that they are positively related to the first factor, so the first factor mainly reflects the capabilities of companies, which is one component of the internal factors. In the second factor, the measurement items of stakeholder pressures mainly come from regulatory bodies, customers and suppliers, employees and the community, and shareholders. In sum, they are positively related to the second factor, and they reflect stakeholder pressures, which is one component of the external factors. The third factor reflects organisational characteristics; its variables include company size and cultural score. Among these variables, company size is negatively related to the third factor. The resulting factor loading of the third factor implies that the organisational factor is an important determinant of CSR adoption. Cultural consistency, the variable in the fourth factor, is positively related to that factor. It represents the difference between the perception of managers and the actual culture of the organisations in terms of cultural dimensions, which is one component of the internal factors, and it implies that regional culture is an important factor in CSR adoption. Overall, the results are consistent with previous literature. This study is significant from both theoretical and empirical perspectives. First, from a theoretical perspective, this research combines stakeholder theory, the dynamic capability view of the firm, and neo-institutional theory in CSR research. By associating these three theories, this study introduces two new factors (dynamic capability and regional culture) to build a better framework for CSR adoption. Second, this study contributes to the empirical literature on CSR in the context of China. Many Chinese companies still lack recognition of the importance of adopting CSR practices. This study builds a framework that may help companies design resource allocation strategies and evaluate future CSR and management practices at an early stage.
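
As an illustrative sketch of the factor-analysis step described above (not the authors' actual analysis), the following Python snippet reduces a set of Likert-scale survey items to four latent factors with a varimax rotation; the survey matrix here is a random placeholder standing in for the questionnaire responses.

```python
# Minimal sketch: factor analysis of survey items into a small number of factors.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(200, 12)).astype(float)  # placeholder Likert responses

X_std = StandardScaler().fit_transform(X)

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(X_std)

# Loadings: how strongly each measurement item relates to each extracted factor
loadings = fa.components_.T            # shape: (n_items, n_factors)
for i, row in enumerate(loadings):
    print(f"item {i + 1:2d}:", np.round(row, 2))
```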

Keywords: China, corporate social responsibility, CSR adoption, dynamic capability, regional culture

Procedia PDF Downloads 103
356 From Clients to Colleagues: Supporting the Professional Development of Survivor Social Work Students

Authors: Stephanie Jo Marchese

Abstract:

This oral presentation is a reflective piece regarding current social work teaching methods that value and devalue the lived experiences of survivor students. The presentation grounds the term ‘survivor’ in feminist frameworks. A survivor-defined approach to feminist advocacy assumes an individual’s agency, considers each case and its needs independent of generalizations, and provides resources and support to empower victims. Feminist ideologies are ripe arenas to update and influence the rapport-building that schools of social work have with these students. Survivor-based frameworks are rooted in nuanced understandings of intersectional realities, staunchly combat both conscious and unconscious deficit lenses wielded against victims, elevate lived experiences to the realm of experiential expertise, and offer alternatives to traditional power structures and knowledge exchanges. Actively importing a survivor framework into the methodology of social work teaching breaks open barriers many survivor students have faced in institutional settings, this author included. The profession of social work is at an important crux of change, both in the United States and globally. The United States is currently undergoing a radical change in its citizenry, and outlier communities have taken to the streets again in opposition to their othered-ness. New waves of students are entering this field, emboldened by their survival of personal and systemic oppressions and heavily influenced by third-wave feminism, critical race theory, and queer theory, among other post-structuralist ideologies. Traditional models of sociological and psychological study are actively being challenged. The profession of social work was not founded on the diagnosis of disorders but rather on grassroots-level activism that heralded and demanded resources for oppressed communities. Institutional and classroom acceptance and celebration of survivor narratives can catapult the resurgence of these values needed in the profession’s service-delivery models and put social workers back in the driver's seat of social change (a combined advocacy and policy perspective), moving away from outsider-based intervention models. Survivor students should be viewed as agents of change, not solely as former victims and clients. The ideas of this presentation proposal are supported through various qualitative interviews, as well as reviews of ‘best practices’ in the field of education that incorporate feminist methods of inclusion and empowerment. Curriculum and policy recommendations are also offered.

Keywords: deficit lens bias, empowerment theory, feminist praxis, inclusive teaching models, strengths-based approaches, social work teaching methods

Procedia PDF Downloads 269
355 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak

Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi

Abstract:

This paper investigates the software capability and the computer-aided engineering (CAE) method of modelling the transient heat transfer process that occurs in the vehicle underhood region during the thermal soak phase. The heat retained during the soak period benefits the subsequent cold start through reduced friction loss in the second 14°C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, and therefore provides benefits in both CO₂ emission reduction and fuel economy. When the vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is the key factor in the thermal simulation of the engine bay needed to obtain accurate fluid and metal temperature cool-down trajectories and to predict the temperatures at the end of the soak period. Method development was investigated in this study on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer transient modelling method for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved as inherently transient by the lattice Boltzmann method (LBM) using the PowerFlow software. This was coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behavior on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with vehicle testing data for the key fluid (coolant, oil) and metal temperatures. The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This will allow a fast-running model to be developed and embedded within a holistic study of vehicle energy modelling and thermal management. It is also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and the flow development during this stage is vital for accurately predicting the heat transfer coefficients used in heat retention modelling. The developed method demonstrates software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfactory accuracy and efficient computing time. The CAE method developed will allow the design of engine encapsulations for improving fuel consumption and reducing CO₂ emissions to be integrated in a timely and robust manner, aiding the development of low-carbon transport technologies.
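
As a rough, back-of-the-envelope illustration of the buoyancy-driven regime discussed above (not part of the authors' PowerFlow/PowerTHERM workflow), the sketch below computes a Rayleigh number and a Churchill-Chu natural-convection Nusselt number for an assumed hot vertical surface in the engine bay; all property values and dimensions are illustrative placeholders rather than values from the paper.

```python
# Illustrative estimate of buoyancy-driven (natural) convection strength
# for a hot vertical surface during soak; all inputs are assumed placeholders.
import math

g = 9.81            # m/s^2
T_surface = 90.0    # °C, assumed hot component surface
T_air = 30.0        # °C, assumed underhood air temperature
L = 0.4             # m, assumed characteristic vertical length

T_film = 0.5 * (T_surface + T_air) + 273.15   # film temperature, K
beta = 1.0 / T_film                           # ideal-gas expansion coefficient, 1/K
nu = 1.8e-5                                   # kinematic viscosity of air, m^2/s
alpha = 2.5e-5                                # thermal diffusivity of air, m^2/s
k_air = 0.028                                 # thermal conductivity of air, W/(m K)
Pr = nu / alpha

Ra = g * beta * (T_surface - T_air) * L**3 / (nu * alpha)

# Churchill-Chu correlation for a vertical plate (laminar/turbulent blend)
Nu = (0.825 + 0.387 * Ra**(1 / 6) / (1 + (0.492 / Pr)**(9 / 16))**(8 / 27))**2
h = Nu * k_air / L                            # average heat transfer coefficient

print(f"Ra = {Ra:.2e}, Nu = {Nu:.1f}, h ≈ {h:.1f} W/(m^2 K)")
```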

Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak

Procedia PDF Downloads 126
354 CybeRisk Management in Banks: An Italian Case Study

Authors: E. Cenderelli, E. Bruno, G. Iacoviello, A. Lazzini

Abstract:

The financial sector is exposed to the risk of cyber-attacks like any other industrial sector. Furthermore, the topic of CybeRisk (cyber risk) has become particularly relevant given that Information Technology (IT) attacks have increased drastically in recent years and cannot be stopped by single organizations, requiring a response at international and national levels. IT risk is never a matter purely for the IT manager, although he clearly plays a key role. A bank's risk management function requires a thorough understanding of the evolving risks as well as the tools and practical techniques available to address them. Following the requirements of European and national legislation regarding CybeRisk in the financial system, banks are therefore called upon to strengthen the operational model for CybeRisk management. This will require an important change, with more intense collaboration with the structures that deal with information security, for the development of an ad hoc system for the evaluation and control of this type of risk. The aim of the work is to propose a framework for the management and control of CybeRisk that will bridge the gap in the literature regarding the understanding and consideration of CybeRisk as an integral part of business management. The IT function has a strong relevance in the management of CybeRisk, which is perceived mainly as operational risk, but with a positive tendency on the part of risk management toward the identification of CybeRisk assessment methods that are increasingly complete, quantitative, and better able to describe the possible impacts on the business. The paper provides answers to the research questions: Is it possible to define a CybeRisk governance structure able to support the comparison between risk and security? How can the relationships between IT assets be integrated into a CybeRisk assessment framework to guarantee a system of protection and risk control? From a methodological point of view, this research uses a case study approach. The choice of Monte dei Paschi di Siena was determined by the specific features of one of Italy’s biggest lenders, and an intensive research strategy was chosen: an in-depth study of reality. The case study methodology is an empirical approach for exploring a complex and current phenomenon that develops over time. The use of cases also has the advantage of allowing a deeper examination of the "how" and "why" of contemporary events, over which the scholar has little control. The research is based on quantitative data and qualitative information obtained through semi-structured, open-ended interviews and questionnaires administered to directors, members of the audit committee, risk, IT, and compliance managers, and those responsible for the internal audit function and anti-money laundering. The added value of the paper can be seen in the development of a framework based on a mapping of IT assets from which it is possible to identify their relationships for the purpose of more effective management and control of cyber risk.

Keywords: bank, CybeRisk, information technology, risk management

Procedia PDF Downloads 212
353 Towards Visual Personality Questionnaires Based on Deep Learning and Social Media

Authors: Pau Rodriguez, Jordi Gonzalez, Josep M. Gonfaus, Xavier Roca

Abstract:

Image sharing in social networks has increased exponentially in recent years. Officially, there are 600 million Instagrammers uploading around 100 million photos and videos per day. Consequently, there is a need to develop new tools to understand the content expressed in shared images, which will greatly benefit social media communication and enable broad and promising applications in education, advertisement, entertainment, and also psychology. Following these trends, our work aims to take advantage of the relationship between text and personality, already demonstrated by multiple researchers, to show that a relationship between images and personality exists as well. To achieve this goal, we consider that images posted on social networks are typically conditioned on specific words, or hashtags, and therefore any relationship between text and personality can also be observed in those posted images. Our proposal makes use of the most recent image understanding models based on neural networks to process the vast amount of data generated by social users and determine the images most correlated with personality traits. The final aim is to train a weakly supervised image-based model for personality assessment that can be used even when textual data is not available, which is an increasing trend. The procedure is as follows: we explore the images publicly shared by users whose accompanying texts or hashtags are most strongly related to the personality traits described by the OCEAN model. These images are used for personality prediction since they have the potential to convey more complex ideas, concepts, and emotions. As a result, the use of images in personality questionnaires will provide a deeper understanding of respondents than words alone. In other words, from the images posted with specific tags, we train a deep learning model based on neural networks that learns to extract a personality representation from a picture and uses it to automatically find the personality that best explains the picture. Subsequently, a deep neural network model is learned from thousands of images associated with hashtags correlated to the OCEAN traits. We then analyze the network activations to identify the pictures that maximally activate the neurons: the most characteristic visual features per personality trait emerge, since the filters of the convolutional layers of the neural model are learned to be optimally activated depending on each personality trait. For example, among the pictures that maximally activate the high Openness trait, we see pictures of books, the moon, and the sky. For high Conscientiousness, most of the images are photographs of food, especially healthy food. The high Extraversion output is mostly activated by pictures of many people. In high Agreeableness images, we mostly see flowers. Lastly, for the Neuroticism trait, we observe that the high score is maximally activated by pet animals such as cats or dogs. In summary, despite the huge intra-class and inter-class variability of the images associated with each OCEAN trait, we found consistencies between the visual patterns of those images whose hashtags are most correlated to each trait.
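
As a minimal sketch of the kind of weakly supervised image-to-trait model described above (not the authors' architecture), the following PyTorch snippet attaches a five-output regression head, one output per OCEAN trait, to a pre-trained ResNet-18; the dataset, weak hashtag-derived labels, and hyperparameters are illustrative placeholders.

```python
# Minimal sketch: fine-tune a pre-trained CNN to regress five OCEAN trait scores
# from an image; labels are assumed to come from hashtag statistics (placeholders).
import torch
import torch.nn as nn
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # one output per OCEAN trait
model = backbone.to(device)

# Standard ImageNet preprocessing, applied when loading each shared image
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

criterion = nn.MSELoss()            # weak trait labels derived from hashtags
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, trait_scores):
    """One optimisation step on a batch of images and weak OCEAN labels."""
    model.train()
    images, trait_scores = images.to(device), trait_scores.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), trait_scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```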

Keywords: emotions and effects of mood, social impact theory in social psychology, social influence, social structure and social networks

Procedia PDF Downloads 167
352 Characteristics-Based LQ-Control of Cracking Reactor by Integral Reinforcement

Authors: Jana Abu Ahmada, Zaineb Mohamed, Ilyasse Aksikas

Abstract:

The linear quadratic control of systems of hyperbolic first-order partial differential equations (PDEs) is presented. The aim of this research is to control chemical reactions. This is achieved by converting the PDE system to ordinary differential equations (ODEs) using the method of characteristics, and then controlling the reduced system by integral reinforcement learning. The designed controller is applied to a catalytic cracking reactor. Background: Transport-reaction systems cover a large class of chemical and biochemical processes. They are best described by nonlinear PDEs derived from mass and energy balances. The main application considered in this work is the catalytic cracking reactor. Indeed, the cracking reactor is widely used to convert high-boiling, high-molecular-weight hydrocarbon fractions of petroleum crude oils into more valuable gasoline, olefinic gases, and other products. On the other hand, control of PDE systems is an important and rich area of research. One of the main control techniques is feedback control. This type of control utilizes information coming from the system to correct its trajectories and drive it to a desired state. Moreover, feedback control rejects disturbances and reduces the effect of variations in the plant parameters. Linear-quadratic control is a feedback control since the developed optimal input is expressed as feedback on the system state to exponentially stabilize and drive a linear plant to the steady state while minimizing a cost criterion. The integral reinforcement learning policy iteration technique is a strong method that solves the linear quadratic regulator problem for continuous-time systems online in real time, using only partial information about the system dynamics (i.e., the drift dynamics A of the system need not be known) and without requiring measurements of the state derivative. This is, in effect, a direct (i.e., no system identification procedure is employed) adaptive control scheme for partially unknown linear systems that converges to the optimal control solution. Contribution: The goal of this research is to develop a characteristics-based optimal controller for a class of hyperbolic PDEs and apply the developed controller to a catalytic cracking reactor model. In the first part, the development of an algorithm to control a class of hyperbolic PDE systems is investigated. The method of characteristics is employed to convert the PDE system into a system of ODEs, and the control problem is then solved along the characteristic curves. The reinforcement technique is implemented to find the state-feedback matrix. In the second part, the developed algorithm is applied to the important application of a catalytic cracking reactor. The main objective is to use the inlet fraction of gas oil as a manipulated variable to drive the process state towards desired trajectories. The outcome of this challenging research could provide a significant technological innovation for the gas and petroleum industries, since the catalytic cracking reactor is one of the most important conversion processes in petroleum refineries.
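
As a minimal, illustrative sketch of the linear quadratic regulator that integral reinforcement learning approximates online (not the paper's reactor model, whose dynamics are not reproduced here), the snippet below solves the continuous-time algebraic Riccati equation for a small placeholder system and forms the optimal state-feedback gain.

```python
# Minimal sketch: the model-based LQR baseline that integral reinforcement learning
# recovers without knowing A; the A, B, Q, R matrices are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # drift dynamics (unknown to the IRL scheme)
B = np.array([[0.0],
              [1.0]])            # input matrix
Q = np.eye(2)                    # state weighting
R = np.array([[1.0]])            # input weighting

# Algebraic Riccati equation: A'P + PA - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal state feedback: u = -Kx with K = R^{-1} B' P
K = np.linalg.solve(R, B.T @ P)
print("Riccati solution P:\n", P)
print("Feedback gain K:", K)

# Closed-loop eigenvalues should all have negative real parts (stable)
print("Closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```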

Keywords: PDEs, reinforcement iteration, method of characteristics, Riccati equation, cracking reactor

Procedia PDF Downloads 67
351 Agri-Tourism as a Sustainable Adaptation Option for Climate Change Impacts on Small Scale Agricultural Sector

Authors: Rohana Pandukabhya Mahaliyanaarachchi, Maheshwari Sangeetha Elapatha, Mohamed Esham, Banagala Chathurika Maduwanthi

Abstract:

Global climate change has become one of the imperative issues for the smallholder-dominated agricultural sector and the nature-based tourism sector in Sri Lanka. Thus, addressing this issue is notably important. The main objective of this study was to investigate the potential of agri-tourism as a sustainable adaptation option to mitigate some of the negative impacts of climate change on the small-scale agricultural sector in Sri Lanka. The study was carried out in two different climatic zones in Sri Lanka, namely the Low Country Dry Zone and the Up Country Wet Zone. A case study strategy, with structured and unstructured interviews through cross-sectional surveys, was adopted to collect data. The study revealed that there had been a significant change in climate with regard to rainfall patterns in both climatic zones, resulting in unexpected rains in some months and longer drought periods. This results in damage to agricultural production, low yields, and subsequently low income. However, to mitigate these adverse effects, farmers have mainly focused on strategies related to crops and farming patterns rather than diversifying their business by adopting other entrepreneurial activities like agri-tourism. One major reason for this was limited awareness of the concept of agri-tourism within the farming community. The study revealed that the respondents of both climatic zones have the willingness and the potential to adopt agri-tourism. One key factor identified was that farming or agriculture was the main livelihood of the respondents, which is one of the vital preconditions for starting an agri-tourism enterprise. Most of the farmers in the Up Country Wet Zone had an inclination to start a farm guest house or a farm home stay, whereas the farmers in the Low Country Dry Zone wished to operate a farm guest house, farm home stay, or farm restaurant. They also had an interest in opening a roadside farm product stall to facilitate direct sales from the farm. The majority of farmers in both climatic zones showed an interest in initiating an agri-tourism business as a complementary enterprise, in which they wished to give an equal share to both farming and agri-tourism. This reveals that the farmers have identified agri-tourism as a vital concept and have given it importance equal to that of farming, and that most farmers understand agri-tourism as an alternative income source that can mitigate the adverse effects of climate change. This study therefore emphasizes agri-tourism as an alternative income source that can mitigate the adverse effects of climate change on the small-scale agricultural sector.

Keywords: adaptation, agri-tourism, climate change, small scale agriculture

Procedia PDF Downloads 132
350 Environmental Literacy of Teacher Educators in Colleges of Teacher Education in Israel

Authors: Tzipi Eshet

Abstract:

The importance of environmental education as part of a national strategy to promote the environment is recognized around the world. Lecturers at colleges of teacher education have considerable responsibility, directly and indirectly, for the environmental literacy of students who will end up teaching in the school system. This study examined whether lecturers in colleges of teacher education and teacher training in Israel are able and willing to develop environmental literacy among their students. Capability and readiness are assessed by evaluating the level of the dimensions of environmental literacy, which include knowledge of environmental issues, attitudes related to the environmental agenda, and 'green' patterns of behavior in everyday life. The survey included 230 lecturers from 22 state colleges coming from various sectors (secular, religious, and Arab), from different academic fields, and from different personal backgrounds. Firstly, the results show that the higher the commitment to environmental issues, the lower the satisfaction with the current situation. In general, the respondents show positive environmental attitudes in all categories examined; they feel that they can personally influence the responsible environmental behavior of others and are able to internalize environmental education in schools and colleges, and they report positive environmental behavior. There are no significant differences between teachers of different background characteristics when it comes to behavior patterns that generate personal income (e.g., returning bottles for deposit). Women show more responsible environmental behavior than men. Jewish lecturers, in most categories, show more responsible behavior than Druze and Arab lecturers; however, with regard to attitudes, Arab and Druze lecturers have a stronger sense of their ability to influence the environmental agenda. The knowledge test, which included 15 questions, was mostly based on basic environmental issues; the average score was an adequate 83.6. Science lecturers' environmental literacy is significantly higher than that of the other lecturers. The larger the environmental knowledge base, the more environmental the attitudes and the greater the felt responsibility toward the environment. It can be concluded from the research findings that knowledge is a fundamental basis for developing environmental literacy. Environmental knowledge has a positive effect on the development of environmental commitment, which is reflected in attitudes and behavior. This conclusion probably also holds for the general public. Hence, expanding environmental knowledge among the general public, and among teacher educators in particular, is of great importance. From the open questions in the survey, it is evident that most of the lecturers are interested in the subject and understand the need to integrate environmental issues into the colleges, either directly by teaching courses on the environment or indirectly by integrating environmental issues into different subjects, as well as by asking students to set an example (such as avoiding unnecessary printing and keeping the environment clean). The curriculum at colleges should include a variety of options for developing and enhancing the environmental literacy of student teachers, but first there must be a focus on bringing their teachers to a high literacy level so they can meet the difficult and important task they face.

Keywords: colleges of teacher education, environmental literacy, environmental education, teacher's teachers

Procedia PDF Downloads 255
349 Determination of Slope of Hilly Terrain by Using Proposed Method of Resolution of Forces

Authors: Reshma Raskar-Phule, Makarand Landge, Saurabh Singh, Vijay Singh, Jash Saparia, Shivam Tripathi

Abstract:

For any construction project, slope calculations are necessary in order to evaluate constructability on the site, such as the slope of parking lots, sidewalks, and ramps, the slope of sanitary sewer lines, and the slope of roads and highways. When slopes and grades are to be determined, designers are concerned with establishing proper slopes and grades for their projects in order to assess cut and fill volume calculations and determine the inverts of pipes. There are several established instruments commonly used to determine slopes, such as the dumpy level, Abney level or hand level, inclinometer, and tacheometer, as well as the Henry method, and surveyors are very familiar with their use for calculating slopes. However, these have drawbacks that cannot be neglected in major surveying works. Firstly, they require expert surveyors and skilled staff. Accessibility, visibility, and accommodation in remote hilly terrain are difficult for these instruments and surveying teams. Also, determining gentle slopes with these instruments for road and sewer drainage construction in congested urban places is not easy. This paper aims to develop a method that requires minimal fieldwork and instrumentation, no high-end technology or software, and low cost. Using basic and handy surveying accessories, a plane table with a fixed weighing machine, standard weights, an alidade, a tripod, and ranging rods, the method should be able to determine the terrain slope in congested areas as well as in remote hilly terrain. Also, being simple and easy to understand and perform, the method can readily be taught to the people of the local rural area. The idea of the proposed method is based on the principle of resolution of weight components. When an object of standard weight ‘W’ is placed on an inclined surface with a weighing machine below it, the weighing machine measures only the cosine component of that weight (W cos θ), so the slope can be determined from the relation between the true or actual weight and the apparent weight. A proper procedure is to be followed, which includes site location, centering and sighting work, fixing the whole set at the identified station, and finally taking the readings. A set of experiments for determining mild and moderate slopes was carried out by the proposed method and by a theodolite, in a controlled environment on the college campus and in an uncontrolled environment on an actual site. The slopes determined by the proposed method were compared with those determined by the established instruments. For example, it was observed that for the same distances on a mild slope in the uncontrolled environment, the difference between the slope obtained by the proposed method and by the established method ranges from 4′ for a distance of 8 m to 2°15′20″ for a distance of 16 m. Thus, for mild slopes, the proposed method is suitable for distances of 8 m to 10 m. The proposed method shows a good correlation of 0.91 to 0.99 with the established method for the various combinations of mild and moderate slopes in the controlled and uncontrolled environments.
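
A minimal sketch of the weight-resolution principle described above: the inclined weighing machine reads the apparent weight W cos θ, so θ = arccos(apparent/true). The readings in the example are hypothetical and only illustrate the calculation, not the paper's field data.

```python
# Slope angle and gradient from the true and apparent weights of a standard weight.
import math

def slope_from_weights(true_weight, apparent_weight):
    """Return slope angle in degrees and gradient (%) from true/apparent weight."""
    ratio = apparent_weight / true_weight
    if not 0.0 < ratio <= 1.0:
        raise ValueError("apparent weight must be positive and not exceed true weight")
    theta = math.degrees(math.acos(ratio))
    gradient_percent = math.tan(math.radians(theta)) * 100.0
    return theta, gradient_percent

# Example with illustrative readings: a 5.000 kg standard weight reads 4.980 kg
angle, grade = slope_from_weights(5.000, 4.980)
print(f"slope ≈ {angle:.2f} degrees ({grade:.1f} % grade)")
```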

Keywords: surveying, plane table, weight component, slope determination, hilly terrain, construction

Procedia PDF Downloads 61
348 Accelerating Malaysian Technology Startups: Case Study of Malaysian Technology Development Corporation as the Innovator

Authors: Norhalim Yunus, Mohamad Husaini Dahalan, Nor Halina Ghazali

Abstract:

Building technology start-ups from ground zero into world-class companies in form and substance presents a rare opportunity for government-affiliated institutions in Malaysia. The challenge of building such start-ups becomes tougher when their core businesses involve the commercialization of unproven technologies for the mass market. These simple truths, while difficult to execute, go a long way in getting a business off the ground and flying high. Malaysian Technology Development Corporation (MTDC), a company founded to facilitate the commercial exploitation of R&D findings from research institutions and universities and eventually help translate these findings into applications in the marketplace, is an excellent case in point. The purpose of this paper is to examine MTDC as an institution as it explores the concept of ‘it takes a village to raise a child’ in an effort to create and nurture start-ups into established world-class Malaysian technology companies. With MTDC at the centre of Malaysia's innovative start-ups, the analysis seeks to answer two specific questions: How has the concept been applied in MTDC? And what can we learn from this successful case? A key aim is to elucidate how MTDC's journey as a private limited company can help leverage reforms and achieve transformation, a process that might be suitable for other small, open, third-world and developing countries. This paper employs a single case study, designed to acquire an in-depth understanding of how MTDC has developed and grown technology start-ups into world-class technology companies. The case study methodology is employed because the focus is on a contemporary phenomenon within a real business context; it also explains causal links in real-life situations that a single survey or experiment would be unable to unearth. The findings show that MTDC applies the concept of ‘it takes a village to raise a child’ in its totality, as MTDC itself assumes the role of the innovator to 'raise' start-up companies to world-class stature. As the innovator, MTDC creates shared value and leadership, introduces innovative programmes ahead of the curve, mobilises talent for optimum results, and aggregates knowledge for personnel advancement. The success of the company's efforts is attributed largely to leadership, vision, adaptability, commitment to innovation, partnership and networking, and entrepreneurial drive. The findings of this paper are, however, limited by the single case study of MTDC. Future research is required to study more cases of success and/or failure where the concept of ‘it takes a village to raise a child’ has been explored and applied.

Keywords: start-ups, technology transfer, commercialization, technology incubator

Procedia PDF Downloads 123
347 Towards an Environmental Knowledge System in Water Management

Authors: Mareike Dornhoefer, Madjid Fathi

Abstract:

Water supply and water quality are key problems for mankind at the moment and, due to the increasing population, in the future. Management disciplines like water, environment, and quality management therefore need to interact closely to establish a high level of water quality and to guarantee water supply in all parts of the world. Groundwater remediation is one aspect of this process. From a knowledge management perspective, complex ecological or environmental problems can only be solved if the different factors, the expert knowledge of various stakeholders, and the formal regulations regarding water, waste, or chemical management are interconnected in the form of a knowledge base. In general, knowledge management focuses on the processes of gathering and representing existing and new knowledge in a way that allows knowledge to be inferred or deduced, e.g., for a situation where a problem solution or decision support is required. A knowledge base is not a mere data repository but a key element in a knowledge-based system, providing or allowing for inference mechanisms to deduce further knowledge from existing facts. In consequence, this knowledge provides decision support. The given paper introduces an environmental knowledge system in water management. The proposed environmental knowledge system is part of a research concept called Green Knowledge Management. It applies semantic technologies and concepts such as ontologies or linked open data to interconnect different data and information sources about environmental aspects, in this case water quality, as well as background material enriching an established knowledge base. Examples of the aforementioned ecological or environmental factors threatening water quality are, among others, industrial pollution (e.g., leakage of chemicals), environmental changes (e.g., rise in temperature), or floods, where all kinds of waste are merged and transferred into natural water environments. Water quality is usually determined by measuring different indicators (e.g., chemical or biological), which are gathered with the help of laboratory testing, continuous monitoring equipment, or other measuring processes. During all of these processes, data are gathered and stored in different databases. The knowledge base then needs to be established by interconnecting the data of these different sources and enriching their semantics. Experts may add their knowledge or experiences of previous incidents or influencing factors. Querying and inference mechanisms are then applied to deduce coherence between indicators, predictive developments, or environmental threats. Relevant processes or steps of action may be modelled in the form of a rule-based approach. Overall, the environmental knowledge system supports the interconnection of information and the addition of semantics to create environmental knowledge about the water environment, supply chain, and quality. The proposed concept itself is a holistic approach, which links to associated disciplines like environmental and quality management. Quality indicators and quality management steps need to be considered, e.g., for the process and inference layers of the environmental knowledge system, thus integrating the aforementioned management disciplines in one water management application.
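
As a minimal, illustrative sketch of the knowledge-base idea described above (not the authors' system), the following Python snippet uses rdflib to link a measurement with semantics and query it; the namespace, classes, and the 0.5 mg/l nitrate threshold are assumptions introduced only for the example.

```python
# Minimal sketch: store semantically annotated water-quality facts and query them.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/water#")   # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

# Facts: one monitoring station and one chemical indicator measurement
g.add((EX.station42, RDF.type, EX.MonitoringStation))
g.add((EX.m1, RDF.type, EX.Measurement))
g.add((EX.m1, EX.atStation, EX.station42))
g.add((EX.m1, EX.indicator, Literal("nitrate")))
g.add((EX.m1, EX.valueMgPerL, Literal(0.8, datatype=XSD.double)))

# Rule-like query: which stations report nitrate above an assumed 0.5 mg/l limit?
results = g.query("""
    PREFIX ex: <http://example.org/water#>
    SELECT ?station ?value WHERE {
        ?m ex:atStation ?station ;
           ex:indicator "nitrate" ;
           ex:valueMgPerL ?value .
        FILTER (?value > 0.5)
    }
""")
for station, value in results:
    print(f"threshold exceeded at {station}: {value} mg/l")
```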

Keywords: water quality, environmental knowledge system, green knowledge management, semantic technologies, quality management

Procedia PDF Downloads 199
346 Analysis of Shrinkage Effect during Mercerization on Himalayan Nettle, Cotton and Cotton/Nettle Yarn Blends

Authors: Reena Aggarwal, Neha Kestwal

Abstract:

The Himalayan nettle (Girardinia diversifolia) has been used for centuries as a fibre and food source by Himalayan communities. Himalayan nettle is a natural cellulosic fibre that can be handled in the same way as other cellulosic fibres. The Uttarakhand Bamboo and Fibre Development Board, based in Uttarakhand, India, is working extensively with the nettle fibre to explore the potential of nettle for textile production in the region. The fibre is a potential resource for rural enterprise development in some high-altitude pockets of the state, and traditionally the plant fibre is used for making domestic products like ropes and sacks. Himalayan nettle is an unconventional natural fibre with functional characteristics of shrink resistance and a degree of pathogen and fire resistance, and it blends nicely with other fibres. Most importantly, it generates mainly organic waste and leaves residues that are 100% biodegradable. The fabrics may potentially be reused or re-manufactured and can also be used as a source of cellulose feedstock for regenerated cellulosic products. Being naturally biodegradable, the fibre can be composted if required. Though a lot of research activity and training on fibre extraction and processing techniques, such as retting and degumming, is directed at villagers in different craft clusters of Uttarkashi, Chamoli, and Bageshwar in Uttarakhand, very little has been done to analyse crucial properties of nettle fibre like shrinkage and wash fastness. These properties are crucial to obtaining the desired quality of fibre for further processing in yarn making and weaving and for developing these fibres into fine saleable products. This research is therefore focused on field experiments on the shrinkage properties of cotton, nettle, and cotton/nettle blended yarn samples. The objective of the study was to analyse the scope of the blended fibre for development into wearable fabrics. For the study, after the initial fibre length and fineness testing, cotton and nettle fibres were mixed in a 60:40 ratio and five varieties of yarn were spun in an open-end spinning mill with yarn counts of 3s, 5s, 6s, 7s, and 8s. Samples of 100% nettle and 100% cotton in 8s count were also developed for the study. All six yarn varieties were subjected to the shrinkage test, and the results were critically analysed as per ASTM method D2259. It was observed that 100% nettle has the least shrinkage, 3.36%, while pure cotton shrinks by approximately 13.6%; yarn made of 100% cotton thus exhibits about four times the shrinkage of 100% nettle. The results also show that cotton and nettle blended yarn exhibits lower shrinkage than 100% cotton yarn. It was thus concluded that as the proportion of nettle in the samples increases, the shrinkage decreases. These results are crucial for the people of Uttarakhand who want to commercially exploit the abundant nettle fibre to generate sustainable employment.
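
A minimal sketch of the shrinkage calculation underlying yarn shrinkage testing of the kind reported above (shrinkage % = (original length - length after treatment) / original length x 100); the skein lengths below are hypothetical placeholders chosen only so the outputs roughly match the reported 3.36% and 13.6% values.

```python
# Shrinkage percentage for a set of yarn samples; lengths are illustrative.
def shrinkage_percent(original_length, treated_length):
    return (original_length - treated_length) / original_length * 100.0

samples = {
    "100% cotton 8s": (100.0, 86.40),        # hypothetical skein lengths, cm
    "60:40 cotton/nettle 8s": (100.0, 91.50),
    "100% nettle 8s": (100.0, 96.64),
}

for name, (before, after) in samples.items():
    print(f"{name}: {shrinkage_percent(before, after):.2f} % shrinkage")
```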

Keywords: Himalayan nettle, sustainable, shrinkage, blending

Procedia PDF Downloads 213
345 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death in the world; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain together with changes in the ST segment and T wave of the ECG occurs shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting with symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). The 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. Using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features for its detection are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using joint ST-T-derived features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters with a grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain optimal classification performance. Applying the developed classification technique to real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination threshold values and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that increasing the number of ECG segments provides higher performance for GMM-based classification. Moreover, the comparison between the performance of SVM- and GMM-based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
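
As a minimal, illustrative sketch of the two classification stages described above (run here on synthetic placeholder features, not the authors' ECG dataset), the snippet below tunes a linear/RBF SVM by grid search with 10-fold cross-validation and then fits a GMM whose average log-likelihood is compared against a threshold, Neyman-Pearson style, to flag outlier segments; the feature dimensions, class labels, and the operating-point quantile are assumptions for the example.

```python
# Minimal sketch: SVM with grid-searched hyperparameters, plus a GMM
# log-likelihood threshold test on placeholder ST-T feature vectors.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(300, 6))    # stand-in non-ischemic features
X_ischemic = rng.normal(1.5, 1.2, size=(300, 6))  # stand-in ischemic features
X = np.vstack([X_normal, X_ischemic])
y = np.array([0] * 300 + [1] * 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Stage 1: SVM with grid-searched hyperparameters and 10-fold cross-validation
grid = GridSearchCV(
    SVC(),
    param_grid={"kernel": ["linear", "rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
    cv=10,
)
grid.fit(X_tr, y_tr)
print("best SVM params:", grid.best_params_, "test accuracy:", grid.score(X_te, y_te))

# Stage 2: GMM fitted on one reference feature set; segments whose average
# log-likelihood falls below a chosen threshold are flagged as outliers.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_ischemic)
threshold = np.quantile(gmm.score_samples(X_ischemic), 0.05)  # illustrative operating point
avg_loglik = gmm.score_samples(X_te)
flagged = avg_loglik < threshold
print("segments flagged as outliers:", int(flagged.sum()))
```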

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 125
344 Exploring the Physical Activity Behavior and Needs of Adolescent Girls: A Mixed-Methods Study

Authors: Vicki R. Voskuil, Jorgie M. Watson

Abstract:

Despite the well-established health benefits of physical activity (PA), most adolescents do not meet guidelines recommending 60 minutes of moderate to vigorous physical activity (MVPA) each day. Adolescent girls engage in less PA than boys, a difference that increases with age. By the 9th grade, only 20% of girls report meeting recommendations for PA with lower percentages for black and Hispanic girls compared to white girls. The purpose of the study was to explore the physical activity (PA) behavior and needs of adolescent girls. Study aims included assessment of adolescent girls’ PA behavior; facilitators of and barriers to PA, PA needs, and acceptability of the Fitbit-Flex 2 activity tracker. This exploratory study used a qualitative and quantitative approach. The qualitative approach involved a focus group using a semi-structured interview technique. PA was measured using the Fitbit-Flex 2 activity tracker. Steps, distance, and active minutes were recorded for one week. A Fitbit survey was also administered to assess acceptability. SPSS Version 22.0 and ATLAS.ti Version 8 were used to analyze data. Girls in the ninth grade were recruited from a high school in the Midwest (n=11). Girls were excluded if they were involved in sports or other organized PA ≥ 3 days per week, had a health condition that prevented or limited PA, or could not read and write English. Participants received a Fitbit-Flex 2 activity tracker to wear for one week. At the end of the week, girls returned the Fitbit and participated in a focus group. Girls responded to open-ended questions regarding their PA behavior and shared their ideas for future intervention efforts aimed at increasing PA among adolescents. Girls completed a survey assessing their perceptions of the Fitbit. Mean age of the girls was 15.3 years (SD=0.44). On average girls took 6,520 steps and walked 2.73 miles each day. Girls stated their favorite types of PA were walking, riding bike, and running. Most girls stated they did PA for 30 minutes or more at a time once a day or every other day. The top 3 facilitators of PA reported by girls were friends, family, and transportation. The top 3 barriers included health issues, lack of motivation, and weather. Top intervention ideas were community service projects, camps, and using a Fitbit activity tracker. Girls felt the best timing of a PA program would be in the summer. Fitbit survey results showed 100% of girls would use a Fitbit on most days if they had one. Ten (91%) girls wore the Fitbit on all days. Seven (64%) girls used the Fitbit app and all reported they liked it. Findings indicate that PA participation for this sample is consistent with previous studies. Adolescent girls are not meeting recommended daily guidelines for PA. Fitbit activity trackers were positively received by all participants and could be used in future interventions aimed at increasing PA for adolescent girls. PA interventions that take place in the summer with friends and include community service projects may increase PA and be well received by this population.

Keywords: adolescents, girls, interventions, physical activity

Procedia PDF Downloads 206
343 The Community Stakeholders’ Perspectives on Sexual Health Education for Young Adolescents in Western New York, USA: A Qualitative Descriptive Study

Authors: Sadandaula Rose Muheriwa Matemba, Alexander Glazier, Natalie M. LeBlanc

Abstract:

In the United States, up to 10% of girls and 22% of boys aged 10-14 years have had sex, 5% of them had their first sex before age 11, and the age of first sexual encounter is reported to be 8 years. Over 4,000 adolescent girls ages 10-14 become pregnant every year, and 2.6% of the abortions in 2019 were among adolescents below 15 years. Despite these negative outcomes, little research has been conducted to understand the sexual health education offered to young adolescents ages 10-14. Early sexual health education is one of the most effective strategies to help lower the rate of early pregnancies, HIV infections, and other sexually transmitted infections. Such knowledge is necessary to inform best practices for supporting the healthy sexual development of young adolescents and prevent adverse outcomes. This qualitative descriptive study was conducted to explore community stakeholders’ experiences in sexual health education for young adolescents ages 10-14 and to ascertain the young adolescents’ sexual health support needs. Maximum variation purposive sampling was used to recruit a total sample of 13 community stakeholders, including health education teachers, members of youth-based organizations, and Adolescent Clinic providers in Rochester, New York, United States of America, from April to June 2022. Data were collected through semi-structured individual in-depth interviews and were analyzed using MAXQDA following a conventional content analysis approach. Triangulation, team analysis, and respondent validation were employed to enhance study rigor. The participants were predominantly female (92.3%) and comprised Caucasians (53.8%), Black/African Americans (38.5%), and Indian-Americans (7.7%), with ages ranging from 23 to 59. Four themes emerged: the perceived need for early sexual health education, preferred timing to initiate sexual health conversations, perceived age-appropriate content for young adolescents, and initiating sexual health conversations with young adolescents. The participants described both encouraging and concerning experiences. Most participants were concerned that young adolescents are living in a sexually driven environment and are not given the sexual health education they need, even though they are open to learning sexual health materials. There was consensus on the need to initiate sexual health conversations early, at 4 years or younger, to standardize sexual health education in schools, and to make age-appropriate sexual health education progressive. These results show that early sexual health education is essential if young adolescents are to delay sexual debut and prevent early pregnancies, and if the goal of ending the HIV epidemic is to be achieved. However, research is needed on a larger scale to understand how best to implement sexual health education among young adolescents and to inform interventions for implementing contextually relevant sexuality education for this population. These findings call for increased multidisciplinary efforts in promoting early sexual health education for young adolescents.

Keywords: community stakeholders’ perspectives, sexual development, sexual health education, young adolescents

Procedia PDF Downloads 56
342 Tailoring Quantum Oscillations of Excitonic Schrodinger’s Cats as Qubits

Authors: Amit Bhunia, Mohit Kumar Singh, Maryam Al Huwayz, Mohamed Henini, Shouvik Datta

Abstract:

We report [https://arxiv.org/abs/2107.13518] the experimental detection and control of a Schrodinger’s Cat like, macroscopically large, quantum coherent state of a two-component Bose-Einstein condensate of spatially indirect electron-hole pairs, or excitons, using a resonant tunneling diode of III-V semiconductors. This provides access to millions of excitons as qubits to allow efficient, fault-tolerant quantum computation. In this work, we measure phase-coherent periodic oscillations in photo-generated capacitance as a function of applied voltage bias and light intensity over a macroscopically large area. The periodic presence and absence of splitting of excitonic peaks in the optical spectra measured by photocapacitance point towards tunneling-induced variations in the capacitive coupling between the quantum well and quantum dots. Observation of negative ‘quantum capacitance’ due to screening of charge carriers by the quantum well indicates Coulomb correlations of interacting excitons in the plane of the sample. We also establish that coherent resonant tunneling in this well-dot heterostructure restricts the available momentum space of the charge carriers within the quantum well. Consequently, the electric polarization vector of the associated indirect excitons collectively orients along the direction of the applied bias, and these excitons undergo Bose-Einstein condensation below ~100 K. Generation of interference beats in the photocapacitance oscillation even with incoherent white light further confirms the presence of stable, long-range spatial correlation among these indirect excitons. We finally demonstrate collective Rabi oscillations of these macroscopically large, ‘multipartite’, two-level, coupled and uncoupled quantum states of the excitonic condensate as qubits. Therefore, our study not only brings the physics and technology of Bose-Einstein condensation within the reach of semiconductor chips but also opens up experimental investigations of the fundamentals of quantum physics using similar techniques. Operational temperatures of such two-component excitonic BECs can be raised further with a more densely packed, ordered array of QDs and/or by using materials having larger excitonic binding energies. However, fabrication of single crystals of 0D-2D heterostructures using 2D materials (e.g. transition metal dichalcogenides, oxides, perovskites, etc.) having higher excitonic binding energies is still an open challenge for semiconductor optoelectronics. As of now, these 0D-2D heterostructures can already be scaled up for mass production of miniaturized, portable quantum optoelectronic devices using existing III-V and/or nitride-based semiconductor fabrication technologies.

Keywords: exciton, Bose-Einstein condensation, quantum computation, heterostructures, semiconductor physics, quantum fluids, Schrodinger's Cat

Procedia PDF Downloads 163
341 The Effects of the New Silk Road Initiatives and the Eurasian Union to the East-Central-Europe’s East Opening Policies

Authors: Tamas Dani

Abstract:

The author’s research explores the geo-economic role and importance of some small and medium-sized states and reviews their adaptation strategies in foreign trade and foreign affairs during the transition to a multipolar world, drawing on an international background. With these, the paper analyses the recent years and the future of the ‘Opening towards the East’ foreign economic policies of East-Central Europe and, in parallel, the ‘Western-oriented’ foreign economic policies of Asia, such as the Chinese One Belt One Road new silk route plans (so far largely an infrastructural development programme serving international trade and investment aims). Whether these ideas will reshape global trade is an open question today. How do the new silk road initiatives and the Eurasian Union reflect the effects of globalization? It is worth analysing how Central and Eastern European countries have opened towards Asia, why China is the focus of the opening policies of many countries, and why China could be seen as the ‘winner’ of the world economic crisis after 2008. The research is based on the following methodologies: national and international literature, policy documents and related planning documents, complemented by the processing of international databases and statistics and by interviews with company leaders, public administrators, diplomats and international traders from East-Central European countries. The results are also illustrated by maps and graphs. As its major finding, the research will establish whether state decision-makers have enough room for manoeuvre to strengthen foreign economic relations. The work's hypothesis is that countries in East-Central Europe have a real chance to diversify their foreign trade relations and look beyond their traditional partners. The essay focuses on the opportunities of East-Central European countries to diversify foreign trade relations towards China and Russia in terms of ‘Eastern Openings’, and on the effects of the new silk road initiatives and the Eurasian Union on Hungary’s economy, with a comparative outlook on other East-Central European countries and an exploration of common regional cooperation opportunities in this area. It concentrates on the changing trade relations between East-Central Europe and China as well as Russia, and also tries to analyse the effects of the new silk road initiatives and the Eurasian Union. The conclusion shows why cooperation is necessary for the East-Central European countries if they want non-asymmetric trade with Russia, China or certain Chinese regions (Pearl River Delta, Hainan, …). The forms of cooperation available to the East-Central European nations include the Visegrad 4 Cooperation (V4), the Central and Eastern European Countries format (CEEC16), and the 3 SEAS Cooperation (or BABS, the Baltic, Adriatic, Black Seas Initiative).

Keywords: China, East-Central Europe, foreign trade relations, geoeconomics, geopolitics, Russia

Procedia PDF Downloads 153
340 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and comparing it to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance holds as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI) problem. We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
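As a rough illustration of the quantile computation described above, the following sketch evaluates the distribution of the minimum of jointly Gaussian GIC values through a multivariate normal integral, with SciPy standing in for the R package "mvtnorm" mentioned in the abstract; the mean vector and covariance matrix are hypothetical placeholders, not quantities estimated in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq

# Hypothetical asymptotic joint law of the GIC values of K = 4 candidate models.
mu = np.array([10.0, 10.5, 11.2, 12.0])          # asymptotic means of GIC_1..GIC_K
Sigma = 0.5 * np.eye(4) + 0.5 * np.ones((4, 4))  # positive-definite covariance

def cdf_of_minimum(t, mu, Sigma):
    """P(min_k GIC_k <= t) = 1 - P(GIC_k > t for all k), via a multivariate normal integral."""
    neg = multivariate_normal(mean=-mu, cov=Sigma)   # law of -GIC
    return 1.0 - neg.cdf(np.full(len(mu), -t))

def upper_quantile_of_minimum(level, mu, Sigma):
    """Smallest t with P(min GIC <= t) >= level, found by root bracketing."""
    lo = mu.min() - 10 * np.sqrt(Sigma.max())
    hi = mu.max() + 10 * np.sqrt(Sigma.max())
    return brentq(lambda t: cdf_of_minimum(t, mu, Sigma) - level, lo, hi)

q95 = upper_quantile_of_minimum(0.95, mu, Sigma)
print(f"95% upper quantile of the minimum GIC: {q95:.3f}")

# Monte Carlo check of the same quantity (the slower, bootstrap-like alternative).
draws = np.random.default_rng(1).multivariate_normal(mu, Sigma, size=200_000)
print("Monte Carlo estimate:", np.quantile(draws.min(axis=1), 0.95))
```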

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 63
339 Making the Neighbourhood: Analyzing Mapping Procedures to Deal with Plurality and Conflict

Authors: Barbara Roosen, Oswald Devisch

Abstract:

Spatial projects are often contested. Despite participatory trajectories in official spatial development processes, citizens often engage through their power to say no. Participatory mapping helps to produce more legible and democratic ways of decision-making. It has proven its value in producing a multitude of knowledges and views, enabling individuals, community groups and local stakeholders to imagine desired and undesired futures and giving them the rhetorical power to present their views throughout the development process. From this perspective, mapping works as a social process in which individuals and groups share their knowledge, learn from each other and negotiate their relationships with each other as well as with space and power. In this way, these processes eventually aim to activate communities to intervene cooperatively in real problems. However, these are fragile and bumpy processes, sometimes leading to (local) conflict and intractable situations. Heterogeneous subjectivities and knowledge that become visible during the mapping process, and which are contested by members of the community, are often the first trigger. This paper discusses a participatory mapping project conducted in a residential subdivision in Flanders to provide a deeper understanding of how, or under which conditions, the mapping process can moderate discordant situations amongst inhabitants, local organisations and local authorities towards a more constructive outcome. In our opinion, this implies thorough documentation and presentation of the different steps of the mapping process in order to design and moderate an open and transparent dialogue. The mapping project ‘Make the Neighbourhood’ was set up in the aftermath of a socio-spatial design intervention in the neighbourhood that led to polarization within the community. To start negotiation between the diverse claims that came to the fore, we co-created a desired future map of the neighbourhood together with local organisations and inhabitants as a way to engage them in the development of a new spatial development plan for the area. This mapping initiative set up a new ‘common’ goal, or concern, as a first step to bridge the gap that we experienced between different sociocultural groups, between bottom-up and top-down initiatives, and between professionals and non-professionals. An atlas of elements (materials), an atlas of actors with different roles, and an atlas of ways of cooperation and organisation form the working and building material of the future neighbourhood map, assembled in two co-creation sessions. Firstly, we will consider how the mapping procedures articulate the plurality of claims and agendas. Secondly, we will elaborate upon how social relations and spatialities are negotiated and reproduced during the different steps of the map making. Thirdly, we will reflect on the role of the rules, format, and structure of the mapping process in moderating negotiations between deeply divided claims. To conclude, we will discuss the challenges of visualizing the different steps of the mapping process as a strategy to moderate tense negotiations in a more constructive direction in the context of spatial development processes.

Keywords: conflict, documentation, participatory mapping, residential subdivision

Procedia PDF Downloads 180
338 Role of Lipid-Lowering Treatment in the Monocyte Phenotype and Chemokine Receptor Levels after Acute Myocardial Infarction

Authors: Carolina N. França, Jônatas B. do Amaral, Maria C.O. Izar, Ighor L. Teixeira, Francisco A. Fonseca

Abstract:

Introduction: Atherosclerosis is a progressive disease characterized by the deposition of lipid and fibrotic elements in large-caliber arteries. Conditions related to the development of atherosclerosis, such as dyslipidemia, hypertension, diabetes, and smoking, are associated with endothelial dysfunction. Cardiovascular outcomes frequently recur after acute myocardial infarction and, in this sense, cycles of mobilization of monocyte subtypes (classical, intermediate and nonclassical) secondary to myocardial infarction may determine the colonization of atherosclerotic plaques at different stages of development, contributing to early recurrence of ischemic events. The recruitment of different monocyte subsets during the inflammatory process requires the expression of the chemokine receptors CCR2, CCR5, and CX3CR1 to promote the migration of monocytes to the inflammatory site. The aim of this study was to evaluate the effect of six months of lipid-lowering treatment on the monocyte phenotype and chemokine receptor levels of patients after acute myocardial infarction (AMI). Methods: This is a PROBE (prospective, randomized, open-label trial with blinded endpoints) study (ClinicalTrials.gov Identifier: NCT02428374). Adult patients (n=147) of both genders, aged 18-75 years, were randomized in a 2x2 factorial design to treatment with rosuvastatin 20 mg/day or simvastatin 40 mg/day plus ezetimibe 10 mg/day, and ticagrelor 90 mg twice daily or clopidogrel 75 mg/day, in addition to conventional AMI therapy. Blood samples were collected at baseline and after one month and six months of treatment. Monocyte subtypes (classical/inflammatory, intermediate/phagocytic, and nonclassical/anti-inflammatory) were identified, quantified and characterized by flow cytometry, and the expression of the chemokine receptors (CCR2, CCR5 and CX3CR1) was also evaluated in the mononuclear cells. Results: After six months of treatment, there was an increase in the percentage of classical monocytes and a reduction in nonclassical monocytes (p=0.038 and p < 0.0001, Friedman test), with no differences for intermediate monocytes. In addition, classical monocytes had higher expression of CCR5 and CX3CR1 after treatment, without differences related to CCR2 (p < 0.0001 for CCR5 and CX3CR1; p=0.175 for CCR2). Intermediate monocytes had higher expression of CCR5 and CX3CR1 and lower expression of CCR2 (p = 0.003, p < 0.0001 and p = 0.011, respectively). Nonclassical monocytes had lower expression of CCR2 and CCR5, without differences for CX3CR1 (p < 0.0001, p = 0.009 and p = 0.138, respectively). There were no differences between the four treatment arms. Conclusion: The data suggest a time-dependent modulation of classical and nonclassical monocytes and chemokine receptor levels. The higher percentage of classical monocytes (inflammatory cells) suggests a residual inflammatory risk, even under the treatments recommended for AMI. Indeed, these changes do not seem to be affected by the choice of lipid-lowering strategy.

Keywords: acute myocardial infarction, chemokine receptors, lipid-lowering treatment, monocyte subtypes

Procedia PDF Downloads 95
337 Sculpted Forms and Sensitive Spaces: Walking through the Underground in Naples

Authors: Chiara Barone

Abstract:

In Naples, the visible architecture is only what emerges from the underground. Caves and tunnels cross the city in every direction, intertwining with each other. They are not natural caves but spaces built by removing what is superfluous in order to dig a form out of the material. Architects, as sculptors of space, do not determine the exterior, what surrounds the volume and in which the forms live, but an interior underground space, perceptive and sensitive, able to generate new emotions each time. It is an intracorporeal architecture linked to the body, not in its external relationships, but rather in what happens inside. The proposed work aims to reflect on the design of underground spaces in the Neapolitan city. The idea is to treat the underground as a spectacular museum of the city, an opportunity to learn the history of the place in situ along an unpredictable itinerary that crosses the caves and, at certain points, emerges, escaping from the world of shadows. Starting from the analysis and study of the many overlapping elements (the archaeological layer, the geological layer and the contemporary city above), it is possible to develop realistic alternatives for underground itineraries. The objective is to define minor paths that ensure continuity between the touristic flows and entire underground segments already investigated but now disconnected: open-air paths which plunge into the earth, retracing historical and preserved fragments. In this way, the visitor passes from real spaces to sensitive spaces, in which the imaginary replaces real experience, moving towards exciting and secret knowledge. To safeguard the complex framework of historical-artistic values, it is essential to use a multidisciplinary methodology based on a global approach. Moreover, it is essential to refer to similar design projects for the archaeological underground, capable of guiding action strategies, looking at comparable conditions in other cities where such projects have led to an enhancement of the heritage in the city. The research limits the field of investigation by choosing the historic center of Naples, applying bibliographic and theoretical research to a real place. First of all, it is necessary to deepen knowledge of the places, understanding the potential of the project as a link between what is below and what is above. Starting from a scientific approach in which theory and practice are constantly intertwined through the architectural project, the major contribution is to provide possible alternative configurations for the underground space and its relationship with the city above, understanding how the condition of transition, as a passage between the below and the above, becomes structuring in the design process. Starting from the consideration of the underground as both a real physical place and a sensitive place, which engages the memory, imagination, and sensitivity of man, the research aims to identify possible configurations and actions useful for future urban programs to make the underground a central part of the lived city again.

Keywords: underground paths, invisible ruins, imaginary, sculpted forms, sensitive spaces, Naples

Procedia PDF Downloads 78
336 Solar Liquid Desiccant Regenerator for Two Stage KCOOH Based Fresh Air Dehumidifier

Authors: M. V. Rane, Tareke Tekia

Abstract:

Liquid desiccant based fresh air dehumidifiers can be gainfully deployed for air-conditioning, agro-produce drying and many industrial processes. Regeneration of the liquid desiccant can be done using direct firing, high temperature waste heat or solar energy. Solar energy is clean and available in abundance; however, it is costly to collect. A two stage liquid desiccant fresh air dehumidification system can offer a coefficient of performance (COP) in the range of 1.6 to 2 for comfort air conditioning applications. A high COP helps reduce the size and cost of the collectors required. Performance tests on the high temperature regenerator of a two stage liquid desiccant fresh air dehumidifier coupled with a seasonally tracked, flat-plate-like solar collector will be presented in this paper. The two stage fresh air dehumidifier has four major components: High Temperature Regenerator (HTR), Low Temperature Regenerator (LTR), High and Low Temperature Solution Heat Exchangers and Fresh Air Dehumidifier (FAD). This open system can operate at near atmospheric pressure in all the components. These systems can be simple, maintenance-free and scalable. Environmentally benign, non-corrosive, moderately priced potassium formate, KCOOH, is used as the liquid desiccant. Typical KCOOH concentration in the system is expected to vary between 65 and 75%. Dilute liquid desiccant at 65% concentration exiting the fresh air dehumidifier will be pumped and preheated in the solution heat exchangers before entering the high temperature solar regenerator. In the solar collector, the solution will be regenerated to an intermediate concentration of 70%. Steam and saturated solution exiting the solar collector array will be separated. Steam at near atmospheric pressure will then be used to regenerate the intermediate concentration solution up to a concentration of 75% in the low temperature regenerator, where the vaporized moisture will be released into the atmosphere. Condensed steam can be used as potable water after adding a pinch of salt and some nutrients. The warm concentrated liquid desiccant will be routed to the solution heat exchangers to recycle its heat and preheat the weak liquid desiccant solution. An evacuated glass tube based, seasonally tracked solar collector is used for regeneration of the liquid desiccant at high temperature. The regeneration temperature for KCOOH is 133°C at 70% concentration. The medium temperature collector was designed for a temperature range of 100 to 150°C. A double-wall polycarbonate top cover helps reduce top losses. Absorber-integrated heat storage helps stabilize the temperature of the liquid desiccant exiting the collectors during intermittent cloudy conditions and extends the operation of the system by a couple of hours beyond the sunshine hours. The solar collector is light in weight, 12 kg/m² without the absorber-integrated heat storage material and 27 kg/m² with it. The cost of the collector is estimated to be 10,000 INR/m². Theoretical modeling of the collector has shown that the optical efficiency is 62%. Performance tests of the regeneration of KCOOH will be reported.
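To make the two-stage concentration change concrete, here is a simple mass-balance sketch (a back-of-the-envelope calculation, not one of the reported performance tests); it assumes only that the dissolved KCOOH mass is conserved while water is driven off, and the concentrations are the nominal values quoted above.

```python
# Water removed per kilogram of weak KCOOH solution across the two regeneration stages.
m_in, x_in = 1.0, 0.65          # kg of weak solution entering the HTR, mass fraction KCOOH
x_mid, x_out = 0.70, 0.75       # concentration after the solar HTR and after the LTR

m_salt = m_in * x_in            # kg of KCOOH, unchanged by regeneration
m_mid = m_salt / x_mid          # solution mass leaving the HTR
m_out = m_salt / x_out          # solution mass leaving the LTR

steam_htr = m_in - m_mid        # water vaporized as steam in the solar regenerator
vapor_ltr = m_mid - m_out       # water released to the atmosphere in the LTR

print(f"HTR steam: {steam_htr:.3f} kg, LTR vapor: {vapor_ltr:.3f} kg per kg of weak solution")
# About 0.071 kg and 0.062 kg, i.e. roughly 0.13 kg of water removed per kg of 65% solution.
```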

Keywords: solar, liquid desiccant, dehumidification, air conditioning, regeneration

Procedia PDF Downloads 327
335 Dynamic EEG Desynchronization in Response to Vicarious Pain

Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy

Abstract:

The psychological construct of empathy involves understanding another person’s cognitive perspective and experiencing that person’s emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. The study addresses empathy as a nonlinear dynamic process of simulation through which individuals understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability to resonate permutated empathy. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and the sensorimotor system is idle; thus, mu rhythm synchrony is expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or simulating action, mu rhythms become suppressed or desynchronize; thus, they should be suppressed while observing video clips of painful injuries if previous research on mirror system activation holds. Twelve undergraduates contributed EEG data and survey responses to empathy and psychopathy scales in addition to watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the ‘pain matrix’. Physical and social pain both activate this network, resonating vicarious pain responses during the processing of empathy. Five single-electrode EEG locations were applied to regions measuring sensorimotor electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Data for each participant’s mu rhythms were analyzed via the Fast Fourier Transform (FFT) and multifractal time series analysis.
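A small sketch of how mu-band (8-13 Hz) power and its suppression relative to baseline could be quantified from 200 Hz recordings, assuming SciPy's Welch estimator and synthetic signals in place of the actual F3/F4 data; the event-related desynchronization percentage used here is a standard convention adopted for illustration, not a formula quoted from the study.

```python
import numpy as np
from scipy.signal import welch

fs = 200                                   # sampling rate reported in the study (Hz)
rng = np.random.default_rng(0)

def mu_band_power(eeg, fs=200, band=(8, 13)):
    """Mean power spectral density in the mu band (8-13 Hz) via Welch's method."""
    f, pxx = welch(eeg, fs=fs, nperseg=fs * 2)      # 2 s windows -> 0.5 Hz resolution
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

# Synthetic stand-ins for one electrode (e.g. F3): a 10 Hz rhythm plus noise at
# baseline, attenuated during observation of the injury clips (illustrative only).
t = np.arange(0, 60, 1 / fs)
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
observation = 0.8 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

p_base, p_obs = mu_band_power(baseline), mu_band_power(observation)
erd_percent = 100 * (p_base - p_obs) / p_base       # positive value = desynchronization
print(f"mu power baseline={p_base:.2f}, observation={p_obs:.2f}, ERD={erd_percent:.1f}%")
```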

Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition

Procedia PDF Downloads 261
334 DEKA-1 a Dose-Finding Phase 1 Trial: Observing Safety and Biomarkers using DK210 (EGFR) for Inoperable Locally Advanced and/or Metastatic EGFR+ Tumors with Progressive Disease Failing Systemic Therapy

Authors: Spira A., Marabelle A., Kientop D., Moser E., Mumm J.

Abstract:

Background: Both interleukin-2 (IL-2) and interleukin-10 (IL-10) have been extensively studied for their stimulatory function on T cells and their potential to obtain sustainable tumor control in RCC, melanoma, lung, and pancreatic cancer as monotherapy, as well as in combination with PD-1 blockers, radiation, and chemotherapy. While approved, IL-2 retains significant toxicity, preventing its widespread use. The significant efforts undertaken to uncouple IL-2 toxicity from its anti-tumor function have been unsuccessful, and the early-phase clinical safety observed with PEGylated IL-10 was not reproduced in a blinded Phase 3 trial. Deka Biosciences has engineered a novel molecule coupling wild-type IL-2 to a high-affinity variant of Epstein-Barr virus (EBV) IL-10 via a scaffold (scFv) that binds to epidermal growth factor receptors (EGFR). This patented molecule, termed DK210 (EGFR), is retained at high levels within the tumor microenvironment for days after dosing. In addition to overlapping and non-redundant anti-tumor functions, IL-10 reduces IL-2 mediated cytokine release syndrome risks and inhibits IL-2 mediated T regulatory cell proliferation. Methods: DK210 (EGFR) is being evaluated in an open-label, dose-escalation (Phase 1) study with five monotherapy dose levels (0.025-0.3 mg/kg) and, in expansion cohorts, in combination with PD-1 blockers, radiation, or chemotherapy in patients with advanced solid tumors overexpressing EGFR. Key eligibility criteria include 1) confirmed progressive disease on at least one line of systemic treatment, 2) EGFR overexpression or amplification documented in histology reports, 3) at least a 4-week or 5-half-life window since the last treatment, and 4) exclusion of subjects with long QT syndrome, multiple myeloma, multiple sclerosis, myasthenia gravis or uncontrolled infectious, psychiatric, neurologic, or cancer disease. Plasma and tissue samples will be investigated for pharmacodynamic and predictive biomarkers and genetic signatures associated with IFN-gamma secretion, aiming to select subjects for treatment in Phase 2. Conclusion: Through the successful coupling of wild-type IL-2 with a high-affinity IL-10 and targeting directly to the tumor microenvironment, DK210 (EGFR) has the potential to harness IL-2 and IL-10’s known anti-cancer promise while reducing immunogenicity and toxicity risks, enabling safe concomitant cytokine treatment with other anti-cancer modalities.

Keywords: cytokine, EGFR overexpression, interleukin-2, interleukin-10, clinical trial

Procedia PDF Downloads 57
333 A Short Dermatoscopy Training Increases Diagnostic Performance in Medical Students

Authors: Magdalena Chrabąszcz, Teresa Wolniewicz, Cezary Maciejewski, Joanna Czuwara

Abstract:

BACKGROUND: Dermoscopy is a clinical tool known to improve the early detection of melanoma and other malignancies of the skin. Over the past few years, melanoma has grown into a disease of socio-economic importance due to its increasing incidence and persistently high mortality rates. Early diagnosis remains the best method to reduce melanoma and non-melanoma skin cancer related mortality and morbidity. Dermoscopy is a noninvasive technique that consists of viewing pigmented skin lesions through a hand-held lens. This simple procedure increases melanoma diagnostic accuracy by up to 35%. Dermoscopy is currently the standard for the clinical differential diagnosis of cutaneous melanoma and for qualifying lesions for excision biopsy. Like any clinical tool, training is required for effective use. The introduction of small and handy dermoscopes contributed significantly to establishing dermatoscopy as a useful first-level tool. Non-dermatologist physicians are well positioned for opportunistic melanoma detection; however, education in the skin cancer examination is limited during medical school and traditionally lecture-based. AIM: The aim of this randomized study was to determine whether adding dermoscopy to the standard fourth-year medical curriculum improves the ability of medical students to distinguish between benign and malignant lesions, and to assess acceptability of and satisfaction with the intervention. METHODS: We performed a prospective study in two cohorts of fourth-year medical students at the Medical University of Warsaw. Groups taking the dermatology course were randomly assigned to cohort A, with limited access to dermatoscopy through their teacher only (one dermatoscope per 15 people), or cohort B, with full access to dermatoscopy during their clinical classes (one dermatoscope per 4 people, constantly available) plus a 15-minute dermoscopy tutorial. Students in both study arms completed an image-based test of 10 lesions assessing their ability to differentiate benign from malignant lesions, and a post-intervention survey collecting minimal background information, attitudes about the skin cancer examination, and course satisfaction. RESULTS: Cohort B had higher scores than cohort A in the recognition of nonmelanocytic (P < 0.05) and melanocytic (P < 0.05) lesions. Medical students who had the opportunity to use a dermatoscope themselves also reported higher satisfaction after the dermatology course than the group with limited access to this diagnostic tool. Moreover, according to our results, they were more motivated to learn dermatoscopy and to use it in their future everyday clinical practice. LIMITATIONS: The number of participants was limited. Further study of the application in clinical practice is still needed. CONCLUSION: Although the use of the dermatoscope within dermatology as a specialty is widely accepted, sufficiently validated clinical tools for the examination of potentially malignant skin lesions are lacking in general practice. Introducing medical students to dermoscopy in the fourth-year curriculum of medical school may improve their ability to differentiate benign from malignant lesions. It can also encourage students to use dermatoscopy in their future practice, which can significantly improve early recognition of malignant lesions and thus decrease melanoma mortality.

Keywords: dermatoscopy, early detection of melanoma, medical education, skin cancer

Procedia PDF Downloads 96
332 Efficacy of Sparganium stoloniferum–Derived Compound in the Treatment of Acne Vulgaris: A Pilot Study

Authors: Wanvipa Thongborisute, Punyaphat Sirithanabadeekul, Pichit Suvanprakorn, Anan Jiraviroon

Abstract:

Background: Acne vulgaris is one of the most common dermatologic problems and can have a significant psychological and physical effect on patients. The roles of Propionibacterium acnes in acne vulgaris involve activation of the toll-like receptor 4 (TLR4) and toll-like receptor 2 (TLR2) pathways. Activation of these pathways can trigger the inflammatory events of acne lesions, comedogenesis and sebaceous lipogenesis. Currently, several topical agents commonly used in treating acne vulgaris are known to have an effect on TLRs, such as retinoic acid and adapalene, but these drugs still have some irritating effects. At present, there is an alarming increase in the rate of bacterial resistance due to the irrational use of antibiotics, both oral and topical. For this reason, acne treatments should contain bioactive molecules targeting the site of action for the most effective therapeutic effect with the fewest side effects. Sparganium stoloniferum is a Chinese aquatic herb containing a compound called Sparstolonin B (SsnB), which has been reported to selectively block toll-like receptor 2 (TLR2)- and toll-like receptor 4 (TLR4)-mediated inflammatory signals. Therefore, this topical TLR2 and TLR4 antagonist, in the form of a Sparganium stoloniferum-derived compound containing SsnB, should be of benefit in reducing the inflammation of acne vulgaris lesions and provide an alternative treatment for patients with this condition. Materials and Methods: The objective of this randomized, double-blinded, split-face, placebo-controlled trial was to study the safety and efficacy of the Sparganium stoloniferum-derived compound. 32 volunteer patients with mild to moderate acne vulgaris according to the global acne grading system were included in the study. After being informed and giving consent, the subjects were given two topical treatments for acne vulgaris, one being a topical 2.40% Sparganium stoloniferum extract (containing Sparstolonin B) and the other a placebo. The subjects were asked to apply each treatment to one half of the face, assigned by randomization, every morning and night for 8 weeks, and to come in for weekly follow-up. At each visit, the patients underwent lesion counting, including comedones, papules, nodules, pustules, and cystic lesions. Results: Over the 8 weeks of the study, the difference in total lesion number between the placebo and the treatment side became statistically significant starting at week 4, when the 95% confidence intervals no longer overlapped, and the two sides continued to diverge thereafter. The decrease in total lesions between week 0 and week 8 on the placebo side was not statistically significant (P > 0.05), while the decrease in total acne vulgaris lesions on the treatment side between week 0 and week 8 was statistically significant (P < 0.001). Conclusion: The data demonstrate that the 2.40% Sparganium stoloniferum extract (containing Sparstolonin B) is more effective than topical placebo in treating acne vulgaris, showing a significant reduction in the total number of acne lesions. Therefore, this topical Sparganium stoloniferum extract could become a potential alternative treatment for acne vulgaris.

Keywords: acne vulgaris, Sparganium stoloniferum, Sparstolonin B, toll-like receptor 2, toll-like receptor 4

Procedia PDF Downloads 157
331 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes of tire manufacturing, which consist of mixing, component preparation, building and curing, the mixing process is an essential and important step because the main component of the tire, called the compound, is formed at this step. The compound, a rubber synthesis with various characteristics, plays its own role in the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP) because various kinds of compounds have their own unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this feature, called sequence dependent setup time (SDST), is a very important issue in traditional scheduling problems such as the FJSSP. However, despite its importance, there exist few research works dealing with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research work. As a performance measure, we define an error rate that evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes such as building and curing. We can also extend our current work by considering other performance measures such as weighted makespan or processing times affected by aging or learning effects.
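A deliberately small sketch of a random-key particle encoding and its decoding into a schedule with sequence-dependent setup times, followed by a plain global-best PSO loop minimizing makespan; the toy compound/machine data, the constant setup-time rule, and the PSO coefficients are illustrative assumptions, not the instance or parameter settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy flexible-job-shop instance (illustrative, not real compound data):
# jobs[j] is a list of operations; each operation maps machine -> processing time.
jobs = [
    [{0: 4, 1: 6}, {1: 3, 2: 5}],          # compound 0: two mixing operations
    [{0: 5, 2: 4}, {0: 6, 1: 4}],          # compound 1
    [{1: 5, 2: 3}, {0: 4, 2: 6}],          # compound 2
]
n_machines = 3
# Sequence-dependent setup time on a machine when job b follows job a (constant here).
setup = lambda a, b: 0.0 if a is None or a == b else 2.0

n_ops = sum(len(j) for j in jobs)
dim = 2 * n_ops                             # keys for sequencing + keys for machine choice

def decode(x):
    """Turn a continuous particle into a schedule and return its makespan."""
    seq_keys, mach_keys = x[:n_ops], x[n_ops:]
    flat = [(j, o) for j, job in enumerate(jobs) for o in range(len(job))]
    key_index = {op: k for k, op in enumerate(flat)}
    next_op = [0] * len(jobs)               # next unscheduled operation of each job
    job_ready = [0.0] * len(jobs)           # completion time of each job so far
    mach_ready = [0.0] * n_machines
    mach_last_job = [None] * n_machines
    for _ in range(n_ops):
        ready = [(j, next_op[j]) for j in range(len(jobs)) if next_op[j] < len(jobs[j])]
        j, o = min(ready, key=lambda op: seq_keys[key_index[op]])        # priority rule
        machines = list(jobs[j][o].keys())
        m = machines[int(mach_keys[key_index[(j, o)]] * len(machines)) % len(machines)]
        start = max(job_ready[j], mach_ready[m] + setup(mach_last_job[m], j))
        finish = start + jobs[j][o][m]
        job_ready[j], mach_ready[m], mach_last_job[m] = finish, finish, j
        next_op[j] += 1
    return max(job_ready)

# Plain global-best PSO over the random-key encoding.
n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.random((n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), np.array([decode(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.array([decode(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best makespan found:", pbest_val.min())
```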

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 240