Search results for: computing paradigm
258 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improvements in productivity, handling the increasing time and cost pressure and the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies, and no practicable framework exists that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists; this link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step of ‘target definition’ describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail.
The second step of ‘analysis of the value chain’ verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptations regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry show that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.
Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain
Procedia PDF Downloads 313
257 Fully Eulerian Finite Element Methodology for the Numerical Modeling of the Dynamics of Heart Valves
Authors: Aymen Laadhari
Abstract:
During the last decade, an increasing number of contributions have been made in the fields of scientific computing and numerical methodologies applied to the study of the hemodynamics in the heart. In contrast, the numerical aspects concerning the interaction of pulsatile blood flow with highly deformable thin leaflets have been much less explored. This coupled problem remains extremely challenging, and numerical difficulties include, e.g., the resolution of the full fluid-structure interaction problem with large deformations of extremely thin leaflets, substantial mesh deformations, high transvalvular pressure discontinuities, and contact between leaflets. Although the Lagrangian description of the structural motion and strain measures is naturally used, many numerical complexities can arise when studying large deformations of thin structures. Eulerian approaches represent a promising alternative to readily model large deformations and handle contact issues. We present a fully Eulerian finite element methodology tailored for the simulation of pulsatile blood flow in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets. Our method enables the use of a fluid solver on a fixed mesh, whilst being able to easily model the mechanical properties of the valve. We introduce a semi-implicit time integration scheme based on a consistent Newton-Raphson linearization. A variant of the classical Newton method is introduced and guarantees third-order convergence. High-fidelity computational geometries are built and simulations are performed under physiological conditions. We address in detail the main features of the proposed method, and we report several experiments with the aim of illustrating its accuracy and efficiency.
Keywords: Eulerian, level set, Newton, valve
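The abstract mentions a Newton variant with third-order convergence but does not specify it. One classical third-order scheme is Halley's method, sketched below on a scalar equation purely as a generic illustration; the paper's actual variant applies to the linearized fluid-structure system, not a scalar root-finding problem:

```python
# Generic third-order (cubically convergent) root-finding iteration:
# Halley's method, shown as an illustration of third-order convergence.
# This is NOT the paper's scheme, which is not detailed in the abstract.
def halley(f, df, d2f, x, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 starting from x, using first and second derivatives."""
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: solve x^2 = 2; the iterate count is far lower than for bisection.
root = halley(lambda x: x * x - 2, lambda x: 2 * x, lambda x: 2.0, x=1.0)
```

Compared with plain Newton-Raphson (second order), the extra second-derivative term roughly halves the number of iterations needed for tight tolerances, which is the practical appeal of third-order schemes in expensive implicit solvers.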
Procedia PDF Downloads 278
256 Semiotics of the New Commercial Music Paradigm
Authors: Mladen Milicevic
Abstract:
This presentation will address how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it will deal with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most of the popular music (i.e., music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the “older” music is still being digitized. Once the music gets turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, because they also exist as a digital text file to which any kind of NCapture-style analysis may be applied. So, by employing statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from its audio features. Thus, it has been established that a successful pop song must include: 100 bpm or more; an 8-second intro; use of the pronoun 'you' within 20 seconds of the start of the song; a bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; an average of 7 repetitions of the title; and the creation of some expectation that is then fulfilled in the title.
For the country song: 100 bpm or less for a male artist; a 14-second intro; use of the pronoun 'you' within the first 20 seconds of the intro; a bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and an expectation that is created and fulfilled within 60 seconds. This approach to commercial popular music minimizes the human influence when it comes to which “artist” a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of the record labels made personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analyzing programs are replacing them in an attempt to minimize the investment risk of the panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers' taste through the narrow bottleneck of the above-mentioned music selection by the record labels has created some very peculiar effects, not only on the taste of popular music consumers but also on the creative chops of the music artists. The meaning of this semiological shift is the main focus of this research and paper presentation.
Keywords: music, semiology, commercial, taste
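The pop-song thresholds quoted above can be sketched as a simple rule checker; the thresholds come from the abstract, but the function, its parameter names, and the idea of counting satisfied rules as a score are illustrative assumptions, not Polyphonic HMI's actual model:

```python
# Illustrative sketch of the hit-song heuristics described above.
# Thresholds are from the abstract; the scoring scheme is an assumption
# for demonstration, not Polyphonic HMI's proprietary algorithm.
def pop_hit_score(tempo_bpm, intro_sec, first_you_sec,
                  bridge_start_sec, title_repetitions):
    """Count how many of the pop-song heuristics a track satisfies."""
    checks = [
        tempo_bpm >= 100,                 # 100 bpm or more
        intro_sec <= 8,                   # 8-second intro
        first_you_sec <= 20,              # 'you' within 20 s of the start
        120 <= bridge_start_sec <= 150,   # bridge between 2:00 and 2:30
        title_repetitions >= 7,           # average 7 title repetitions
    ]
    return sum(checks)

# Hypothetical track that meets all five rules:
score = pop_hit_score(tempo_bpm=104, intro_sec=7, first_you_sec=12,
                      bridge_start_sec=135, title_repetitions=8)
```

A real system would of course weight these features and learn thresholds from audio data rather than apply them as hard rules, but the sketch makes the abstract's checklist concrete.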
Procedia PDF Downloads 393
255 A Paradigm Shift in the Cost of Illness of Type 2 Diabetes Mellitus over a Decade in South India: A Prevalence Based Study
Authors: Usha S. Adiga, Sachidanada Adiga
Abstract:
Introduction: Diabetes Mellitus (DM) is one of the most common non-communicable diseases and imposes a large economic burden on the global health-care system. Cost of illness studies in India have assessed the health care cost of DM but have certain limitations due to lack of standardization of the methods used, improper documentation of data, lack of follow-up, etc. The objective of the study was to estimate the cost of illness of uncomplicated versus complicated type 2 diabetes mellitus in Coastal Karnataka, India. The study also aimed to find out the trend of the cost of illness of the disease over a decade. Methodology: A prevalence-based, bottom-up approach study was carried out in two tertiary care hospitals located in Coastal Karnataka after ethical approval. Direct medical costs, such as annual laboratory costs, pharmacy costs, consultation charges, hospital bed charges, and surgical/intervention costs, of 238 and 340 diabetic patients respectively from the two hospitals were obtained from the medical record sections. Patients were divided into six groups: uncomplicated diabetes, diabetic retinopathy (DR), nephropathy (DN), neuropathy (DNeu), diabetic foot (DF), and ischemic heart disease (IHD). Different costs incurred in 2008 and 2017 in these groups were compared to study the trend of the cost of illness. The Kruskal-Wallis test followed by Dunn's test was used to compare median costs between the groups, and Spearman's correlation test was used for correlation studies. Results: Uncomplicated patients had significantly lower costs (p < 0.0001) compared to other groups. Patients with IHD had the highest medical expenses (p < 0.0001), followed by DN and DF (p < 0.0001). Annual medical costs incurred were 1.8, 2.76, 2.77, 1.76, and 4.34 times higher in retinopathy, nephropathy, diabetic foot, neuropathy and IHD patients, respectively, compared to the cost incurred in managing uncomplicated diabetics. Other costs showed a similar rising pattern.
A positive correlation was observed between the costs incurred and the duration of diabetes, and a negative correlation between glycemic status and cost incurred. The cost incurred in the management of DM in 2017 was found to be 1.4 to 2.7 times higher than in 2008. Conclusion: It is evident from the study that the economic burden due to diabetes mellitus is substantial. It poses a significant financial burden on the healthcare system, the individual and society as a whole. There is a need for strategies to achieve optimal glycemic control and to operationalize regular and early screening for complications so as to reduce the burden of the disease.
Keywords: cost of illness (COI), diabetes mellitus, bottom-up approach, economics
Procedia PDF Downloads 116
254 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries
Authors: Gaurav Kumar Sinha
Abstract:
In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. 
The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.
Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency
Procedia PDF Downloads 64
253 Conditional Relation between Migration, Demographic Shift and Human Development in India
Authors: Rakesh Mishra, Rajni Singh, Mukunda Upadhyay
Abstract:
Since the last few decades, the prima facie of development has shifted towards the working population in India. There has been a paradigm shift in the development approach with the realization that the present demographic dividend has to be harnessed for sustainable development. Rapid urbanization and improved socioeconomic characteristics experienced within its territory have catalyzed various forms of migration into it, resulting in a massive transference of workforce between its states. The workforce in any country plays a very crucial role in deciding the development of both the place from which people have out-migrated and the place where they currently reside. In India, people are found to be migrating from relatively less developed states to well urbanized and developed states to satisfy their needs. Linking migration to HDI at the place of destination, the regression coefficient (β̂) shows an affirmative association between them: the higher the HDI of a place, the higher the chance of earning, and hence the more likely migrants are to choose that place as a new destination, and vice versa. The push factor, however, is compromised by the cost of rearing, which exerts a negative impulse on in-migrants, lowering their numbers in the metro cities or megacities of the states but increasing their mobility to the suburban areas, and vice versa. The main objective of the study is to check the role of migration in deciding the dividend of the place of destination as well as of people at the place of their usual residence, with special focus on highly urban states in India. The idealized scenario of Indian migrants points to some new theories in the making. On analyzing the demographic dividend of the places, we found that Uttar Pradesh provides the maximum dividend to Maharashtra, West Bengal and Delhi, and the demographic dividend of migrants is quite comparable to the natives' share in the demographic dividend in these places.
On analyzing data from the National Sample Survey 64th round and the Census of India 2001, we observed that for males in rural areas, the share of unemployed persons declined by 9 percentage points (from 45% before migration to 36% after migration), and for females in rural areas the decline was nearly 12 percentage points (from 79% before migration to 67% after migration). It has been observed that the shares of unemployed males in both rural and urban areas, which were significant before migration, were reduced after migration, while the share of unemployed females in rural as well as urban areas remained almost negligible both before and after migration. Thus, the increase in the number of employed after migration provides an indication of changes in the associated cofactors, like health and education, at the place of destination and, arithmetically, at the place from which they migrated. This paper presents evidence on the patterns of prevailing migration dynamics and corresponding demographic benefits in India and its states, examines trends and effects, and discusses plausible explanations.
Keywords: migration, demographic shift, human development index, multilevel analysis
Procedia PDF Downloads 388
252 Products in Early Development Phases: Ecological Classification and Evaluation Using an Interval Arithmetic Based Calculation Approach
Authors: Helen L. Hein, Joachim Schwarte
Abstract:
As a pillar of sustainable development, ecology has become an important milestone in the research community, especially due to global challenges like climate change. The ecological performance of products can be scientifically assessed with life cycle assessments. In the construction sector, significant amounts of CO2 emissions are assigned to the energy used for building heating purposes. Therefore, sustainable construction materials for insulating purposes are substantial, and aerogels have been explored intensively in recent years due to their low thermal conductivity. The WALL-ACE project therefore aims to develop an aerogel-based thermal insulating plaster with particularly low thermal conductivity. But as, in early development phases, a lot of information is still missing or not yet accessible, the ecological performance of innovative products is increasingly based on uncertain data that can lead to significant deviations in the results. To be able to predict realistically how meaningful the results are and how viable the developed products may be with regard to their respective markets, these deviations have to be considered. Therefore, a classification method is presented in this study, which may allow comparing the ecological performance of modern products with already established and competitive materials. In order to achieve this, an alternative calculation method was used that allows computing with lower and upper bounds to consider all possible values without precise data. The life cycle analysis of the considered products was conducted with an interval arithmetic based calculation method. The results lead to the conclusion that the interval solutions describing the possible environmental impacts are so wide that the usability of the results is limited.
Nevertheless, further optimization in reducing the environmental impacts of aerogels seems to be needed to make them more competitive in the future.
Keywords: aerogel-based, insulating material, early development phase, interval arithmetic
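The interval-arithmetic idea behind the calculation method described above can be sketched as follows; the operations are standard interval arithmetic, but the example quantities and impact factors are invented for illustration and are not the project's actual LCA data:

```python
# Minimal interval-arithmetic sketch: each uncertain quantity is carried
# as a (lower, upper) pair, so every result encloses all possible values.
# The example masses and emission factors below are hypothetical.
def iv_add(a, b):
    """Interval addition: add lower bounds and upper bounds."""
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    """Interval multiplication: bounds are the min/max of all products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# Hypothetical example: CO2 impact = mass x emission factor, two components.
mass_plaster = (90.0, 110.0)   # kg, uncertain dosage
ef_plaster = (0.10, 0.20)      # kg CO2-eq per kg, uncertain dataset
mass_aerogel = (4.0, 6.0)
ef_aerogel = (3.0, 5.0)

impact = iv_add(iv_mul(mass_plaster, ef_plaster),
                iv_mul(mass_aerogel, ef_aerogel))
# 'impact' encloses every combination of the uncertain inputs, which is
# exactly why the resulting intervals can become very wide.
```

The width of the resulting interval grows with every uncertain factor, which illustrates the abstract's conclusion that the interval solutions can become too wide to be usable.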
Procedia PDF Downloads 143
251 Redeeming the Self: Settling Scores with the Nazis by the Means of Poetics
Authors: Liliane Steiner
Abstract:
Beyond the testimonial act, which sheds light on the feminine experience in the Holocaust, the survivors' writing voices first and foremost the abjection of the feminine self brutally inflicted by the Nazis in the Holocaust and, in the same movement, redeems the self by the means of poetics and brings it to an existential state of being a subject. This study aims to stress the poetics of this writing in order to promote Holocaust literature from the margins to the mainstream and to contribute to the commemoration of the Holocaust in the next generations. Methodology: The study of the survivors' redeeming of the self is based on Julia Kristeva's theory of the abject (the self throws out everything that threatens its existence) and Liliane Steiner's theory of the post-abjection of hell (the belated act of vomiting the abject experiences settles scores with the author of the abject to redeem the self). The research focuses on Ruth Sender's trilogy The Cage, To Life and The Holocaust Lady as a case study. Findings: The binary mode that characterizes this writing reflects the experience of Jewish women, who were subjects, were treated violently as objects, debased, defeminized and eventually turned into abject by the Nazis. In a tour de force, this writing re-enacts the postponed resistance that vomited the abject imposed on the feminine self by the very act of narration, which denounces the real abject, the perpetrators. The post-abjection of the self is acted out in constructs of abject, relating the abject experience of the Holocaust as well as the rehabilitation of the surviving self (subject). The transcription of abject surfaces in deconstructing the abject through self-characterization and in the elusive rendering of bad memories, having recourse to literary figures. The narrative 'I' selects, obstructs, mends and tells the past events from an active standpoint, as would a subject in control of its (narrative) fate.
In a compensatory movement, the narrating I tells itself by reconstructing the subject and proving time and again that I is other. Moreover, in the belated endeavor to avenge, testify and narrate the abject, the narrative I defies itself and represents itself as a dialectical I, splitting and multiplying itself in a deconstructing way. The dialectical I is never (one) I. It voices not only the unvoiced but also, and mainly, the other silenced 'I's. Drawing its nature and construct from traumatic memories, the dialectical I transgresses boundaries to narrate her story and, in the same breath, the story of Jewish women doomed to silence. In this narrative feat, the dialectical I stresses its essential dialectical existence with the past, never to be (one) again. Conclusion: The pattern of I is other generates patterns of subject(s) that defy, transgress and repudiate the abject and its repercussions on the feminine I. The feminine I writes itself as a survivor that defies the abject (Nazis) and takes revenge. The paradigm of metamorphosis that accompanies the journey of the Holocaust memoirist engenders life and surviving as well as a narration that defies stagnation and death.
Keywords: abject, feminine writing, Holocaust, post-abjection
Procedia PDF Downloads 103
250 Educational Infrastructure: A Barrier for Teaching and Learning Architecture
Authors: Alejandra Torres-Landa López
Abstract:
Introduction: Can architecture students be creative in spaces shaped by an educational infrastructure built with paradigms of the past? This question and related ones are answered in this paper, which presents the PhD research 'An anthropic conflict in Mexican Higher Education Institutes: problems and challenges of the educational infrastructure in teaching and learning History of Architecture'. This research was finished in 2013 and is one of the first studies conducted nationwide in Mexico that analyzes the impact of educational infrastructure on learning architecture; its objective was to identify which elements of the educational infrastructure of the Mexican Higher Education Institutes where architects are formed hinder or contribute to the teaching and learning of History of Architecture, and how and why this happens. Methodology: A mixed methodology was used, combining quantitative and qualitative analysis. Different resources and strategies for data collection were used, such as questionnaires for students and teachers, interviews with architecture research experts, and direct observations in architecture classes, among others; the data collected were analyzed using SPSS and MAXQDA. The veracity of the quantitative data was supported by a Cronbach's alpha coefficient of 0.86, a figure that gives the data sufficient support. All the above made it possible to confirm the anthropic conflict in which Mexican universities find themselves. Major findings of the study: Although some of the findings were probably not unknown, they had not been systematized and analyzed with the depth to which this is done in this research.
So, it can be said that the educational infrastructure of most of the Higher Education Institutes studied is a barrier to the educational process. Some of the reasons are: the little morphological variation of space; the inadequate control of lighting, noise, temperature, equipment and furniture; the poor or nonexistent accessibility for disabled people; as well as the absence, obsolescence and/or insufficiency of information technologies. These are some of the issues that generate an anthropic conflict (understood as the difficulty teachers and students have in relating to each other in order to achieve significant learning). It is clear that most of the educational infrastructure of Mexican Higher Education Institutes is anchored to paradigms of the past; it seems to respond to the previous era of industrialization. The results confirm that the educational infrastructure of the Mexican Higher Education Institutes where architects are formed is perceived as a "closed container" of people and data; infrastructure that becomes a barrier to the teaching and learning process. Conclusion: The research results show it is time to change the paradigm in which we conceive educational infrastructure. It is time to stop seeing it only as classrooms, workshops, laboratories and libraries; it must be seen from a constructive, urban, architectural and human point of view, taking into account its different dimensions: physical, technological, documental, social, among others. In this way, the educational infrastructure can become a set of elements that organize and create spaces where ideas and thoughts can be shared, and be a social catalyst where people can interact with each other and with the space itself.
Keywords: educational infrastructure, impact of space on learning architecture outcomes, learning environments, teaching architecture, learning architecture
Procedia PDF Downloads 412
249 Comparison of Feedforward Back Propagation and Self-Organizing Map for Prediction of Crop Water Stress Index of Rice
Authors: Aschalew Cherie Workneh, K. S. Hari Prasad, Chandra Shekhar Prasad Ojha
Abstract:
Due to the increase in water scarcity, the crop water stress index (CWSI) is receiving significant attention these days, especially in arid and semiarid regions, for quantifying water stress and effective irrigation scheduling. Nowadays, machine learning techniques such as neural networks are being widely used to determine the CWSI. In the present study, the performance of two artificial neural networks, namely Self-Organizing Maps (SOM) and Feed Forward-Back Propagation Artificial Neural Networks (FF-BP-ANN), is compared in determining the CWSI of the rice crop. Irrigation field experiments with varying degrees of irrigation were conducted at the irrigation field laboratory of the Indian Institute of Technology, Roorkee, during the growing season of the rice crop. The CWSI of rice was computed empirically by measuring key meteorological variables (relative humidity, air temperature, wind speed, and canopy temperature) and crop parameters (crop height and root depth). The empirically computed CWSI was compared with the SOM- and FF-BP-ANN-predicted CWSI. The upper and lower CWSI baselines were computed using multiple regression analysis. The regression analysis showed that the lower CWSI baseline for rice is a function of crop height (h), air vapor pressure deficit (AVPD), and wind speed (u), whereas the upper CWSI baseline is a function of crop height (h) and wind speed (u). The performance of SOM and FF-BP-ANN was compared by computing the Nash-Sutcliffe efficiency (NSE), index of agreement (d), root mean squared error (RMSE), and coefficient of correlation (R²). It is found that FF-BP-ANN performs better than SOM in predicting the CWSI of the rice crop.
Keywords: artificial neural networks, crop water stress index, canopy temperature, prediction capability
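The empirical CWSI computation summarized above follows the standard canopy-temperature formulation, positioning the measured canopy-air temperature difference between a lower (non-stressed) and upper (fully stressed) baseline. The baseline coefficients below are placeholders in the functional forms the abstract reports; the regression coefficients actually fitted in the study are not given there:

```python
# Sketch of the empirical CWSI. Baseline coefficients are hypothetical;
# only their functional forms (lower = f(h, AVPD, u), upper = f(h, u))
# follow the abstract.
def cwsi(tc, ta, dt_lower, dt_upper):
    """CWSI = ((Tc - Ta) - lower baseline) / (upper - lower), clipped to [0, 1]."""
    value = ((tc - ta) - dt_lower) / (dt_upper - dt_lower)
    return min(max(value, 0.0), 1.0)

def dt_lower(h, avpd, u):
    # Non-stressed baseline; illustrative coefficients only.
    return 1.5 - 0.8 * avpd - 0.3 * u + 0.5 * h

def dt_upper(h, u):
    # Fully stressed baseline; illustrative coefficients only.
    return 4.0 + 0.4 * h - 0.2 * u

# Example reading: canopy 32 C, air 29 C, with hypothetical crop/weather inputs.
index = cwsi(tc=32.0, ta=29.0,
             dt_lower=dt_lower(h=0.9, avpd=1.8, u=1.2),
             dt_upper=dt_upper(h=0.9, u=1.2))
```

A value near 0 indicates a well-watered canopy and a value near 1 indicates full stress, which is the target the SOM and FF-BP-ANN models are trained to reproduce.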
Procedia PDF Downloads 117
248 Real-Time Big-Data Warehouse: A Next-Generation Enterprise Data Warehouse and Analysis Framework
Authors: Abbas Raza Ali
Abstract:
Big Data technology is gradually becoming a dire need of large enterprises. These enterprises are generating massive amounts of offline and streaming data in both structured and unstructured formats on a daily basis. It is a challenging task to effectively extract useful insights from such large-scale datasets; sometimes it even becomes a technological constraint to manage a transactional data history of more than a few months. This paper presents a framework to efficiently manage massively large and complex datasets. The framework has been tested on a communication service provider producing massively large, complex streaming data in binary format. The communication industry is bound by the regulators to manage the history of their subscribers' call records, where every action of a subscriber generates a record. Also, managing and analyzing transactional data allows service providers to better understand their customers' behavior; for example, deep packet inspection requires transactional internet usage data to explain the internet usage behaviour of the subscribers. However, current relational database systems limit service providers to maintaining history only at a semantic level, aggregated at the subscriber level. The framework addresses these challenges by leveraging Big Data technology, which optimally manages and allows deep analysis of complex datasets. The framework has been applied to offload the existing Intelligent Network Mediation and relational Data Warehouse of the service provider onto Big Data. The service provider has a subscriber base of more than 50 million, with yearly growth of 7-10%. The end-to-end process takes no more than 10 minutes, which involves binary-to-ASCII decoding of call detail records, stitching of all the interrogations against a call (transformations) and aggregation of all the call records of a subscriber.
Keywords: big data, communication service providers, enterprise data warehouse, stream computing, Telco IN Mediation
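The stitch-and-aggregate stages of the pipeline described above can be sketched in miniature; the record fields, the stitching rule (summing the leg durations of a call), and the aggregation metric are all illustrative assumptions, not the provider's actual mediation logic:

```python
# Toy sketch of two pipeline stages from the abstract: stitching the
# interrogation records of each call, then aggregating per subscriber.
# Field names and the stitching rule are hypothetical illustrations.
from collections import defaultdict

# Records as they might look after binary-to-ASCII decoding of CDRs.
records = [
    {"call_id": "c1", "subscriber": "A", "leg_duration": 30},
    {"call_id": "c1", "subscriber": "A", "leg_duration": 15},  # same call, 2nd interrogation
    {"call_id": "c2", "subscriber": "B", "leg_duration": 60},
]

# Stitch: merge all interrogations belonging to the same call.
calls = defaultdict(int)
for r in records:
    calls[(r["call_id"], r["subscriber"])] += r["leg_duration"]

# Aggregate: total talk time per subscriber across all stitched calls.
usage = defaultdict(int)
for (call_id, subscriber), duration in calls.items():
    usage[subscriber] += duration
```

At the provider's scale these stages would run as distributed jobs over the streaming CDR feed rather than in-memory loops, but the shape of the transformation is the same.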
Procedia PDF Downloads 175
247 Mediation Analysis of the Efficacy of Nimotuzumab-Cisplatin-Radiation (NCR) to Improve Overall Survival (OS): An HPV Negative Oropharyngeal Cancer Patient (HPVNOCP) Cohort
Authors: Akshay Patil
Abstract:
Objective: Mediation analysis identifies causal pathways by testing the relationships between the treatment, the OS, and an intermediate variable that mediates the relationship between nimotuzumab-cisplatin-radiation (NCR) and OS. Introduction: In randomized controlled trials, a primary interest is in the mechanisms by which an intervention exerts its effects on the outcomes. Clinicians are often interested in how the intervention works (or why it does not work) through hypothesized causal mechanisms. In this work, we highlight the value of understanding causal mechanisms in randomized trials by applying causal mediation analysis in a randomized trial in oncology. Methods: Data were obtained from a phase III randomized trial (subgroup of HPVNOCP). NCR is reported to significantly improve the OS of patients with locally advanced head and neck cancer undergoing definitive chemoradiation. Here, based on the trial data, the mediating effect of NCR on patient overall survival was systematically quantified through progression-free survival (PFS), disease-free survival (DFS), loco-regional failure (LRF), the disease control rate (DCR), and the overall response rate (ORR). Effects of potential mediators on the HR for OS with NCR versus cisplatin-radiation (CR) were analyzed by Cox regression models. Statistical analyses were performed using R software version 3.6.3 (The R Foundation for Statistical Computing). Results: The potential mediator PFS showed an association between NCR treatment and OS, with an indirect effect (IE) of 0.76 (0.62-0.95), which mediated 60.69% of the treatment effect. Taking into account baseline confounders, the overall adjusted hazard ratio of death was 0.64 (95% CI: 0.43-0.96; P=0.03). DFS was also a significant mediator, with an IE of 0.77 (95% CI: 0.62-0.93; 58% mediated). Smaller mediation effects (maximum 27%) were observed for LRF, with an IE of 0.88 (0.74-1.06). DCR and ORR mediated 10% and 15%, respectively, of the effect of NCR vs.
CR on the OS with IE 0.65 (95% CI; 0.81 – 1.08) and 0.94(95% CI; 0.79 – 1.04). Conclusion: Our findings suggest that PFS and DFS were the most important mediators of the OS with nimotuzumab to weekly cisplatin-radiation in HPVNOCP.Keywords: mediation analysis, cancer data, survival, NCR, HPV negative oropharyngeal
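As a rough illustration of the figures above, the proportion of a treatment effect that is mediated can be computed on the log hazard-ratio scale as log(IE)/log(total HR). The sketch below is a back-of-envelope check using the hazard ratios reported in the abstract; this is one common definition of the proportion mediated and may not match the authors' exact estimator.

```python
import math

def proportion_mediated(indirect_hr, total_hr):
    """Proportion of the total effect mediated, on the log-HR scale."""
    return math.log(indirect_hr) / math.log(total_hr)

# HRs taken from the abstract: indirect effect of PFS = 0.76,
# overall adjusted HR of death = 0.64.
pm_pfs = proportion_mediated(0.76, 0.64)
print(f"{pm_pfs:.1%}")  # roughly 61%, consistent with the reported 60.69%
```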
Procedia PDF Downloads 145
246 Curriculum Transformation: Multidisciplinary Perspectives on ‘Decolonisation’ and ‘Africanisation’ of the Curriculum in South Africa’s Higher Education
Authors: Andre Bechuke
Abstract:
The years 2015-2017 witnessed huge campaigns, and in some instances violent protests, in South Africa by students and some groups of academics advocating the decolonisation of university curricula. These protests created high expectations for universities to teach a curriculum relevant to the country and the continent, while also enabling South Africa to participate in the globalised world. To realise this purpose, most universities are currently taking steps to transform and decolonise their curricula. However, the transformation process is challenged and delayed by the lack of a collective understanding of the concepts ‘decolonisation’ and ‘Africanisation’ that should guide its application. Even more challenging is the lack of a contextual understanding of these concepts across different university disciplines. Against this background, and underpinned by a qualitative research paradigm, the perspectives on these concepts as applied by different university disciplines were examined in order to understand and establish their implementation in the curriculum transformation agenda. Data were collected by reviewing the teaching and learning plans of 8 faculties of an institution of higher learning in South Africa and analysed through content and textual analysis. The findings revealed varied understanding and use of these concepts in the transformation of the curriculum across faculties. Decolonisation, according to the faculties of Law and Humanities, is perceived as the eradication of the Eurocentric positioning of curriculum content and of the constitutive rules and norms that control thinking. This is done not by ignoring other knowledge traditions, but through an affirmation and validation of African views of the world and systems of thought, mixing them with current knowledge.
For the Faculty of Natural and Agricultural Sciences, decolonisation means making the content of the curriculum relevant to students, fulfilling the needs of industry, and equipping students for job opportunities. This entails using teaching strategies and methods that are inclusive of students from diverse cultures and structuring the learning experience in ways that are not alien to the students’ cultures. For the Health Sciences, decolonisation of the curriculum refers to the need for a shift in Western thinking towards greater sensitivity to all cultural beliefs and thoughts. Collectively, decolonisation of education thus entails that a nation become independent with regard to the acquisition of knowledge, skills, values, beliefs, and habits. Based on the findings, for universities to successfully transform their curricula and integrate the concepts of decolonisation and Africanisation, there is a need to determine the meaning of the concepts contextually and narrow them down to what they should mean to specific disciplines. Universities should refrain from taking an umbrella approach to these concepts. Decolonisation should be seen as a means and not an end. A decolonised curriculum should equally be developed from the finest knowledge, skills, values, beliefs, and habits from around the world, not limited to one country or continent.
Keywords: Africanisation, curriculum, transformation, decolonisation, multidisciplinary perspectives, South Africa’s higher education
Procedia PDF Downloads 161
245 A Strength Weaknesses Opportunities and Threats Analysis of Socialisation Externalisation Combination and Internalisation Modes in Knowledge Management Practice: A Systematic Review of Literature
Authors: Aderonke Olaitan Adesina
Abstract:
Background: The paradigm shift to knowledge as the key to organizational innovation and competitive advantage has made the management of knowledge resources in organizations a mandate. A key component of the knowledge management (KM) cycle is knowledge creation, which research shows to be the result of the interaction between explicit and tacit knowledge. An effective knowledge creation process requires the use of the right model. The SECI (Socialisation, Externalisation, Combination, and Internalisation) model, proposed in 1995, is regarded as a preferred model for knowledge creation activities. The model has, however, been criticized by researchers, who have raised concerns, especially about its sequential nature. Therefore, this paper reviews extant literature on the practical application of each mode of the SECI model, from 1995 to date, with a view to ascertaining its relevance in modern-day KM practice. The study will establish the trends of use, with regard to the location and industry of use, and the interconnectedness of the modes. The main research question is: for organizational knowledge creation activities, is the SECI model indeed linear and sequential? In other words, does the model need to be reviewed for today’s KM practice? The review will generate a compendium of the usage of the SECI modes and propose a framework of use based on the strengths, weaknesses, opportunities, and threats (SWOT) findings of the study. Method: This study will employ the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate the usage and SWOT of the modes, in order to ascertain the success, or otherwise, of the sequential application of the modes in practice from 1995 to 2019. To achieve this purpose, four databases will be searched for open-access, peer-reviewed articles from 1995 to 2019. The year 1995 is chosen as the baseline because it was the year the first paper on the SECI model was published.
The study will appraise relevant peer-reviewed articles under the search terms SECI (or its synonym, knowledge creation theory), socialization, externalization, combination, and internalization in the title, abstract, or keyword list. The review will include only empirical studies of knowledge management initiatives in which the SECI model and its modes were used. Findings: The study is expected to highlight the practical relevance of each mode of the SECI model, the linearity or otherwise of the model, and the SWOT of each mode. Concluding Statement: Organisations can, from the analysis, determine the modes to emphasise in their knowledge creation activities. The study is expected to support decision making in the choice of the SECI model as a strategy for the management of organizational knowledge resources, and in appropriating the SECI model, or a remodeled version, as a theoretical framework in future KM research.
Keywords: combination, externalisation, internalisation, knowledge management, SECI model, socialisation
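The identification and screening step described above can be sketched programmatically. The snippet below is a hypothetical illustration, not the authors' actual pipeline; the record fields (title, abstract, keywords, year) are assumed for the example.

```python
# Hypothetical PRISMA-style screening filter: keep only records whose
# title, abstract, or keyword list mentions the SECI model or one of its
# four modes, published within the 1995-2019 review window.
SEARCH_TERMS = {"seci", "knowledge creation theory", "socialization",
                "externalization", "combination", "internalization"}

def matches(record):
    text = " ".join([record["title"], record["abstract"],
                     " ".join(record["keywords"])]).lower()
    return (1995 <= record["year"] <= 2019
            and any(term in text for term in SEARCH_TERMS))

# Two toy records: only the first mentions a search term.
records = [
    {"title": "SECI in practice", "abstract": "", "keywords": [], "year": 2010},
    {"title": "Lean tools", "abstract": "", "keywords": [], "year": 2005},
]
screened = [r for r in records if matches(r)]
```

In a real review, the date and term filters would be applied in the database query itself, with this kind of filter serving only as a reproducible second pass.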
Procedia PDF Downloads 354
244 Additive Manufacturing – Application to Next Generation Structured Packing (SpiroPak)
Authors: Biao Sun, Tejas Bhatelia, Vishnu Pareek, Ranjeet Utikar, Moses Tadé
Abstract:
Additive manufacturing (AM), commonly known as 3D printing, together with continuing advances in parallel processing and computational modeling, has created a paradigm shift in the design and operation of chemical processing plants, especially LNG plants. With rising energy demands, environmental pressures, and economic challenges, there is a continuing industrial need for disruptive technologies such as AM, which can drastically reduce the cost of manufacturing and operating chemical processing plants in the future. However, the continuing challenge for 3D printing is its lack of adaptability in re-designing process plant equipment, coupled with the absence of theory or models that could assist in selecting the optimal candidates out of the countless potential fabrications that are possible using AM. One of the most common packings used in LNG processing is structured packing in the packed column, a key unit operation. In this work, we present an example of an optimal strategy for applying AM to this important unit operation. Packed columns use a packing material through which the gas phase passes and comes into contact with the liquid phase flowing over the packing, performing the mass transfer necessary to enrich the products. Structured packing consists of stacks of corrugated sheets, typically inclined between 40-70° from the plane. Computational Fluid Dynamics (CFD) was used to test and model various geometries to study the governing hydrodynamic characteristics. The results demonstrate that the costly iterative experimental process can be minimized. Furthermore, they also improve the understanding of the fundamental physics of the system at the multiscale level. SpiroPak, patented by Curtin University, represents an innovative structured packing solution currently at a technology readiness level (TRL) of 5~6.
This packing exhibits remarkable characteristics, offering a substantial increase in surface area while significantly enhancing hydrodynamic and mass transfer performance. Recent studies have revealed that SpiroPak can reduce pressure drop by 50~70% compared to commonly used commercial packings and can achieve 20~50% greater mass transfer efficiency (particularly in CO2 absorption applications). Implementing SpiroPak has the potential to reduce the overall size of columns and decrease power consumption, yielding savings in both capital expenditure (CAPEX) and operational expenditure (OPEX) when retrofitting existing systems or incorporating it into new processes. Furthermore, pilot- to large-scale tests are currently underway to further advance and refine this technology.
Keywords: Additive Manufacturing (AM), 3D printing, Computational Fluid Dynamics (CFD), structured packing (SpiroPak)
Procedia PDF Downloads 87
243 Single Ion Transport with a Single-Layer Graphene Nanopore
Authors: Vishal V. R. Nandigana, Mohammad Heiranian, Narayana R. Aluru
Abstract:
Graphene has found tremendous applications in water desalination, DNA sequencing, and energy storage. Multiple nanopores are etched to create openings for water desalination and energy storage applications. The nanopores created are of the order of 3-5 nm, allowing multiple ions to transport through the pore. In this paper, we present, for the first time, a molecular dynamics study of single-ion transport, where only one ion passes through the graphene nanopore. The diameter of the graphene nanopore is of the same order as the hydration layers formed around each ion. Behavior analogous to single-electron transport, here arising from ionic transport, is observed for the first time. The current-voltage characteristics of such a device are similar to single-electron transport in quantum dots. The current is blocked until a critical voltage, as the ions are trapped inside a hydration shell. The trapped ions face a high energy barrier compared to the applied electrical voltage, preventing the ion from breaking free of the hydration shell. This region is called the “Coulomb blockade region”. In this region, we observe zero transport of ions inside the nanopore. However, when the electrical voltage is beyond the critical voltage, the ion has sufficient energy to break free from the energy barrier created by the hydration shell and enter the pore. Thus, the input voltage can control the transport of the ion inside the nanopore. The device therefore acts as a binary storage unit, storing 0 when no ion passes through the pore and 1 when a single ion passes through the pore. We therefore postulate that the device can be used for fluidic computing applications in chemistry and biology, mimicking a computer. Furthermore, the trapped ion stores a finite charge in the Coulomb blockade region; hence, the device also acts as a supercapacitor.
Keywords: graphene nanomembrane, single ion transport, Coulomb blockade, nanofluidics
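The threshold behavior described above (zero ionic current below the critical voltage, conduction beyond it) can be sketched with a toy piecewise model. The critical voltage and conductance values below are illustrative placeholders, not results from the study.

```python
def ionic_current(voltage, v_critical=0.5, conductance=1.0):
    """Toy I-V sketch: current blocked below v_critical (ion trapped in
    its hydration shell), rising linearly once the shell is overcome."""
    if voltage < v_critical:
        return 0.0  # Coulomb blockade region: zero ion transport
    return conductance * (voltage - v_critical)

# Below the critical voltage the pore stores a "0"; above it, a single
# ion passes and the pore stores a "1".
bit = 1 if ionic_current(0.8) > 0 else 0
```

A real I-V curve would of course be smoothed by thermal fluctuations; the sharp threshold here only captures the binary storage idea.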
Procedia PDF Downloads 321
242 Cognition in Context: Investigating the Impact of Persuasive Outcomes across Face-to-Face, Social Media and Virtual Reality Environments
Authors: Claire Tranter, Coral Dando
Abstract:
Gathering information from others is a fundamental goal for those concerned with investigating crime and protecting national and international security. Persuading an individual to move from an opposing to a converging viewpoint, and understanding the cognitive style behind this change, can increase understanding of traditional face-to-face interactions as well as of synthetic environments (SEs) often used for communication across geographical locations. SEs are growing in usage, and with this increase comes an increase in crime undertaken online. Communication technologies can allow people to mask their real identities, supporting anonymous communication that can raise significant challenges for investigators monitoring and managing these conversations inside SEs. To date, the psychological literature concerning how to maximise information gain in SEs for real-world interviewing purposes is sparse, and as such this aspect of social cognition is not well understood. Here, we introduce an overview of a novel programme of PhD research which seeks to enhance understanding of cross-cultural and cross-gender communication in SEs for maximising information gain. Utilising a dyadic jury paradigm, participants interacted with a confederate who attempted to persuade them to the opposing verdict across three distinct environments: face-to-face, instant messaging, and a novel virtual reality environment utilising avatars. Participants discussed a criminal scenario, acting as a two-person (male; female) jury. Persuasion was manipulated by the confederate claiming, from the outset, a viewpoint (guilty v. not guilty) opposing that of the naïve participants. Pre- and post-discussion data and observational digital recordings (voice and video) of participants’ discussion performance were collected. Information regarding cognitive style was also collected to ascertain participants’ need for cognitive closure and their biases towards jumping to conclusions.
Findings revealed that individuals communicating via an avatar in a virtual reality environment reacted in a similar way, and were thus equally persuadable, compared to individuals communicating face-to-face. Anonymous instant messaging, however, created a resistance to persuasion in participants, with males showing a significant decline in persuasive outcomes compared to face-to-face. The findings reveal new insights, particularly regarding the interplay of persuasion, gender, and modality, with anonymous instant messaging enhancing resistance to persuasion attempts. This study illuminates how varying SEs can support new theoretical and applied understandings of how judgments are formed and modified in response to advocacy.
Keywords: applied cognition, persuasion, social media, virtual reality
Procedia PDF Downloads 144
241 2016 Taiwan's ‘Health and Physical Education Field of 12-Year Basic Education Curriculum Outline (Draft)’ Reform and Its Implications
Authors: Hai Zeng, Yisheng Li, Jincheng Huang, Chenghui Huang, Ying Zhang
Abstract:
Strong children make a strong country, and the development of children's basketball is a strategic advantage. Common forms of basketball equipment have had difficulty meeting the needs of teaching the game to young children. Developing teaching aids in forms appropriate for children aged 3-6 is a breakthrough for the bottlenecks in teaching children's basketball, a critical path to making teaching more enjoyable, and a necessary requirement for the development of early childhood basketball. In this study, using literature review, questionnaires, focus group interviews, and comparative analysis, 12 kinds of domestic and foreign basketball teaching aids (cloud computing MINI basketball, adjustable MINI basketball hoop, MINI basketball court, shooting-assist paw-print ball, dribble goggles, dribbling machine, cartoon shooting machine, rebounding machine, confrontation mat, elastic belt, ladder, fitness ball) were examined for their fun and for their improvement of young children's shooting technique, dribbling technique, offensive and defensive rebounding, and transition technique. The results show that using appropriate forms of children's basketball teaching aids can effectively increase children's fun in the game and improve specific techniques in a targeted way, with different types of aids enriching the content of children's basketball games from different perspectives. It is recommended to produce aids using cartoon designs and environmentally friendly materials informed by children's color psychology, to increase research efforts on children's basketball aids, and to encourage physical education teachers to apply these aids.
Keywords: health and physical education field of curriculum outline, health fitness, sports and health curriculum reform, Taiwan, twelve years basic education
Procedia PDF Downloads 393
240 Evaluation of the Effectiveness of Crisis Management Support Bases in Tehran
Authors: Sima Hajiazizi
Abstract:
Tehran, the capital of Iran, is known among world capitals as highly vulnerable to natural disasters such as earthquakes and floods. The city sits on three faults, Ray, Mosha, and North; according to a JICA report in 2000, the most casualties and destruction would result from activity on the Ray fault. In 2003, the organization for the prevention and management of crisis in Tehran became active under the Ministry to conduct prevention and rehabilitation in the city. Given the breadth of the city and the lack of appropriate access, decentralized management was adopted for crisis management support: in each region, bases were positioned to host crisis management headquarters at the time of crises and to implement programs for prevention and citizen education. Bases in some areas are also positioned to aid neighboring provinces at the time of an accident, and a number of bases store food and equipment needed at the time of a disaster. In this study, the bases of regions one, six, nine, and eleven of Tehran are evaluated in the fields of management and training. The selected areas had experienced local accidents and disaster management practice, and local training there had faced challenges. A qualitative research approach based on grounded theory was used. Information was first obtained through document study, semi-structured interviews with administrators and training officials, and participant observation in classrooms; it was then coded line by line in two stages by comparing and questioning concepts and categories, extracted according to indicators obtained from literature studies, and the central themes were identified. Main categories were named according to the frequency and importance of each phenomenon, paradigm diagrams were drawn, and finally, by intersecting the phenomena and their causes with indicators extracted from the literature, each phenomenon and the effectiveness of the bases were measured.
Two phenomena emerged in management: 1. the inability to manage vast and complex crisis events and to resolve minor incidents, due to mismatches between managers; 2. weaknesses in the implementation of preventive measures and preparedness to manage crises, arising from causal conditions, contexts, and intervening factors. Five phenomena emerged in the field of education: 1. in region six, participation and interest are high; 2. in region eleven, training partnerships for crisis management were low and were later increased by maneuvers in schools and local initiatives such as advertising and the use of aid groups; 3. in region nine, contributions to education in crisis management were low at the beginning, and initiatives such as maneuvers in schools and communities stimulated sensitivity and increased participation; 4. managers disagreed with identical training in all areas. Finally, for the main issues and their causes, recommendations are provided with the help of concepts extracted from the literature.
Keywords: crisis management, crisis management support bases, vulnerability, crisis management headquarters, prevention
Procedia PDF Downloads 174
239 Unraveling the Mysteries of the Anahata Nada to Achieve Supreme Consciousness
Authors: Shanti Swaroop Mokkapati
Abstract:
The unstruck sound, or the Anahata Nada, holds the key to elevating the consciousness levels of the practitioner. This has been well established by the great saints of the eastern tradition over the past few centuries. This paper explores in depth the common thread of the practice of Anahata Nada by the musical saints, examining subtle mentions in their compositions and demystifying the musical experiences that give insight into elevated levels of consciousness. Mian Tansen, one of the greatest musicians of the North Indian Hindustani classical music tradition, who lived in the 16th century, is said to have brought rain through his singing of Raga Megh Malhar. The South Indian (Carnatic) musical saint Tyagaraja, who lived in the 18th century, composed hundreds of musical pieces full of love for the Supreme Being. Many of these compositions unravel the secrets of Anahata Nada, the chakras in the human body that hold the key to these practices, and the visions of elevated consciousness that Saint Tyagaraja himself experienced through them. The spiritual practitioners of the Radhasoami Faith (Religion of Saints) in Dayalbagh, India, have adopted a practice called Surat Shabda Yoga (meditational practices that unite the all-pervasive sound current with the spirit current and elevate levels of consciousness). Practitioners of this yogic method submit that they have been able to hear mystic words, including Om, Racing, Soham, Sat, and Radhasoami, along with instrumental sounds that accompany these mystic words in the form of a crescendo. Such experiences of the elevated consciousness of musical saints are numerous, and this paper explores the more significant ones from past centuries to the present day, when the elevated consciousness levels of practitioners are being scientifically measured and analyzed using quantum computing.
Keywords: Anahata Nada, Nada Yoga, Tyagaraja, Radhasoami
Procedia PDF Downloads 177
238 Analysis of Digital Transformation in Banking: The Hungarian Case
Authors: Éva Pintér, Péter Bagó, Nikolett Deutsch, Miklós Hetényi
Abstract:
The process of digital transformation has a profound influence on all sectors of the worldwide economy and the business environment. The influence of blockchain technology can be observed in the digital economy and e-government, rendering it an essential element of a nation's growth strategy. The banking industry is experiencing significant expansion and development of financial technology firms. Utilizing developing technologies such as artificial intelligence (AI), machine learning (ML), and big data (BD), these entrants are offering more streamlined financial solutions, promptly addressing client demands, and presenting a challenge to incumbent institutions. The advantages of digital transformation are evident in the corporate realm, and firms that resist its adoption put their survival at risk. The advent of digital technologies has revolutionized the business environment, streamlining processes and creating opportunities for enhanced communication and collaboration. Thanks to the aid of digital technologies, businesses can now swiftly and effortlessly retrieve vast quantities of information, all the while accelerating the process of creating new and improved products and services. Big data analytics is generally recognized as a transformative force in business, considered the fourth paradigm of science, and seen as the next frontier for innovation, competition, and productivity. Big data, an emerging technology that is shaping the future of the banking sector, offers numerous advantages to banks. It enables them to effectively track consumer behavior and make informed decisions, thereby enhancing their operational efficiency. Banks may embrace big data technologies to promptly and efficiently identify fraud, as well as gain insights into client preferences, which can then be leveraged to create better-tailored products and services. 
Moreover, the utilization of big data technology empowers banks to develop more intelligent and streamlined models for accurately recognizing and targeting the suitable clientele with pertinent offers. There is a scarcity of research on big data analytics in the banking industry, with the majority of existing studies only examining the advantages and prospects associated with big data. Although big data technologies are crucial, there is a dearth of empirical evidence about the role of big data analytics (BDA) capabilities in bank performance. This research addresses a gap in the existing literature by introducing a model that combines the resource-based view (RBV), the technology-organization-environment (TOE) framework, and dynamic capability theory (DC). The study investigates the influence of BDA utilization on the performance of market and risk management, supported by a comparative examination of Hungarian mobile banking services.
Keywords: big data, digital transformation, dynamic capabilities, mobile banking
Procedia PDF Downloads 64
237 Combating Corruption to Enhance Learner Academic Achievement: A Qualitative Study of Zimbabwean Public Secondary Schools
Authors: Onesmus Nyaude
Abstract:
The aim of the study was to investigate participants’ views on how corruption can be combated to enhance learner academic achievement. The study was undertaken at three selected public secondary institutions in Zimbabwe. It focuses on exploring the views of educators, parents, and learners on the role played by corruption in perpetuating the seemingly existing disparities in learner academic achievement across educational institutions. The study further interrogates and examines the nexus between the prevalence of corruption in schools and its influence on the academic achievement of learners. Corruption is considered a form of social injustice; hence, in Zimbabwe, the general consensus is that it is perceived as so rife that it is overtaking the traditional factors that contributed to the poor academic achievement of learners. Coupled with this have been the issues of gross abuse of power and malpractices emanating from the concealment of essential and official transactions in the conduct of business. By proposing robust anti-corruption mechanisms, teaching and learning resources poured into schools would be put to good use, preventing the unlawful diversion and misappropriation of the resources in question, which has long been the culture. This study is of paramount significance to curriculum planners, teachers, parents, and learners. The study was informed by the interpretive paradigm; thus, qualitative research approaches were used. Both probability and non-probability sampling techniques were adopted in site and participant selection. A representative sample of 150 participants was used. The study found that the majority of the participants perceived corruption as a social problem and a human rights threat affecting the quality of teaching and learning processes in the education sector.
It was established that the prevalence of corruption within institutions results from the perpetual weakening of ethical values and other variables linked to the upholding of ‘Ubuntu’ among the general citizenry. It was further established that greed and weak systems are major causes of rampant corruption within institutions of higher learning, manifesting through abuse of power, bribery, misappropriation, and embezzlement of material and financial resources. Therefore, there is a great need to collectively address the problem of corruption in educational institutions and society at large. The study additionally concludes that successfully combating corruption will promote the successful moral development of students as well as safeguard their human rights entitlements. The study recommends the adoption of principles of good corporate governance within educational institutions in order to successfully curb corruption. It further recommends the intensification of interventionist strategies and the strengthening of systems in educational institutions, as well as regular audits, to overcome the problems associated with rampant corruption cases.
Keywords: academic achievement, combating, corruption, good corporate governance, qualitative study
Procedia PDF Downloads 243
236 Training for Digital Manufacturing: A Multilevel Teaching Model
Authors: Luís Rocha, Adam Gąska, Enrico Savio, Michael Marxer, Christoph Battaglia
Abstract:
The changes observed in recent years in the field of manufacturing and production engineering, popularly known as the "Fourth Industrial Revolution", utilize achievements from the different areas of computer science, introducing new solutions at almost every stage of the production process: mass customization, cloud computing, knowledge-based engineering, virtual reality, rapid prototyping, and virtual models of measuring systems, to mention just a few concepts. To effectively speed up the production process and make it more flexible, it is necessary to tighten the bonds connecting individual stages of the production process and to raise the awareness and knowledge of employees in each sector about the nature and specificity of work in other stages. It is important to discover and develop a suitable education method adapted to the specificities of each stage of the production process, an extremely crucial issue for properly exploiting the potential of the fourth industrial revolution. For this reason, the project “Train4Dim” (T4D) intends to develop comprehensive training material for digital manufacturing, including content for design, manufacturing, and quality control, with a focus on coordinate metrology and portable measuring systems. In this paper, the authors present an approach to using an active learning methodology for digital manufacturing. T4D’s main objective is to develop a multi-degree (apprenticeship up to master’s degree studies) educational approach that can be adapted to different teaching levels. The process of creating the underlying methodology is also described. The paper shares the steps taken to achieve the aims of the project (a training model for digital manufacturing): 1) surveying the stakeholders, 2) defining the learning aims, 3) producing all contents and the curriculum, 4) training the tutors, and 5) piloting, testing, and improving the courses.
Keywords: learning, Industry 4.0, active learning, digital manufacturing
Procedia PDF Downloads 97
235 Nature of Cities: Ontological Dimension of the Urban
Authors: Ana Cristina García-Luna Romero
Abstract:
This document seeks to reflect on the urban project from its conceptual identity root. In the first instance, a proposal is made on how the city project is sustained from the conceptual root, from the logos: it opens a way to assimilate the imagination; what we imagine becomes a reality. In this way, firstly, the need to use language as a vehicle for transmitting the stories that sustain us as humanity can be deemed as an important social factor that enables us to social behavior. Secondly, the need to attend to the written language as a mechanism of power, as a means to consolidate a dominant ideology or a political position, is raised; as it served to carry out the modernization project, it is therefore addressed differences between the real and the literate city. Thus, the consolidated urban-architectural project is based on logos, the project, and planning. Considering the importance of materiality and its relation to subjective well-being contextualized from a socio-urban approach, we question ourselves into how we can look at something that is doubtful. From a philosophy perspective, the truth is considered to be nothing more than a matter of correspondence between the observer and the observed. To understand beyond the relative of the gaze, it is necessary to expose different perspectives since it depends on the understanding of what is observed and how it is critically analyzed. Therefore, the analysis of materiality, as a political field, takes a proposal based on this research in the principles in transgenesis: principle of communication, representativeness, security, health, malleability, availability of potentiality or development, conservation, sustainability, economy, harmony, stability, accessibility, justice, legibility, significance, consistency, joint responsibility, connectivity, beauty, among others. 
The (urban) human being acts because he wants to live in a certain way: in a community, in a fair way, with opportunities for development, with the possibility of managing the environment according to his needs, etc. In order to comply with this principle, it is necessary to design strategies based on the principles in transgenesis, which must be named, defined, understood, and socialized by urban beings, by companies, and among themselves. In this way, the technical status of the city in the neoliberal present creates extraordinary conditions for reflecting on an almost emergency scenario produced by the impact of cities, which, far from being limited to resilient proposals, must aim at reflection on the urban process that the present social model has generated. Can we, therefore, rethink the paradigm of the perception of quality of life in the current neoliberal model in the production of the character of public space related to the practices of being urban? What we attempt in this document is to build a framework to study under what logic the practices of the social system that give meaning to public space are developed; what the implications of the phenomena of the inscription of action and materialization (and its results on political action between the social and the technical system) are; and, finally, how we can improve the quality of life of individuals through urban space.
Keywords: cities, nature, society, urban quality of life
Procedia PDF Downloads 124
234 Applying Biculturalism in Studying Tourism Host Community Cultural Integrity and Individual Member Stress
Authors: Shawn P. Daly
Abstract:
Communities heavily engaged in the tourism industry discover that their values intersect, meld, and conflict with those of visitors. Maintaining cultural integrity in the face of powerful external pressures causes stress among society members. This effect represents a less studied aspect of sustainable tourism. The present paper brings a perspective unique to the tourism literature: biculturalism. The grounded theories, coherent hypotheses, and validated constructs and indicators of biculturalism represent a sound base from which to consider sociocultural issues in sustainable tourism. Five models describe the psychological state of individuals operating at cultural crossroads: assimilation (joining the new culture), acculturation (grasping the new culture but remaining of the original culture), alternation (varying behavior with cultural context), multiculturalism (maintaining distinct cultures), and fusion (blending cultures). These five processes divide into two units of analysis (individual and society), permitting research questions at levels important for considering sociocultural sustainability. Acculturation modelling has since evolved into the dual processes of acculturation (adaptation to the new culture) and enculturation (adaptation to the original culture). This dichotomy divides sustainability research questions into the human impacts of assimilation (acquiring the new culture, discarding the original), separation (rejecting the new culture, keeping the original), integration (acquiring the new culture, keeping the original), and marginalization (rejecting the new culture, discarding the original). Biculturalism is often cast in terms of its emotional, behavioral, and cognitive dimensions. Required cultural adjustments and varying levels of cultural competence lead to physical, psychological, and emotional outcomes, including depression, lowered life satisfaction and self-esteem, headaches, and back pain, or, conversely, enhanced career success, social skills, and lifestyles.
Numerous studies provide empirical scales and research hypotheses for sustainability research into tourism's causal effect on local well-being. One key issue in applying biculturalism to sustainability scholarship concerns the identification and specification of the alternative new culture contacting the local culture. Evidence exists for a tourism-industry culture, a universal tourist culture, and location- or event-specific tourist cultures. The biculturalism paradigm holds promise for researchers examining evolving cultural identity and integrity in response to mass tourism. In particular, confirmed constructs and scales simplify the operationalization of tourism sustainability studies in terms of human impact and adjustment.
Keywords: biculturalism, cultural integrity, psychological and sociocultural adjustment, tourist culture
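The two-axis acculturation/enculturation taxonomy described in the abstract can be sketched as a small lookup over the two adaptation dimensions. This is purely an illustrative encoding; the function name and labels are assumptions, not instruments from the paper.

```python
def acculturation_strategy(adopts_new_culture: bool, retains_original: bool) -> str:
    """Map the dual-process outcomes (adaptation to the new culture vs.
    retention of the original culture) onto the four strategies named in
    the abstract. Labels are illustrative, not the paper's coding scheme."""
    strategies = {
        (True, True): "integration",        # acquires new culture, keeps original
        (True, False): "assimilation",      # acquires new culture, discards original
        (False, True): "separation",        # rejects new culture, keeps original
        (False, False): "marginalization",  # rejects new culture, discards original
    }
    return strategies[(adopts_new_culture, retains_original)]
```

The 2x2 structure makes explicit that the four outcomes are exhaustive and mutually exclusive once the two dimensions are measured separately.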
Procedia PDF Downloads 409
233 Machine Learning for Exoplanetary Habitability Assessment
Authors: King Kumire, Amos Kubeka
Abstract:
The synergy of machine learning and advancing astronomical technology is giving rise to a new space age, marked by better habitability assessments. To initiate this discussion, it should be recorded for definition purposes that the symbiotic relationship between astronomy and improved computing has been code-named the Cis-Astro gateway concept. The phrase is unashamedly plagiarized from the cis-lunar gateway template and its associated Lagrange points, which act as an orbital bridge to the Moon from our planet Earth. For this study, however, the scientific audience is invited to bridge toward the discovery of new habitable planets. Cosmic probes of this magnitude can be utilized as the starting nodes of the astrobiological search for galactic life. This research can also act as a navigation system for future space telescope launches through the delimitation of target exoplanets, and the findings and associated platforms can be harnessed as building blocks for modeling climate change on planet Earth. The possibility that the human genus may exhaust the resources of planet Earth, or that some catastrophe may render the Earth uninhabitable for humans, explains the need to find an alternative planet to inhabit. The scientific community, through interdisciplinary discussions of the International Astronautical Federation, so far holds the common position that engineers can reduce space mission costs by constructing a stable cis-lunar orbit infrastructure for refueling and other associated in-orbit servicing activities. Similarly, the Cis-Astro gateway can be envisaged as a budget optimization technique that models extra-solar bodies and can facilitate the scoping of future mission rendezvous. This broad and voluminous catalog of exoplanets will be narrowed along the way using machine learning filters.
The gist of this topic revolves around the indirect economic rationale of establishing a habitability scoping platform.
Keywords: machine-learning, habitability, exoplanets, supercomputing
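A first pass at the catalog narrowing the abstract alludes to can be sketched as a rule-based pre-filter over candidate properties; in a full pipeline a trained classifier would replace the hand-set cuts. The field names and threshold values below are illustrative assumptions, not values from the study.

```python
# Minimal sketch of a first-pass habitability filter: keep candidates that
# are plausibly rocky (radius in Earth radii) and whose equilibrium
# temperature (kelvin) could allow liquid water. Thresholds are assumptions.
def habitability_filter(catalog, max_radius=1.6, temp_range=(180.0, 310.0)):
    t_lo, t_hi = temp_range
    return [
        p for p in catalog
        if p["radius"] <= max_radius and t_lo <= p["eq_temp"] <= t_hi
    ]

# Hypothetical catalog entries for demonstration only.
catalog = [
    {"name": "candidate-a", "radius": 1.1, "eq_temp": 255.0},
    {"name": "candidate-b", "radius": 4.2, "eq_temp": 250.0},  # gas-giant sized
    {"name": "candidate-c", "radius": 1.4, "eq_temp": 900.0},  # far too hot
]
shortlist = habitability_filter(catalog)  # only candidate-a survives both cuts
```

The point of such a cheap pre-filter is budget optimization: a learned model (or a telescope observation plan) then only needs to score the shortlist rather than the whole catalog.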
Procedia PDF Downloads 89
231 A Method Intensive Top-down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India
Authors: Rituparna Pal, Faiz Ahmed
Abstract:
Neighbourhood energy efficiency is a recently emerged term addressing the quality of the urban stratum of the built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged the policymakers of developing urban-scale cities to envision plans under the aegis of urban-scale sustainability. The importance of neighbourhood energy efficiency was realized only lately, just as the cities, towns, and other areas comprising this massive global urban stratum began facing a strong blow from climate change, the energy crisis, cost hikes, and an alarming shortfall in the justice that urban areas require. This step toward urban sustainability can therefore more easily be referred to as a 'retrofit action', intended to cover up an already affected urban structure. Even if energy efficiency is pursued for existing cities and urban areas, the initial layer remains, for which a complete model of urban sustainability still lacks definition. Urban sustainability is a broadly discussed term with any number of parameters and policies through which the loop can be closed, of which neighbourhood energy efficiency can be an integral part: neighbourhood-scale indicators, block-level indicators, and building-physics parameters can be understood, analyzed, and synthesized to help derive guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy efficiency itself but also in important parameters such as quality of life, access to green space, access to daylight, outdoor comfort, and natural ventilation. So, apart from designing less energy-hungry buildings, it is necessary to create a built environment that places less stress on buildings to consume more energy. Much literary analysis has been done in Western contexts, prominently Spain and Paris, and also in Hong Kong, leaving a distinct gap in the Indian scenario in exploring sustainability at the urban stratum.
The site for the study has been selected in the upcoming capital city of Amaravati, and the approach can be replicated for similar neighbourhood typologies in the area. The paper presents a methodical approach to quantifying energy and sustainability indices in detail by involving several macro-, meso-, and micro-level covariates and parameters. Several design iterations have been made at both the macro and micro levels and have been subjected to simulation, computation, and mathematical modelling, and finally to comparative analysis. Parameters at all levels are analyzed to identify the best-case scenarios, which are in turn extrapolated to the macro level, finally yielding a proposed model for an energy-efficient neighbourhood and worked-out guidelines with their significance and derived correlations.
Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters
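The comparative-analysis step described above, in which simulated iterations are compared across parameters to pick a best case, can be sketched as a weighted ranking. The indicator names, weights, and values below are illustrative assumptions; the paper does not publish its scoring scheme here.

```python
def rank_scenarios(scenarios, weights):
    """Score each design iteration by a weighted sum of its parameter
    values (assumed already normalised to 0-1, higher is better) and
    return the iterations sorted best-first."""
    def score(params):
        return sum(weights[k] * params[k] for k in weights)
    return sorted(scenarios.items(), key=lambda kv: score(kv[1]), reverse=True)

# Hypothetical meso/micro indicators and weights for demonstration only.
weights = {"daylight_access": 0.3, "outdoor_comfort": 0.3, "energy_saving": 0.4}
scenarios = {
    "iteration-1": {"daylight_access": 0.6, "outdoor_comfort": 0.5, "energy_saving": 0.7},
    "iteration-2": {"daylight_access": 0.8, "outdoor_comfort": 0.7, "energy_saving": 0.6},
}
ranked = rank_scenarios(scenarios, weights)
best_case = ranked[0][0]
```

The best-case iteration selected this way is the one a top-down method would then extrapolate to the macro level.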
Procedia PDF Downloads 176
230 A Conceptual Framework of Integrated Evaluation Methodology for Aquaculture Lakes
Authors: Robby Y. Tallar, Nikodemus L., Yuri S., Jian P. Suen
Abstract:
Research in the subject of ecological water resources management is full of far from trivial questions and seems today to be one branch of science that can strongly contribute to the study of complexity (physical, biological, ecological, socio-economic, environmental, and other aspects). Much of the existing literature on the different facets of these studies is technical and targeted at specific users. This study combines all of these aspects in an evaluation methodology for aquaculture lakes, with a paradigm referring to hierarchical theory and to the effects of the specific spatial arrangement of an object within a space or local area. The process of developing a conceptual framework therefore represents a more integrated and applicable concept derived from grounded theory. A design of an integrated evaluation methodology for aquaculture lakes is presented. The method is based on the identification of a series of attributes that describe the status of aquaculture lakes using indicators from the aquaculture water quality index (AWQI), the aesthetic aquaculture lake index (AALI), and the rapid appraisal for fisheries index (RAPFISH). The preliminary preparation is accomplished as follows: first, the study area is characterized at different spatial scales. Second, an inventory of core data is compiled, such as the city master plan, water quality reports from the environmental agency, and related government regulations. Third, a ground-checking survey is completed to validate the on-site condition of the study area. To design the integrated evaluation methodology, we finally integrated and developed a rating score system called the Integrated Aquaculture Lake Index (IALI). The development of the IALI reflects a compromise among all aspects, and it responds to the need for concise information about the current status of aquaculture lakes through a comprehensive approach.
The IALI was elaborated as a decision-aid tool for stakeholders to evaluate the impact and contribution of anthropogenic activities on the aquaculture lake environment. The conclusion is that, while there is no denying that aquaculture lakes are under great threat from the pressure of increasing human activities, one must realize that no evaluation methodology for aquaculture lakes can succeed by assuming a pristine condition. The IALI developed in this work can be used as an effective, low-cost evaluation methodology for aquaculture lakes in developing countries, because it emphasizes simplicity and understandability: it must communicate to decision-makers and experts alike. Moreover, stakeholders need help in perceiving their lakes so that sites can be accepted and valued by local people. For lake development, the accessibility and planning designation of a site are of decisive importance: local people want to know whether the lake's condition is safe and whether it can be used.
Keywords: aesthetic value, AHP, aquaculture lakes, integrated lakes, RAPFISH
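The abstract describes the IALI as a rating score system combining the AWQI, AALI, and RAPFISH sub-indices, but does not give the aggregation formula. A weighted average is one plausible sketch; the weights, scales, and rating bands below are assumptions for illustration only.

```python
def iali_score(awqi, aali, rapfish, weights=(0.4, 0.3, 0.3)):
    """Combine the three sub-indices (assumed here to share a 0-100 scale)
    into one composite score. Weights are illustrative, not the paper's."""
    w_q, w_a, w_f = weights
    assert abs(w_q + w_a + w_f - 1.0) < 1e-9, "weights must sum to 1"
    return w_q * awqi + w_a * aali + w_f * rapfish

def iali_rating(score):
    """Map the composite score onto a simple qualitative band, matching
    the abstract's aim of communicating concisely to stakeholders."""
    if score >= 75:
        return "good"
    if score >= 50:
        return "fair"
    return "poor"
```

A composite index like this trades detail for legibility: a stakeholder sees one band ("good"/"fair"/"poor") while the analyst retains the sub-index breakdown.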
Procedia PDF Downloads 237
229 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death
Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar
Abstract:
In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender-specific, or socioeconomic boundaries, detecting cardiac irregularities at an early stage, followed by quick and correct treatment, is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) measures the naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time-domain, frequency-domain, and non-linear parameters. This paper presents an HRV analysis of online datasets for normal sinus rhythm (taken as healthy subjects) and sudden cardiac death (SCD subjects) using all three methods, computing values for parameters such as the standard deviation of normal-to-normal intervals (SDNN), the root mean square of successive differences between adjacent RR intervals (RMSSD), and the mean of R-to-R intervals (mean RR) in the time domain; very-low-frequency (VLF), low-frequency (LF), and high-frequency (HF) power and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincaré plot for non-linear analysis. To differentiate the HRV of healthy subjects from that of subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values for all stated parameters in SCD subjects compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is verified, with an accuracy of 95%, that the proposed algorithm can identify a patient's mortality risk one hour before death. Identifying a patient's mortality risk at such an early stage may prevent sudden death if timely and correct treatment is given by the doctor.
Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death
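The time-domain measures named in the abstract (mean RR, SDNN, RMSSD) have standard definitions and can be sketched directly from a sequence of RR intervals. This is a minimal illustration of those formulas, not the paper's implementation; note it uses the population standard deviation, while some tools use the sample (n-1) denominator.

```python
import math

def time_domain_hrv(rr_ms):
    """Compute the time-domain HRV measures named in the abstract from a
    list of RR intervals in milliseconds: mean RR, SDNN, and RMSSD."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: standard deviation of the normal-to-normal (RR) intervals
    # (population formula; some tools divide by n - 1 instead).
    sdnn = math.sqrt(sum((rr - mean_rr) ** 2 for rr in rr_ms) / n)
    # RMSSD: root mean square of successive differences between adjacent RRs.
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean_rr, sdnn, rmssd

# Illustrative RR series (ms); real analyses use long ECG-derived records.
mean_rr, sdnn, rmssd = time_domain_hrv([800, 810, 790, 805])
```

These scalar features (together with the frequency-domain and Poincaré measures) would then form the feature vector fed to the k-NN classifier.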
Procedia PDF Downloads 342