Search results for: marketing theory and applications
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11669


1169 A Study of Smartphone Engagement Patterns of Millennials in India

Authors: Divyani Redhu, Manisha Rathaur

Abstract:

India has emerged as a highly lucrative market for smartphones in a very short span of time. The number of smartphone users is growing massively with each passing day, and the expansion of internet services to the far corners of the nation has given a further push to the smartphone revolution in India. Millennials, also known as Generation Y or the Net Generation, are the generation born between the early 1980s and the mid-1990s (with some definitions extending to the early 2000s). Spanning roughly fifteen years and different social classes, cultures, and continents, millennials can hardly be said to share a unified identity. Still, it cannot be denied that the growing millennial population is not only young but also highly tech-savvy. It is not the appearance of the device that leads us to call it 'smart'; rather, it is the numerous tasks and functions it can perform that have caused its name to evolve into 'smartphone'. Beyond the usual tasks once performed by a simple mobile phone, such as making calls, sending messages, taking photographs, and recording videos, most of our day-to-day tasks are now taken care of by this all-time companion. From alarm clock to note-maker, from watch to radio, from book-reader to reminder, smartphones are present everywhere. The smartphone has become an essential device, particularly for millennials, to communicate not only with friends but also with family, colleagues, and teachers. The study is quantitative in nature: a survey will be conducted in the capital of India, i.e., Delhi, and the National Capital Region (NCR), the metropolitan area covering the entire National Capital Territory of Delhi and urban areas of the states of Haryana, Uttarakhand, Uttar Pradesh, and Rajasthan.
The survey instrument is a questionnaire administered to 200 respondents. The results will primarily focus on the increasing reach of smartphones in India, smartphones as technological innovations and convergent tools, the smartphone usage patterns of millennials in India, the applications they use most, the average time they spend on them, and the impact of smartphones on millennials' personal interactions. In short, both the growth and the potential of smartphones in India remain immense. Very few technologies have given users comparable global exposure, and the smartphone, if not the only such technology, is certainly an immensely effective one.

Keywords: Delhi-NCR, India, millennials, smartphone

Procedia PDF Downloads 140
1168 Understanding the Basics of Information Security: An Act of Defense

Authors: Sharon Q. Yang, Robert J. Congleton

Abstract:

Information security is a broad concept covering issues and concerns about the proper access and use of information on the Internet, including measures and procedures to protect intellectual property and private data from illegal access and online theft, the act of hacking, and the defensive technologies that contest such cybercrimes. As more research and commercial activity is conducted online, cybercrimes have increased significantly, putting sensitive information at risk. Information security has become critically important for organizations and private citizens alike. Hackers scan the Internet for network vulnerabilities and steal data whenever they can. Cybercrimes disrupt daily life, cause financial losses, and instigate fear in the public. Since the start of the pandemic, most data-related cybercrimes have targeted financial or health information held by companies and organizations. Libraries, too, have a strong interest in understanding and adopting information security methods to protect their patron data and copyrighted materials. Yet according to information security professionals, higher education and cultural organizations, including their libraries, are among the least prepared entities for cyberattacks. One recent example is Stevens Institute of Technology in New Jersey in the US, whose network was hacked in 2020, with the hackers demanding a ransom; as a result, the college's network was down for two months, causing serious financial loss. There are other cases in which libraries, colleges, and universities have been targeted for data breaches. To build an effective defense, we need to understand the most common types of cybercrime, including phishing, whaling, social engineering, distributed denial of service (DDoS) attacks, malware and ransomware, as well as hacker profiles.
Our research focuses on each hacking technique and its related defense measures, and on the social background and motives of hackers and hacking. It shows that hacking techniques will continue to evolve as new applications housing information and data on the Internet continue to be developed. Some cybercrimes can be stopped with effective measures, while others present ongoing challenges. It is vital that people understand what they face and the consequences of being unprepared.

Keywords: cybercrimes, hacking technologies, higher education, information security, libraries

Procedia PDF Downloads 134
1167 Knowledge Management Processes as a Driver of Knowledge-Worker Performance in Public Health Sector of Pakistan

Authors: Shahid Razzaq

Abstract:

Governments around the globe have started to take knowledge management dynamics into consideration, with or without conscious realization, when formulating, implementing, and evaluating strategies for public sector organizations and public policy development. The Health Department of Punjab province in Pakistan is striving to deliver quality healthcare services to the community through an efficient and effective service delivery system. Despite this effort, some employee performance issues still exist and pose a challenge to the government. To overcome these issues, the department has taken several steps, including HR strategies, the use of technologies, and a focus on hard issues. This study was therefore undertaken to highlight the importance of a soft issue, knowledge management in its true essence, in tackling these performance issues. The public sector is a rather neglected area within knowledge management, itself a growing multidisciplinary research discipline. The knowledge-based view of the firm asserts that knowledge is the most strategic resource and can yield competitive advantage over competing organizations. In the context of this study, it implies that to improve employee performance, organizations have to enlarge their heterogeneous knowledge bases. The study uses a cross-sectional, quantitative research design. Data were collected from knowledge workers of the Health Department of Punjab, the biggest province of Pakistan, with a total sample size of 341. SmartPLS 3 (Version 2.6) was used for analyzing the data. The analysis revealed that knowledge management processes have a strong impact on knowledge-worker performance, and all hypotheses were accepted. It can thus be concluded that knowledge management activities should be implemented to increase employee performance.
The Health Department of Punjab has introduced knowledge management infrastructure and systems to make knowledge effectively available to service staff. This infrastructure increased knowledge management processes in remote hospitals, basic health units, and care centers, which in turn improved service provision to the public. The study has both theoretical and practical significance. Theoretically, it establishes the relationship between knowledge management and performance in this setting. Practically, it gives public sector organizations and governments insight into the role of knowledge management in employee performance. Public policymakers are therefore strongly advised to implement knowledge management activities to enhance the performance of knowledge workers. The current research validated the substantial role of knowledge management in shaping employee attitudes and behavioral intentions. To the best of the authors' knowledge, its examination of the impact of knowledge management on employee performance in this context constitutes its originality.

Keywords: employee performance, knowledge management, public sector, soft issues

Procedia PDF Downloads 141
1166 Self-Education, Recognition and Well-Being: Insights into Qualitative-Reconstructive Educational Research on the Value of Non-Formal Education in Adolescence

Authors: Sandra Biewers Grimm

Abstract:

International studies such as PISA have shown increasing social inequality in the education system, determined in particular by social origin and migration status. This is especially the case in the Luxembourg school system, whose multilingualism creates challenges for many young people. While the international and national debates on education in the immediate aftermath of the PISA publications focused mainly on the further development of school-based learning venues and formal educational processes, it initially remained largely unclear what role out-of-school learning venues and non-formal and informal learning processes could play in this development. This has changed in the meantime. In both political discourse and the scientific disciplines, the voices have grown louder that draw attention to the important educational function and the enormous educational potential of out-of-school learning places as a response to the crisis of the formal education system, and more than this. Youth work, as an actor in and approach to non-formal education, is particularly in demand here. Owing to its principles of self-education, participation, and openness, it is considered to have special potential for supporting the acquisition of important key competencies. In this context, the study "Educational experiences in non-formal settings" at CCY takes a differentiated look behind the scenes of education-oriented youth work and describes, on the basis of empirical data, what and how young people learn in youth centers and what significance they attach to these educational experiences for their subjective life situation. In this sense, the aim of the study is to reconstruct the subjective educational experiences of young people in Open Youth Work and to explore the value these experiences hold for young people.
In doing so, it enables scientifically founded conclusions about the educational potential of youth work from the user's perspective. The study first defines the concept of education in the context of non-formal education, setting a theoretical framework for the empirical analysis. This socio-educational concept of education differs from the prevailing conception of education in curricular, formal education as the acquisition of knowledge. It also differs from the operationalization of education as competence, or its differentiation into cultural, social, and personal, or into factual, social, and methodological competence, which is often used in the European context and has long been interpreted as a "social science reading of the question of education" (XX). The aim here is to define a "broader" concept of education that goes beyond the normative and educational-policy dimensions of "non-formal education" and includes the classical socio-educational dimensions. The study works with different methods of empirical social research: in addition to ethnographic observation and an online survey, group discussions were conducted with the young people. The presentation gives an insight into the context, methodology, and results of this study.

Keywords: non-formal education, youth research, qualitative research, educational theory

Procedia PDF Downloads 163
1165 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique

Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina

Abstract:

The presented research relates to the development of a recently proposed technique for forming composite materials, such as optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on controlling the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the early 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics that provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) equals the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulation that allow determining these parameters. It is shown that they can be deduced from data on the spatial distributions of diffusant concentration and average crystalline grain size in glass-ceramics samples subjected to ion-exchange treatment. Measurements at no fewer than two temperatures, with two processing times at each temperature, are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li₂O·SiO₂. Cubic samples of the glass-ceramics (6×6×6 mm³) underwent ion exchange in a NaNO₃ salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h).
The ion-exchange processing resulted in vitrification of the glass-ceramics in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples, and their large facets were polished. These slabs were used to find the profiles of diffusant concentration and average crystalline grain size. The concentration profiles were determined from refractive index profiles measured with a Mach–Zehnder interferometer, and the profiles of average grain size were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulations were carried out for different values of the β and γ parameters under all of the above-mentioned ion-exchange conditions. As a result, the temperature dependences of the parameters that provided reliable coincidence of the simulated and experimental data were found. This ensured adequate modeling of the glass-ceramics decrystallization process in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of the crystalline grains.
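The two-parameter picture above (β tied to the solubility of the crystalline component, γ to the ratio of diffusion and grain-dissolution time scales) can be illustrated with a deliberately simplified sketch: explicit Fickian diffusion of a diffusant into a slab, with grains dissolving wherever the local diffusant concentration exceeds a solubility threshold. The equations, parameter values, and function names below are illustrative assumptions, not the authors' published model.

```python
import numpy as np

def simulate_decrystallization(beta=0.3, gamma=5.0, nx=200, nt=20000, L=1.0):
    """Toy 1-D model of ion-exchange-induced decrystallization.
    c(x, t): dimensionless diffusant concentration (explicit Fickian scheme).
    r(x, t): normalized crystalline grain radius; grains dissolve at rate
             gamma * (c - beta) wherever c exceeds the solubility level beta.
    Illustrative only -- not the published theoretical model."""
    dx = L / nx
    dt = 0.4 * dx * dx              # below the explicit-scheme stability limit
    c = np.zeros(nx)
    c[0] = 1.0                      # fixed surface concentration (salt melt)
    r = np.ones(nx)                 # grains initially of uniform unit size
    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        lap[-1] = (c[-2] - c[-1]) / dx**2     # no-flux far boundary
        c[1:] += dt * lap[1:]                 # surface value held fixed
        r -= dt * gamma * np.maximum(c - beta, 0.0)
        np.clip(r, 0.0, None, out=r)
    return c, r

c, r = simulate_decrystallization()
# Grains near the ion-exchanged surface shrink; deep grains stay intact.
```

Fitting β and γ would then amount to matching such simulated grain-size profiles to the micro-Raman measurements at each temperature, in the spirit of the procedure described in the abstract.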

Keywords: diffusion, glass-ceramics, ion exchange, vitrification

Procedia PDF Downloads 269
1164 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research investigates the use of partial least squares (PLS) methodology to address challenges associated with high-dimensional, correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables relative to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where copy number alterations (CNA) across thousands of genomic regions are recorded from cancer patients. PLS is a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from the original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across the three main PLS algorithms and exploring the unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to the challenge of interpreting the predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty, for sparsity, with a Cauchy distribution-based penalty that accounts for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in applications of regression analysis, and ordinary least squares (OLS) regression, the standard method, performs inadequately on high-dimensional, highly correlated data.
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.

Keywords: partial least squares regression, genetics data, negative shrinkage factors, high-dimensional data, highly correlated data

Procedia PDF Downloads 49
1163 Emerging Technologies for Learning: In Need of a Pro-Active Educational Strategy

Authors: Pieter De Vries, Renate Klaassen, Maria Ioannides

Abstract:

This paper reports explorative research into the use of emerging technologies for teaching and learning in higher engineering education. The assumption is that these technologies and applications, which are not yet widely adopted, will help to improve education and thereby actively address the skills mismatch troubling our industries. Technologies such as 3D printing, the Internet of Things, Virtual Reality, and others are in a dynamic state of development, which makes it difficult to grasp their value for education. Moreover, the instruments of current educational research seem ill-suited to assessing the value of such technologies. This explorative research aims to foster an approach that better deals with this new complexity. The need to find out is urgent, because these technologies will soon be dominantly present in all aspects of life, including education. The methodology comprised an inventory of emerging technologies and tools that potentially give way to innovation and are used, or about to be used, in technical universities. The inventory was based on a literature review as well as a review of reports and web resources such as blogs, and included a series of interviews with stakeholders in engineering education and at representative industries. In addition, a number of small experiments were executed with the aim of analyzing the requirements for the use of, in this case, Virtual Reality and the Internet of Things, to better understand the opportunities and limitations in the day-to-day learning environment. The major findings indicate that it is rather difficult to decide on the value of these technologies for education, due to their dynamic state of change, and therefore unpredictability, and the lack of a coherent policy at the institutions.
Most decisions are made by teachers on an individual basis, who in their micro-environments are not equipped to select, test, and ultimately decide on the use of these technologies. Most experience is being gained in industry, where the skills to handle these technologies are in high demand. Industry, though, is worried about the inclination and capability of education to help bridge the skills gap related to the emergence of new technologies. Given the complexity, diversity, speed of development, and decay, education is challenged to develop an approach that can make these technologies work in an integrated fashion. For education to profit fully from the opportunities these technologies offer, it is essential to develop a pro-active strategy and a sustainable approach to framing emerging technologies development.

Keywords: emerging technologies, internet of things, pro-active strategy, virtual reality

Procedia PDF Downloads 191
1162 Absorption Kinetics and Tensile Mechanical Properties of Swollen Elastomer/Carbon Black Nanocomposites Using Typical Solvents

Authors: F. Elhaouzi, H. Lahlali, M. Zaghrioui, I. El Aboudi, A. BelfKira, A. Mdarhri

Abstract:

The effect of the physicochemical properties of solvents on the transport process and mechanical properties of elastomeric nanocomposite materials is reported. The investigated samples consist of a semi-crystalline ethylene-co-butyl acrylate polymer filled with hard, spherical carbon black (CB) nanoparticles. The swelling behavior was studied by immersing the dried samples in selected solvents at room temperature for 2 days. For this purpose, two methyl derivatives of benzene, i.e., toluene and xylene, were used to probe the dependence of the absorption kinetics on molecular mass and molar volume. The mass gain relative to the mass of the dry material was recorded at specific times to probe the absorption kinetics. The transport of solvent molecules in these filled elastomeric composites follows a Fickian diffusion mechanism. Additionally, the swelling ratio and the diffusivity coefficient deduced from the Fickian law are found to decrease with CB concentration. These results indicate that the CB nanoparticles increase the effective path length for diffusion and consequently limit solvent absorption by occupying free volumes in the material. In line with the physicochemical properties of the two solvents, diffusion is found to be more pronounced for toluene molecules, owing to their lower molecular weight and molar volume compared with xylene. Differential scanning calorimetry (DSC) and X-ray photoelectron spectroscopy (XPS) were also used to probe any change in the chemical composition of the swollen samples. Mechanically speaking, the stress-strain curves of uniaxial tensile tests before and after swelling highlight a remarkable decrease in the strength and elongation at break of the swollen samples. This behavior can be attributed to the decrease in load transfer density between the matrix and the CB in the presence of the solvent.
We believe that the results reported in this experimental investigation can be useful for demanding applications, e.g., tires and sealing rubber.
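A short-time Fickian analysis of the kind described above can be sketched as follows: for a plane sheet of thickness L, the sorption curve obeys Mt/M∞ ≈ (4/L)·√(Dt/π) at early times, so D follows from the initial slope of Mt/M∞ against √t. The thickness, times, and diffusivity below are hypothetical numbers chosen to check the recovery, not the paper's measurements.

```python
import numpy as np

def fickian_diffusivity(t, mass_gain, m_inf, thickness):
    """Estimate the diffusion coefficient D from early-time sorption data
    for a plane sheet, using the short-time Fickian approximation
        M_t / M_inf ~= (4 / L) * sqrt(D * t / pi).
    t in seconds, thickness L in cm -> D in cm^2/s."""
    frac = np.asarray(mass_gain) / m_inf
    early = frac < 0.5                       # region where the approximation holds
    slope = np.polyfit(np.sqrt(np.asarray(t)[early]), frac[early], 1)[0]
    return np.pi * (slope * thickness / 4.0) ** 2

# Hypothetical sorption data generated from a known D to verify the fit.
D_true, L = 2.0e-7, 0.2                      # cm^2/s, cm
t = np.linspace(60, 3600, 40)                # s
frac = (4.0 / L) * np.sqrt(D_true * t / np.pi)
D_est = fickian_diffusivity(t, frac, 1.0, L)
print(f"Recovered D = {D_est:.2e} cm^2/s")
```

Applied to measured mass-gain curves, the same slope comparison between toluene- and xylene-swollen samples, and across CB loadings, would quantify the trends the abstract reports.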

Keywords: nanocomposite, absorption kinetics, mechanical behavior, diffusion, modelling, XPS, DSC

Procedia PDF Downloads 352
1161 The Association of Southeast Asian Nations (ASEAN) and the Dynamics of Resistance to Sovereignty Violation: The Case of East Timor (1975-1999)

Authors: Laura Southgate

Abstract:

The Association of Southeast Asian Nations (ASEAN), as well as much of the scholarship on the organisation, celebrates its ability to uphold the principle of regional autonomy, understood as upholding the norm of non-intervention by external powers in regional affairs. Yet, in practice, this principle has been repeatedly violated. The dichotomy between rhetoric and practice suggests an interesting avenue for further study. The East Timor crisis (1975-1999) has been selected as a case study to test the dynamics of ASEAN state resistance to sovereignty violation in two distinct timeframes: Indonesia's initial invasion of the territory in 1975, and the ensuing humanitarian crisis in 1999, which resulted in a UN-mandated, Australian-led peacekeeping intervention force. These time periods demonstrate variation in the dependent variable, and it is necessary to observe covariation in order to derive observations in support of a causal theory. To establish covariation, my independent variable is a continuous variable characterised by variation in convergence of interest; a change in this variable should change the value of the dependent variable, thus establishing causal direction. This paper investigates the history of ASEAN's relationship to the norm of non-intervention. It offers an alternative understanding of ASEAN's history, written in terms of the relationship between a key ASEAN state, which I call a 'vanguard state', and selected external powers. The paper considers when ASEAN resistance to sovereignty violation has succeeded and when it has failed. It contends that variation in the outcomes associated with vanguard state resistance to sovereignty violation is best explained by the level of interest convergence between the ASEAN vanguard state and designated external actors.
Evidence will be provided to support the hypothesis that in 1999, ASEAN’s failure to resist violations to the sovereignty of Indonesia was a consequence of low interest convergence between Indonesia and the external powers. Conversely, in 1975, ASEAN’s ability to resist violations to the sovereignty of Indonesia was a consequence of high interest convergence between Indonesia and the external powers. As the vanguard state, Indonesia was able to apply pressure on the ASEAN states and obtain unanimous support for Indonesia’s East Timor policy in 1975 and 1999. However, the key factor explaining the variance in outcomes in both time periods resides in the critical role played by external actors. This view represents a serious challenge to much of the existing scholarship that emphasises ASEAN’s ability to defend regional autonomy. As these cases attempt to show, ASEAN autonomy is much more contingent than portrayed in the existing literature.

Keywords: ASEAN, East Timor, intervention, sovereignty

Procedia PDF Downloads 358
1160 A Simulated Evaluation of Model Predictive Control

Authors: Ahmed AlNouss, Salim Ahmed

Abstract:

Process control refers to techniques for controlling the variables in a process in order to maintain them at their desired values. Advanced process control (APC) is a broad term within the control domain that covers different kinds of process control and control-related tools, for example, model predictive control (MPC), statistical process control (SPC), fault detection and classification (FDC), and performance assessment. APC is often used to solve multivariable control problems, and model predictive control (MPC) is one of only a few advanced control methods used successfully in industrial control applications. Advanced control is expected to bring many benefits to plant operation; however, the extent of the benefits is plant-specific, and the application requires a large investment. This calls for an analysis of the expected benefits before the control is implemented. In a real plant, simulation studies are carried out along with some experimentation to determine the improvement in plant performance due to advanced control. In this research, such an exercise is undertaken to assess the need for APC application. The main objectives of the paper are as follows: (1) to apply MPC to a number of simulations set up to demonstrate the need for MPC by comparing its performance with that of proportional-integral-derivative (PID) controllers; (2) to study the effect of controller parameters on control performance; and (3) to develop appropriate performance indices (PI) to compare the performance of different controllers and a novel way to present a controller's tuning map. These objectives were achieved by applying a PID controller and a special type of MPC, namely dynamic matrix control (DMC), to the multi-tank process simulated in Loop-Pro. The controller performance was then evaluated while varying the controller parameters.
This performance evaluation was based on indices related to the difference between the set point and the process variable, in order to compare the two controllers. The same principle was applied to the continuous stirred tank heater (CSTH) and continuous stirred tank reactor (CSTR) processes simulated in MATLAB; for these processes, programs were written to evaluate the performance of the PID and MPC controllers. Finally, the performance indices, along with their controller parameters, were plotted using SigmaPlot. As a result, the improvement in the performance of the control loops was quantified using relevant indices to justify the need for and importance of advanced process control. It was also shown that, with appropriate indices, a predictive controller can improve the performance of the control loop significantly.
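Indices built on the set-point/process-variable error of the kind described above are commonly defined as IAE, ISE, and ITAE. A minimal sketch, with invented first-order step responses standing in for the simulated control loops, might look like this:

```python
import numpy as np

def performance_indices(t, sp, pv):
    """Loop-performance indices built on the error e = sp - pv for a
    uniformly sampled response: IAE (integral of absolute error),
    ISE (integral of squared error), ITAE (time-weighted IAE)."""
    t = np.asarray(t, dtype=float)
    e = np.asarray(sp, dtype=float) - np.asarray(pv, dtype=float)
    dt = t[1] - t[0]                         # uniform sampling assumed
    return {
        "IAE": float(np.sum(np.abs(e)) * dt),
        "ISE": float(np.sum(e * e) * dt),
        "ITAE": float(np.sum(t * np.abs(e)) * dt),
    }

# Hypothetical step responses toward a set point of 1.0:
# a sluggish loop versus a tighter, faster-settling one.
t = np.linspace(0.0, 10.0, 1000)
sluggish = 1.0 - np.exp(-t / 2.0)
tight = 1.0 - np.exp(-t / 0.5)
print(performance_indices(t, 1.0, sluggish))   # larger indices
print(performance_indices(t, 1.0, tight))      # smaller indices
```

Evaluating such indices over a grid of controller parameters, and plotting the result, is one simple way to produce the kind of tuning map the abstract mentions.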

Keywords: advanced process control (APC), control loop, model predictive control (MPC), proportional integral derivative (PID), performance indices (PI)

Procedia PDF Downloads 407
1159 Luminescent Properties of Sm³⁺-Doped Silica Nanophosphor Synthesized from Highly Active Amorphous Nanosilica Derived from Rice Husk

Authors: Celestine Mbakaan, Iorkyaa Ahemen, A. D. Onoja, A. N. Amah, Emmanuel Barki

Abstract:

Rice husk (RH) is the natural sheath that forms around and covers the rice grain. The husk is composed of hard materials, including opaline silica and lignin, and separates from the grain during milling. RH contains approximately 15 to 28 wt% silica in hydrated amorphous form. Nanosilica was derived from the husks of different rice varieties after pre-treating the husk with HCl and calcining at 550°C. The husk of the Osi rice variety produced the highest silica yield, and further pretreatment with 0.8 M H₃PO₄ acid removed more mineral impurities; the silica obtained from this variety was therefore selected as the host matrix for doping with Sm³⁺ ions. Rice husk silica doped with samarium (RH-SiO₂:xSm³⁺, x = 0.01, 0.05, and 0.1 molar ratios) nanophosphors were synthesized via the sol-gel method. Structural analysis by X-ray diffraction (XRD) reveals an amorphous structure, while the surface morphology, as revealed by SEM and TEM, shows agglomerates of nano-sized spherical particles with an average particle size of 21 nm. The nanophosphor has a large surface area of 198.0 m²/g, and Fourier transform infrared spectroscopy (FT-IR) shows only a single strong, broad absorption band centered at 1063 cm⁻¹. Diffuse reflectance spectroscopy (DRS) shows strong absorptions at 319, 345, 362, 375, 401, and 474 nm, which can be exclusively assigned to the ⁶H₅/₂ → ⁴F₁₁/₂, ³H₇/₂, ⁴F₉/₂, ⁴D₅/₂, ⁴K₁₁/₂, and ⁴M₁₅/₂ + ⁴I₁₁/₂ transitions of Sm³⁺, respectively. The photoluminescence excitation spectra show that near-UV and blue LEDs can effectively be used as excitation sources to produce red-orange and yellow-orange emission from Sm³⁺-doped RH-SiO₂ nanophosphors. Under excitation wavelengths of 365 and 400 nm, the photoluminescence (PL) of the nanophosphors gives three main lines, at 568, 605, and 652 nm, attributed to intra-4f shell transitions from the excited level to the ground levels.
The 1931 CIE coordinate diagram confirms the emission of red-orange light by RH-SiO₂:xSm³⁺ (x = 0.01 and 0.1 molar ratios) and yellow-orange light by RH-SiO₂:0.05Sm³⁺. Finally, the results show that RH-SiO₂ doped with samarium (Sm³⁺) ions is applicable in display applications.
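The CIE 1931 coordinates referred to above can be estimated from the relative intensities of the main emission lines. The sketch below is purely illustrative: the intensity weights are hypothetical, and the color-matching values are the standard CIE 1931 table entries at the 10 nm grid points nearest the reported peaks (568, 605, 652 nm).

```python
# Illustrative CIE 1931 chromaticity estimate from discrete emission lines.
# Intensity weights are hypothetical; (x_bar, y_bar, z_bar) are standard
# CIE 1931 color-matching table values at the nearest 10 nm grid points.
cmf = {
    570: (0.7621, 0.9520, 0.0021),
    600: (1.0622, 0.6310, 0.0008),
    650: (0.2835, 0.1070, 0.0000),
}
intensities = {570: 0.8, 600: 1.0, 650: 0.5}   # hypothetical relative PL

X = sum(intensities[w] * cmf[w][0] for w in cmf)
Y = sum(intensities[w] * cmf[w][1] for w in cmf)
Z = sum(intensities[w] * cmf[w][2] for w in cmf)
x, y = X / (X + Y + Z), Y / (X + Y + Z)
print(round(x, 3), round(y, 3))   # lands in the orange region of the diagram
```

With these assumed weights, the (x, y) point falls in the orange region, consistent with the reported red-orange to yellow-orange emission.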

Keywords: luminescence, nanosilica, nanophosphors, Sm³⁺

Procedia PDF Downloads 133
1158 The Structuring of the Brazilian Innovation Economy and an Institutional Proposal for the Legal Management of Technological Risks in Global Conformity

Authors: Daniela Pellin, Wilson Engelmann

Abstract:

Brazil has sought to accelerate its development through technology and innovation in response to global influences, which it has absorbed into its internal management practices. To this end, it enacted the Brazilian Innovation Law 13.243/2016. However, the Law overestimates economic aspects, and its application will not take stakeholders or technological risks into account, because these receive no legal treatment. Economic exploitation and technological risks must be controlled within the limits of the democratic system, in order to achieve better social development and to help economic agents make decisions in conformity with global directions. The research understands this to be a problem worth facing, given the social particularities of the country, because the North American Triple Helix Theory, consolidated in developed countries, has been imported literally, with negative consequences when applied in developing countries. Because of this symptomatic scenario, adjustments must be made to the management of the Law in line with social democratic interests, so as to increase the country's development. To this end, the Government will have to adopt certain conducts, side by side with universities, civil society, and companies: promoting informational transparency, pursuing partnerships, creating a Comfort Letter document to ensure the operation, jointly elaborating a Manual of Good Practices, and ensuring accountability and data dissemination. The universities, in turn, must promote informational transparency, draw up partnership contracts, generate revenue, and develop information. In addition, civil society must analyze the proposals received and issue opinions on them. Finally, companies must provide public, transparent information about investments, economic benefits, risks, and the innovations they manufacture.
As its general objective, the research intends to demonstrate that the deployment of the triple helix can be efficient if the innovative decision-making process passes through institutional logic. As a specific objective, it argues that the American influence must undergo some modifications to better suit the economic-legal incentives and so potentiate the development of the social system. The hypothesis is that an institutional model for application to the legal system can be elaborated from the emerging characteristics of the country, in such a way that technological risks can be foreseen and global conformity achieved, with attention to the full development of society as proposed by the researchers. The method of approach will be systemic-constructivist, with a bibliographical review and data collection and analysis, leading to the construction of an institutional and democratic model for the management of the Law.

Keywords: development, governance of law, institutionalization, triple helix

Procedia PDF Downloads 140
1157 An Analysis System for Integrating High-Throughput Transcript Abundance Data with Metabolic Pathways in Green Algae

Authors: Han-Qin Zheng, Yi-Fan Chiang-Hsieh, Chia-Hung Chien, Wen-Chi Chang

Abstract:

As the most important non-vascular plants, algae have many research applications, including their high species diversity, use as biofuel sources, adsorption of heavy metals and, after processing, health supplements. With the increasing availability of next-generation sequencing (NGS) data for algae genomes and transcriptomes, an integrated resource for retrieving gene expression data and metabolic pathways is essential for functional analysis and systems biology in algae. However, gene expression profiles and biological pathways are displayed separately in current resources, making it impossible to search those databases directly for cellular response mechanisms. Therefore, this work develops a novel AlgaePath database to retrieve gene expression profiles efficiently under various conditions in numerous metabolic pathways. AlgaePath, a web-based database, integrates gene information, biological pathways, and NGS datasets in Chlamydomonas reinhardtii and Neodesmus sp. UTEX 2219-4. Users can identify gene expression profiles and pathway information by using five query pages: Gene Search, Pathway Search, Differentially Expressed Genes (DEGs) Search, Gene Group Analysis, and Co-Expression Analysis. The gene expression data of 45 and 4 samples can be obtained directly on pathway maps in C. reinhardtii and Neodesmus sp. UTEX 2219-4, respectively. Genes that are differentially expressed between two conditions can be identified in the DEGs Search. Furthermore, the Gene Group Analysis of AlgaePath includes pathway enrichment analysis and can easily compare the gene expression profiles of functionally related genes in a map. Finally, the Co-Expression Analysis provides co-expressed transcripts of a target gene. The analysis results provide a valuable reference for designing further experiments and elucidating critical mechanisms from high-throughput data.
More than an effective interface for clarifying transcript response mechanisms in different metabolic pathways under various conditions, AlgaePath is also a data-mining system for identifying critical mechanisms based on high-throughput sequencing.
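The two analyses named above, DEG screening and co-expression, reduce to simple computations on the expression matrix. The sketch below uses a hypothetical two-gene, two-condition table, not AlgaePath data, to illustrate a log2 fold change and a Pearson co-expression score.

```python
import math

# Hypothetical transcript-abundance table (e.g., normalized read counts):
# two genes, two conditions, two replicates each.
expression = {
    "geneA": {"control": [10.0, 12.0], "stress": [40.0, 44.0]},
    "geneB": {"control": [100.0, 96.0], "stress": [50.0, 46.0]},
}

def log2_fold_change(gene, cond_a="control", cond_b="stress"):
    """Log2 ratio of mean abundance between two conditions (DEG screening)."""
    a = sum(expression[gene][cond_a]) / len(expression[gene][cond_a])
    b = sum(expression[gene][cond_b]) / len(expression[gene][cond_b])
    return math.log2(b / a)

def pearson(x, y):
    """Pearson correlation, the usual basis for co-expression analysis."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

profile_a = expression["geneA"]["control"] + expression["geneA"]["stress"]
profile_b = expression["geneB"]["control"] + expression["geneB"]["stress"]
print(round(log2_fold_change("geneA"), 2))      # up-regulated under stress
print(round(pearson(profile_a, profile_b), 2))  # anti-correlated profiles
```

A real pipeline would add replicate-aware statistics (e.g., a significance test), but the fold-change and correlation scores above are the core quantities behind the DEGs Search and Co-Expression Analysis pages.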

Keywords: next-generation sequencing (NGS), algae, transcriptome, metabolic pathway, co-expression

Procedia PDF Downloads 407
1156 Diminishing Constitutional Hyper-Rigidity by Means of Digital Technologies: A Case Study on E-Consultations in Canada

Authors: Amy Buckley

Abstract:

The purpose of this article is to assess the problem of constitutional hyper-rigidity to consider how it and the associated tensions with democratic constitutionalism can be diminished by means of using digital democratic technologies. In other words, this article examines how digital technologies can assist us in ensuring fidelity to the will of the constituent power without paying the price of hyper-rigidity. In doing so, it is impossible to ignore that digital strategies can also harm democracy through, for example, manipulation, hacking, ‘fake news,’ and the like. This article considers the tension between constitutional hyper-rigidity and democratic constitutionalism and the relevant strengths and weaknesses of digital democratic strategies before undertaking a case study on Canadian e-consultations and drawing its conclusions. This article observes democratic constitutionalism through the lens of the theory of deliberative democracy to suggest that the application of digital strategies can, notwithstanding their pitfalls, improve a constituency’s amendment culture and, thus, diminish constitutional hyper-rigidity. Constitutional hyper-rigidity is not a new or underexplored concept. At a high level, a constitution can be said to be ‘hyper-rigid’ when its formal amendment procedure is so difficult to enact that it does not take place or is limited in its application. This article claims that hyper-rigidity is one problem with ordinary constitutionalism that fails to satisfy the principled requirements of democratic constitutionalism. Given the rise and development of technology that has taken place since the Digital Revolution, there has been a significant expansion in the possibility for digital democratic strategies to overcome the democratic constitutionalism failures resulting from constitutional hyper-rigidity. 
Typically, these strategies have included, inter alia, e-consultations, e-voting systems, and online polling forums, all of which significantly improve the ability of politicians and judges to directly obtain the opinion of constituents on any number of matters. This article expands on the application of these strategies through its Canadian e-consultation case study and presents them as a solution to poor amendment culture and, consequently, constitutional hyper-rigidity. Hyper-rigidity is a common descriptor of many written and unwritten constitutions, including the United States, Australian, and Canadian constitutions, to name just a few. This article undertakes a case study on Canada in particular, as it is a jurisdiction less commonly cited in the academic literature concerned with hyper-rigidity, and because Canada has, to some extent, championed the use of e-consultations. In Part I of this article, I identify the problem, being that the consequence of constitutional hyper-rigidity is in tension with the principles of democratic constitutionalism. In Part II, I identify and explore a potential solution, the implementation of digital democratic strategies as a means of reducing constitutional hyper-rigidity. In Part III, I explore Canada’s e-consultations as a case study for assessing whether digital democratic strategies do, in fact, improve a constituency’s amendment culture, thus reducing constitutional hyper-rigidity and the associated tension with the principles of democratic constitutionalism. The idea is to run a case study and then assess whether its conclusions can be generalised.

Keywords: constitutional hyper-rigidity, digital democracy, deliberative democracy, democratic constitutionalism

Procedia PDF Downloads 76
1155 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment

Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin

Abstract:

Software Defined Networking (SDN) is a new networking paradigm. It is designed to facilitate the way of managing, measuring, debugging, and controlling the network dynamically, and to make it suitable for modern applications. Generally, measurement methods can be divided into two categories: active and passive. The active measurement method injects test packets into the network in order to monitor their behaviour (the ping tool is an example), while the passive method monitors the traffic for the purpose of deriving measurement values. Both methods are useful for collecting traffic statistics and monitoring network traffic. Although there has been work focusing on measuring traffic statistics in the SDN environment, it was only meant for measuring packet and byte rates for non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes in a certain time, and to facilitate obtaining statistics for both web traffic and non-web traffic. Web traffic refers to HTTP requests at the application layer, while non-web traffic refers to ICMP and TCP requests. Thus, this work is more comprehensive than previous works. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and in the Wireshark interface. The statistics displayed on the CLI and in Wireshark include the type of protocol, the number of bytes, and the number of packets, among others. Besides, this module shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated from and to hosts.
To carry out this work effectively, our Python module sends a statistics request message to the switch every five seconds, requesting its current port and flow statistics; the switch replies with the required information in a statistics reply message. Thus, the POX controller is notified and updated about any changes in the entire network within a very short time. The aim of this study is therefore to prepare a list of the important statistics elements collected from the whole network, to be used in further research; particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, such as Distributed Denial of Service (DDoS) attacks.
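The web/non-web classification step described above can be sketched in isolation. The snippet below is a standalone illustration with hypothetical flow records, not the POX module itself: in the actual module, the protocol, port, packet, and byte fields would come from the flow-stats reply messages delivered to the controller.

```python
# Standalone sketch of the flow-classification step. The flow records
# below are hypothetical stand-ins for fields parsed from an OpenFlow
# flow-stats reply inside the POX controller.
WEB_PORTS = {80}   # HTTP, the paper's definition of web traffic

flows = [
    {"proto": "tcp",  "dst_port": 80,   "packets": 120, "bytes": 64000},
    {"proto": "tcp",  "dst_port": 22,   "packets": 15,  "bytes": 2100},
    {"proto": "icmp", "dst_port": None, "packets": 4,   "bytes": 392},
]

def summarize(flows):
    """Aggregate packet/byte counters into web and non-web totals."""
    stats = {"web": {"packets": 0, "bytes": 0},
             "non_web": {"packets": 0, "bytes": 0}}
    for f in flows:
        is_web = f["proto"] == "tcp" and f["dst_port"] in WEB_PORTS
        key = "web" if is_web else "non_web"
        stats[key]["packets"] += f["packets"]
        stats[key]["bytes"] += f["bytes"]
    return stats

print(summarize(flows))
```

In the real module, this aggregation would run on each statistics reply (every five seconds), so totals per category can also be turned into packet and byte rates by differencing consecutive replies.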

Keywords: mininet, OpenFlow, POX controller, SDN

Procedia PDF Downloads 235
1154 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered. Further, this approach can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the mesoscale approach to modeling textile composites explicitly considers the internal geometry of the reinforcing tows; thus, their interaction and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites.
For the mesoscale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used: one based on the finite element method, and another that makes use of an embedded semi-analytical approach. The goal is the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.
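The homogenization idea underlying the treatment of tows as effective materials can be illustrated with the simplest possible example. The sketch below is not the authors' damage model: it merely bounds the effective elastic moduli of a homogenized fiber/matrix tow with the classical Voigt and Reuss rules of mixtures, using assumed carbon/epoxy properties.

```python
# Illustrative homogenization sketch (not the paper's model): bound the
# effective axial and transverse moduli of a fiber tow with the Voigt
# (iso-strain) and Reuss (iso-stress) rules of mixtures.
E_fiber, E_matrix = 230.0, 3.5   # GPa; assumed carbon fiber / epoxy values
v_f = 0.6                        # assumed fiber volume fraction

# Voigt upper bound: constituents strain together along the fibers.
E_axial = v_f * E_fiber + (1 - v_f) * E_matrix
# Reuss lower bound: constituents carry the same stress transversely.
E_transverse = 1.0 / (v_f / E_fiber + (1 - v_f) / E_matrix)

print(round(E_axial, 1), round(E_transverse, 2))
```

The large gap between the two bounds is exactly why the meso- and micro-scale models in the paper resolve the tow geometry with anisotropic constitutive laws instead of relying on such closed-form estimates.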

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 354
1153 Preparation of Sorbent Materials for the Removal of Hardness and Organic Pollutants from Water and Wastewater

Authors: Thanaa Abdel Moghny, Mohamed Keshawy, Mahmoud Fathy, Abdul-Raheim M. Abdul-Raheim, Khalid I. Kabel, Ahmed F. El-Kafrawy, Mahmoud Ahmed Mousa, Ahmed E. Awadallah

Abstract:

Ecological pollution is of great concern for human health and the environment. Numerous organic and inorganic pollutants discharged into water cause carcinogenic or toxic effects for humans and other life forms. In this respect, this work aims to treat water contaminated by organic and inorganic waste using sorbents based on polystyrene. Two different series of adsorbent materials were therefore prepared. The first included polymeric sorbents prepared from the reaction of styrene acrylate ester and alkyl acrylate. The second involved the synthesis of composite ion exchange resins of waste polystyrene and amorphous carbon thin film (WPS/ACTF) by solvent evaporation using microemulsion polymerization. The produced ACTF/WPS nanocomposite was sulfonated to produce cation exchange resins, the ACTF/WPSS nanocomposite. The sorbents of the first series were characterized using FTIR, ¹H NMR, and gel permeation chromatography. The thermal properties of the cross-linked sorbents were investigated using thermogravimetric analysis, and the morphology was characterized by scanning electron microscopy (SEM). The removal of organic pollutants was determined through absorption tests in various organic solvents. The chemical and crystalline structure of the nanocomposites of the second series was established by FTIR spectra, X-ray diffraction, thermal analysis, and SEM and TEM analysis, to study the morphology of the resins and of the ACTF assembled with the polystyrene chains. It is found that the composite ACTF/WPSS resins are thermally stable and show higher chemical stability than the WPSS ion exchange resins. The composite resin was evaluated for calcium hardness removal, and it is evident that the ACTF/WPSS composite removes inorganic pollutants more effectively than the WPSS resin. We therefore recommend the nanocomposite resin for new potential applications in water treatment processes.

Keywords: nanocomposite, sorbent materials, waste water, waste polystyrene

Procedia PDF Downloads 429
1152 Profiling the Volatile Metabolome in Pear Leaves with Different Resistance to the Pear Psylla Cacopsylla bidens (Sulc) and Characterization of Phenolic Acid Decarboxylase

Authors: Mwafaq Ibdah, Mossab Yahyaa, Dor Rachmany, Yoram Gerchman, Doron Holland, Liora Shaltiel-Harpaz

Abstract:

Pear psylla is the most important pest of pear in all pear-growing regions in Asia, Europe, and the USA. Pear psylla damages pears in several ways: high-density populations can cause premature leaf and fruit drop, diminish plant growth, and reduce fruit size. In addition, their honeydew promotes sooty mold on leaves and russeting on fruit. Pear psyllas are also considered vectors of pear pathogens such as Candidatus Phytoplasma pyri, which causes pear decline and can lead to loss of crop and tree vigor, and sometimes loss of trees. Psylla control is a major obstacle to efficient integrated pest management. Recently, we identified two naturally resistant pear accessions (Py.760-261 and Py.701-202) in the Newe Ya’ar live collection. GC-MS volatile metabolic profiling identified several volatile compounds common in these accessions but lacking, or much less common, in a sensitive accession, the commercial Spadona variety. Among these volatiles were styrene and its derivatives. When the resistant accessions were used as inter-stock, the volatile compounds appeared in the leaves of the commercial Spadona scion, which showed reduced susceptibility to pear psylla. In laboratory experiments, applications of some of these volatile compounds were very effective against psylla eggs, nymphs, and adults. The genes and enzymes involved in the specific reactions that lead to the biosynthesis of styrene in plants are unknown. We have identified a phenolic acid decarboxylase that catalyzes the formation of p-hydroxystyrene, which occurs as a styrene analog in resistant pear genotypes. The His-tagged, affinity-purified, E. coli-expressed pear PyPAD1 protein could decarboxylate p-coumaric acid and ferulic acid to p-hydroxystyrene and 3-methoxy-4-hydroxystyrene, with the highest activity toward p-coumaric acid.
Expression analysis of the PyPAD gene revealed that it is expressed as expected, i.e., highly when styrene levels and psylla resistance are high.

Keywords: pear psylla, volatiles, GC-MS, resistance

Procedia PDF Downloads 147
1151 Development of a Framework for Assessment of Market Penetration of Oil Sands Energy Technologies in Mining Sector

Authors: Saeidreza Radpour, Md. Ahiduzzaman, Amit Kumar

Abstract:

Alberta’s mining sector consumed 871.3 PJ in 2012, which is 67.1% of the energy consumed in the industrial sector and about 40% of all the energy consumed in the province of Alberta. Natural gas, petroleum products, and electricity supplied 55.9%, 20.8%, and 7.7%, respectively, of the total energy use in this sector. Oil sands mining and upgrading to crude oil make up most of the mining-sector activity in Alberta. Crude oil is produced from the oil sands either by in situ methods or by the mining and extraction of bitumen from oil sands ore. In this research, the factors affecting oil sands production have been assessed, and a framework has been developed for the market penetration of new, efficient technologies in this sector. Oil sands production is a complex function of many different factors, broadly categorized into technical, economic, political, and global clusters. The results of the statistical analysis developed and implemented in this research rank the key factors affecting oil sands production in Alberta as follows: global energy consumption (94% consistency), global crude oil price (86% consistency), and crude oil exports (80% consistency). A framework for modeling oil sands energy technologies’ market penetration (OSETMP) has been developed to cover the related technical, economic, and environmental factors in this sector. It has been assumed that the impact of political and social constraints is reflected in the model by changes in the global oil price or the crude oil price in Canada.
The market shares of novel in situ mining technologies with low energy and water use are assessed and calculated in the market penetration framework. These technologies include: 1) partial upgrading; 2) liquid addition to steam to enhance recovery (LASER); 3) solvent-assisted process (SAP), also called solvent-cyclic steam-assisted gravity drainage (SC-SAGD); 4) cyclic solvent; 5) heated solvent; 6) wedge well; 7) enhanced modified steam and gas push (eMSAGP); 8) electro-thermal dynamic stripping process (ET-DSP); 9) Harris electro-magnetic heating applications (EMHA); and 10) paraffin froth separation. The results of the study will show the penetration profile of these technologies over a long-term planning horizon.
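Market-penetration frameworks of this kind typically rest on a diffusion model, the Bass model being the standard choice. The sketch below is illustrative only: the innovation and imitation coefficients are assumed values, not the calibrated OSETMP parameters.

```python
# Minimal Bass diffusion sketch (assumed coefficients, not the paper's
# calibrated values): F(t) is the fraction of the potential market that
# has adopted a technology by year t.
import math

p, q = 0.03, 0.38   # assumed innovation and imitation coefficients

def bass_cumulative(t):
    """Cumulative adoption fraction F(t) of the Bass model."""
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

for year in (0, 5, 10, 20):
    print(year, round(bass_cumulative(year), 3))
```

Calibrating p and q against historical uptake of comparable in situ technologies would yield the S-shaped penetration profiles the study reports over its long-term planning horizon.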

Keywords: appliances efficiency improvement, diffusion models, market penetration, residential sector

Procedia PDF Downloads 330
1150 Structural Development and Multiscale Design Optimization of Additively Manufactured Unmanned Aerial Vehicle with Blended Wing Body Configuration

Authors: Malcolm Dinovitzer, Calvin Miller, Adam Hacker, Gabriel Wong, Zach Annen, Padmassun Rajakareyar, Jordan Mulvihill, Mostafa S.A. ElSayed

Abstract:

The research work presented in this paper is developed by the Blended Wing Body (BWB) Unmanned Aerial Vehicle (UAV) team, a fourth-year capstone project at the Carleton University Department of Mechanical and Aerospace Engineering. Here, a clean-sheet UAV with a BWB configuration is designed and optimized using a Multiscale Design Optimization (MSDO) approach employing lattice materials, taking into consideration design-for-additive-manufacturing constraints. The BWB-UAV is being developed with a mission profile designed for surveillance purposes with a minimum payload of 1000 grams. To demonstrate the design methodology, a single design loop of a sample rib from the airframe is shown in detail. This includes the conceptual design, materials selection, experimental characterization and residual thermal stress distribution analysis of additively manufactured materials, manufacturing constraint identification, critical load computations, stress analysis, and design optimization. A dynamic turbulent critical load case was identified, composed of a 1-g static maneuver with an incremental Power Spectral Density (PSD) gust, which was used as a deterministic design load case for the design optimization. A 2D flat-plate Doublet Lattice Method (DLM) was used to simulate the aerodynamics in the aeroelastic analysis. The aerodynamic results were verified against a 3D CFD analysis applying Spalart-Allmaras and SST k-omega turbulence models to the rigid UAV, and against a vortex lattice method applied in the OpenVSP environment. Design optimization of a single rib was conducted using topology optimization as well as MSDO. Compared to a solid rib, weight savings of 36.44% and 59.65% were obtained for the topology optimization and the MSDO, respectively. These results suggest that MSDO is an acceptable alternative to topology optimization in weight-critical applications while preserving the functional requirements.

Keywords: blended wing body, multiscale design optimization, additive manufacturing, unmanned aerial vehicle

Procedia PDF Downloads 376
1149 Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of over Segmentation

Authors: Seynabou Toure, Oumar Diop, Kidiyo Kpalma, Amadou S. Maiga

Abstract:

Color and texture are the two most determinant elements for the perception and recognition of objects in an image. For this reason, color and texture analysis find a large field of application, for example in image classification and segmentation. However, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains of image classification: face recognition, industrial inspection, food science, and medical imaging, among others. Taking color into account in the definition of these descriptors makes it possible to characterize images better. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors measure separately different parts of the electromagnetic spectrum: the visible ones and even those invisible to the human eye. The amounts of light reflected by the earth in these spectral bands are then transformed into grayscale images. The primary natural colors Red (R), Green (G), and Blue (B) are then used in mixtures of different spectral bands in order to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works have investigated the effect of using different color spaces for color texture classification. However, the selection of the best-performing color space for land-sea segmentation is an open question. Its resolution may bring considerable improvements in certain applications like coastline detection, where the detection result depends strongly on the performance of the land-sea segmentation. The aim of this paper is to present the results of a study conducted on different color spaces in order to show the best-performing color space for land-sea segmentation.
In this sense, an experimental analysis is carried out using five different color spaces (RGB, XYZ, Lab, HSV, YCbCr). For each color space, the Haar wavelet decomposition is used to extract different color-texture features. These features are then used for classification based on Fusion of Over Segmentation (FOOS); this allows segmentation of the land part from the sea. Analysis of the different results of this study shows that the HSV color space gives the best classification performance when using color and texture features, which is perfectly coherent with the results presented in the literature.
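The Haar color-texture features mentioned above can be sketched without any wavelet library. The snippet below, an illustration rather than the authors' exact pipeline, performs a single-level 2D Haar decomposition of one hypothetical color channel and reports per-sub-band energies, the kind of feature vector fed to the classification stage.

```python
# Sketch of Haar color-texture feature extraction (illustrative, not the
# authors' exact pipeline): one-level 2D Haar transform of one channel,
# with mean squared coefficients per sub-band used as texture features.
def haar2d(channel):
    """One-level 2D Haar transform of a 2^n x 2^n list-of-lists image."""
    # Rows: pairwise averages (low-pass) then differences (high-pass).
    rows = [[(r[i] + r[i + 1]) / 2 for i in range(0, len(r), 2)] +
            [(r[i] - r[i + 1]) / 2 for i in range(0, len(r), 2)]
            for r in channel]
    # Columns: the same operation on the row-transformed image.
    h, w = len(rows), len(rows[0])
    out = [[0.0] * w for _ in range(h)]
    for j in range(w):
        col = [rows[i][j] for i in range(h)]
        lo = [(col[i] + col[i + 1]) / 2 for i in range(0, h, 2)]
        hi = [(col[i] - col[i + 1]) / 2 for i in range(0, h, 2)]
        for i, v in enumerate(lo + hi):
            out[i][j] = v
    return out

def subband_energy(coeffs):
    """Mean squared coefficient per quadrant: LL, LH, HL, HH."""
    n = len(coeffs) // 2
    quads = {}
    for name, (r0, c0) in {"LL": (0, 0), "LH": (0, n),
                           "HL": (n, 0), "HH": (n, n)}.items():
        vals = [coeffs[r][c] for r in range(r0, r0 + n)
                for c in range(c0, c0 + n)]
        quads[name] = sum(v * v for v in vals) / len(vals)
    return quads

# Hypothetical 4x4 channel with vertical stripes: the detail energy
# lands in the row high-pass (here labelled LH) quadrant.
stripes = [[0, 8, 0, 8]] * 4
print(subband_energy(haar2d(stripes)))
```

Computing these energies per channel in each candidate color space (RGB, XYZ, Lab, HSV, YCbCr) yields directly comparable color-texture feature vectors for the land-sea classifier.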

Keywords: classification, coastline, color, sea-land segmentation

Procedia PDF Downloads 248
1148 Removal of Heavy Metals from Municipal Wastewater Using Constructed Rhizofiltration System

Authors: Christine A. Odinga, G. Sanjay, M. Mathew, S. Gupta, F. M. Swalaha, F. A. O. Otieno, F. Bux

Abstract:

Wastewater discharged from municipal treatment plants contains an amalgamation of trace metals. The presence of metal pollutants in wastewater poses a huge challenge to the choice and application of the preferred treatment method, and conventional treatment methods are inefficient in the removal of trace metals due to their design approach. This study evaluated the treatment performance of a constructed rhizofiltration system in the removal of heavy metals from municipal wastewater. The study was conducted at an eThekwini municipal wastewater treatment plant in Kingsburgh, Durban, in the province of KwaZulu-Natal. The pilot-scale rhizofiltration unit was constructed with three different substrate layers consisting of medium stones, coarse gravel, and fine sand. One section of the system was planted with Phragmites australis L. and Kyllinga nemoralis L., while the other section was unplanted and acted as the control. Influent, effluent, and sediment from the system were sampled and assessed for the presence and removal of selected trace heavy metals using standard methods. The efficiency of metal removal was established by gauging the transfer of metals into the leaves, roots, and stems of the plants, with calculations based on standard statistical packages. The Langmuir model was used to assess the heavy metal adsorption mechanisms of the plants. Heavy metals were accumulated throughout the rhizofiltration system at varying percentages: 96.69% on the planted side and 48.98% on the control side for cadmium; 81% and 24% for chromium; 23.4% and 1.1% for copper; 72% and 46.5% for nickel; 63% and 31% for lead; and 76% and 84% for zinc, in the water and sediment of the planted and control sides, respectively. The metal adsorption efficiencies decreased in the order Cd>Cr>Zn>Ni>Pb>Cu on the planted side and Ni>Cd>Pb>Cr>Cu>Zn on the control side.
Confirmatory analysis using scanning electron microscopy revealed that higher amounts of metals were deposited in the root system, with values ranging from 0.015 mg/kg (Cr), 0.250 mg/kg (Cu), and 0.030 mg/kg (Pb) for P. australis to 0.055 mg/kg (Cr), 0.470 mg/kg (Cu), and 0.210 mg/kg (Pb) for K. nemoralis. The system was found to be efficient in removing and reducing metals from wastewater, and further research is necessary to establish the immediate mechanisms that the plants display in order to achieve these reductions.
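The Langmuir analysis mentioned above fits the isotherm q_e = q_max·K·C_e/(1 + K·C_e) to measured sorption data. The sketch below uses fabricated illustrative data, not the study's measurements, and recovers q_max and K from the standard linearized form C_e/q_e = C_e/q_max + 1/(K·q_max).

```python
# Langmuir-isotherm fit sketch (illustrative data, not the study's
# measurements). The linearized form C_e/q_e = C_e/q_max + 1/(K*q_max)
# turns the fit into a straight-line least-squares problem.
C_e = [0.5, 1.0, 2.0, 4.0, 8.0]       # equilibrium concentration (mg/L)
q_e = [1.67, 2.50, 3.33, 4.00, 4.44]  # sorbed amount (mg/g)

x = C_e
y = [c / q for c, q in zip(C_e, q_e)]  # C_e/q_e
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
         sum((a - mx) ** 2 for a in x))
intercept = my - slope * mx

q_max = 1.0 / slope        # monolayer sorption capacity (mg/g)
K = slope / intercept      # Langmuir affinity constant (L/mg)
print(round(q_max, 2), round(K, 2))
```

With the illustrative data above, the fit recovers a capacity near 5 mg/g and an affinity constant near 1 L/mg; fitting the study's planted and control data this way is what lets the adsorption mechanisms of the two sides be compared.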

Keywords: wastewater treatment, Phragmites australis L., Kyllinga nemoralis L., heavy metals, pathogens, rhizofiltration

Procedia PDF Downloads 264
1147 The Re-Emergence of Russia's Foreign Policy (Case Study: Middle East)

Authors: Maryam Azish

Abstract:

Russia, as an emerging global player in recent years, has projected a special place in the Middle East. Despite all the challenges it has faced over the years, it has always considered its presence in various fields with a strategy that has defined its maneuvering power as a level of competition and even confrontation with the United States. Therefore, its current approach is considered important as an influential actor in the Middle East. After the collapse of the Soviet Union, when the Russians withdrew completely from the Middle East, the American scene remained almost unrivaled by the Americans. With the start of the US-led war in Iraq and Afghanistan and the subsequent developments that led to the US military and political defeat, a new chapter in regional security was created in which ISIL and Taliban terrorism went along with the Arab Spring to destabilize the Middle East. Because of this, the Americans took every opportunity to strengthen their military presence. Iraq, Syria and Afghanistan have always been the three areas where terrorism was shaped, and the countries of the region have each reacted to this evil phenomenon accordingly. The West dealt with this phenomenon on a case-by-case basis in the general circumstances that created the fluid situation in the Arab countries and the region. Russian President Vladimir Putin accused the US of falling asleep in the face of ISIS and terrorism in Syria. In fact, this was an opportunity for the Russians to revive their presence in Syria. This article suggests that utilizing the recognition policy along with the constructivism theory will offer a better knowledge of Russia’s endeavors to endorse its international position. Accordingly, Russia’s distinctiveness and its ambitions for a situation of great power have played a vital role in shaping national interests and, subsequently, in foreign policy, in Putin's era in particular. 
The focal claim of the paper is that Russia's foreign policy cannot be adequately scrutinized with realist methods. Consequently, with the aim of filling the prevailing vacuum, this study employs the politics of recognition within the framework of constructivism to examine Russia's foreign policy in the Middle East. The results of this paper show that the key aim of Russian foreign policy discourse, alongside increasing power and wealth, is to gain recognition for, and reinstate, its position as a great power in the global system. The Syrian crisis has created an opportunity for Russia, after a long absence, to consolidate its position in the evolving global and regional order, to re-establish an active and pervasive presence in the Middle East, and to counter US unilateralism. In the meantime, the author argues that the question of the West's recognition of Russia's position in the global system has played a foremost role in serving its national interests.

Keywords: constructivism, foreign policy, Middle East, Russia, regionalism

Procedia PDF Downloads 149
1146 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times

Authors: John Dimopoulos

Abstract:

This proposal attempts a refabrication of Heidegger's classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented by the critical speculations of the post-war era, took shape. The new era we are now living in, referred to as hypermodernity by researchers in fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity's supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems that are progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber-depression, and digital stress are some of the most prevalent. Humans often have no option other than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and design in particular, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields, such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts. 
These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production, as is apparent in the cases of the Amazon workers' strikes, Donald Trump's 2016 campaign, the Facebook and Microsoft data scandals, and more, are often non-transparent to the wider public's eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.

Keywords: design, hypermodernity, object-oriented ontology, weapon-being

Procedia PDF Downloads 152
1145 Synthesis and Prediction of Activity Spectra of Substances-Assisted Evaluation of Heterocyclic Compounds Containing Hydroquinoline Scaffolds

Authors: Gizachew Mulugeta Manahelohe, Khidmet Safarovich Shikhaliev

Abstract:

There has been a significant surge in interest in the synthesis of heterocyclic compounds that contain hydroquinoline fragments. This surge can be attributed to the broad range of pharmaceutical and industrial applications that these compounds possess. The present study provides a comprehensive account of the synthesis of both linear and fused heterocyclic systems that incorporate hydroquinoline fragments. Furthermore, the pharmacological activity spectra of the synthesized compounds were assessed in silico, employing the prediction of activity spectra of substances (PASS) program. Hydroquinoline nitriles 7 and 8 were prepared from the corresponding hydroquinolinecarbaldehydes using a hydroxylammonium chloride/pyridine/toluene system and iodine in aqueous ammonia under ambient conditions, respectively. 2-Phenyl-1,3-oxazol-5(4H)-ones 9a,b and 10a,b were synthesized via the condensation of compounds 5a,b and 6a,b with hippuric acid in acetic acid in 30–60% yield. When activated, 7-methylazolopyrimidines 11a,b were reacted with N-alkyl-2,2,4-trimethyl-1,2,3,4-tetrahydroquinoline-6-carbaldehydes 6a,b, and triazolo/pyrazolo[1,5-a]pyrimidin-6-yl carboxylic acids 12a,b were obtained in 60–70% yield. The condensation of 7-hydroxy-1,2,3,4-tetramethyl-1,2-dihydroquinoline 3h with dimethyl acetylenedicarboxylate (DMAD) and ethyl acetoacetate afforded cyclic products 16 and 17, respectively. The condensation reaction of 6-formyl-7-hydroxy-1,2,2,4-tetramethyl-1,2-dihydroquinoline 5e with methylene-active compounds such as ethyl cyanoacetate/dimethyl-3-oxopentanedioate/ethyl acetoacetate/diethyl malonate/Meldrum's acid afforded 3-substituted coumarins containing dihydroquinolines 19 and 21. Pentacyclic coumarin 22 was obtained via the condensation of malononitrile with 5e in the presence of a catalytic amount of piperidine in ethanol. The biological activities of the synthesized compounds were assessed using the PASS program. 
Based on the prognosis, compounds 13a, b, and 14 exhibited a high likelihood of being active as inhibitors of gluconate 2-dehydrogenase, as well as possessing antiallergic, antiasthmatic, and antiarthritic properties, with a probability value (Pa) ranging from 0.849 to 0.870. Furthermore, it was discovered that hydroquinoline carbonitriles 7 and 8 tended to act as effective progesterone antagonists and displayed antiallergic, antiasthmatic, and antiarthritic effects (Pa = 0.276–0.827). Among the hydroquinolines containing coumarin moieties, compounds 17, 19a, and 19c were predicted to be potent progesterone antagonists, with Pa values of 0.710, 0.630, and 0.615, respectively.
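For each predicted activity, PASS reports a probability of being active (Pa) and a probability of being inactive (Pi); a compound is usually flagged as a likely hit when Pa exceeds a chosen threshold and Pa > Pi. A minimal post-processing sketch is shown below; the records are illustrative, loosely echoing the values quoted above, and are not the program's actual output:

```python
# Illustrative filtering of PASS-style predictions: keep activities whose
# probability-to-be-active Pa exceeds a threshold and Pa > Pi.
# These records are hypothetical examples, not the study's raw data.

predictions = [
    {"compound": "13a", "activity": "gluconate 2-dehydrogenase inhibitor",
     "Pa": 0.870, "Pi": 0.004},
    {"compound": "7",   "activity": "progesterone antagonist",
     "Pa": 0.827, "Pi": 0.010},
    {"compound": "17",  "activity": "progesterone antagonist",
     "Pa": 0.710, "Pi": 0.020},
    {"compound": "19c", "activity": "antiallergic",
     "Pa": 0.276, "Pi": 0.150},
]

def likely_active(records, pa_min=0.7):
    """Keep records with Pa >= pa_min and Pa > Pi."""
    return [r for r in records if r["Pa"] >= pa_min and r["Pa"] > r["Pi"]]

hits = likely_active(predictions)
print([r["compound"] for r in hits])  # ['13a', '7', '17']
```

Lower Pa thresholds (e.g. 0.3) admit more speculative hits such as 19c at the cost of more false positives.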

Keywords: heterocyclic compound, hydroquinoline, Vilsmeier–Haack formylation, quinolone

Procedia PDF Downloads 43
1144 Repeatable Surface Enhanced Raman Spectroscopy Substrates from SERSitive for Wide Range of Chemical and Biological Substances

Authors: Monika Ksiezopolska-Gocalska, Pawel Albrycht, Robert Holyst

Abstract:

Surface Enhanced Raman Spectroscopy (SERS) is a technique used to analyze very low concentrations of substances in solutions, even aqueous ones, which is its advantage over IR. The technique can be used in pharmacy (to check the purity of products), forensics (to determine whether illegal substances were present at a crime scene), medicine (as a medical test), and more. Due to the high potential of this technique and its increasing popularity in analytical laboratories, and, simultaneously, the absence of appropriate platforms enhancing the SERS signal, which are crucial for observing the Raman effect at low analyte concentrations in solution (1 ppm), we decided to develop our own SERS platforms. As the enhancing layer, we chose gold and silver nanoparticles, because these two have the best SERS properties and each has an affinity for a different kind of analyte, which increases the range of research capabilities. The next step was commercialization, which resulted in the creation of the company 'SERSitive.eu', focusing on the production of highly sensitive (EF = 10⁵ – 10⁶), homogeneous and reproducible (70 - 80%) substrates. SERSitive SERS substrates are made using the electrodeposition of silver or silver-gold nanoparticles. Thanks to a detailed analysis of data from studies optimizing parameters such as deposition time, reaction solution temperature, applied potential, reducer used, and reagent concentrations, using the standard compound p-mercaptobenzoic acid (PMBA) at a concentration of 10⁻⁶ M, we have developed a high-performance process for depositing precious-metal nanoparticles on the surface of ITO glass. To check the quality of the SERSitive platforms, we examined a wide range of chemical compounds and biological substances. Apart from analytes that have a great affinity for metal surfaces (e.g., PMBA), we also obtained very good results for analytes less suited to SERS measurements. 
We successfully obtained intense and, more importantly, highly reproducible spectra for amino acids (phenylalanine, 10⁻³ M), drugs (amphetamine, 10⁻⁴ M), designer drugs (cathinone derivatives, 10⁻³ M), and medicines, as well as for bacteria (Listeria, Salmonella, Escherichia coli) and fungi.
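The substrate sensitivity quoted above (EF = 10⁵ – 10⁶) refers to the enhancement factor, commonly estimated as EF = (I_SERS/N_SERS)/(I_Raman/N_Raman). A minimal sketch follows; the counts and molecule numbers are hypothetical placeholders, not measurements from this work:

```python
# Illustrative SERS enhancement-factor estimate. All numeric inputs
# below are hypothetical, not data from the SERSitive study.

def enhancement_factor(i_sers, n_sers, i_raman, n_raman):
    """EF = (I_SERS / N_SERS) / (I_Raman / N_Raman), where I is the
    measured band intensity and N the number of probed molecules."""
    return (i_sers / n_sers) / (i_raman / n_raman)

# Example: 1e4 counts from ~1e6 molecules on the substrate vs.
# 1e3 counts from ~1e11 molecules in a normal-Raman reference.
ef = enhancement_factor(1e4, 1e6, 1e3, 1e11)
print(f"EF = {ef:.1e}")  # 1.0e+06, within the reported 10^5-10^6 range
```

In practice N_SERS and N_Raman are themselves estimated from surface coverage and the scattering volume, which dominates the uncertainty of EF.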

Keywords: nanoparticles, Raman spectroscopy, SERS, SERS applications, SERS substrates, SERSitive

Procedia PDF Downloads 151
1143 Densities and Volumetric Properties of {Difurylmethane + [(C5 – C8) n-Alkane or an Amide]} Binary Systems at 293.15, 298.15 and 303.15 K: Modelling Excess Molar Volumes by Prigogine-Flory-Patterson Theory

Authors: Belcher Fulele, W. A. A. Ddamba

Abstract:

Study of solvent systems contributes to the understanding of intermolecular interactions that occur in binary mixtures. These interactions involve, among others, strong dipole-dipole interactions and weak van der Waals interactions, which are of significant application in pharmaceuticals, solvent extraction, reactor design, and solvent handling and storage processes. Binary mixtures of solvents can thus be used as a model to interpret the thermodynamic behavior that occurs in a real solution mixture. Densities of pure DFM, n-alkanes (n-pentane, n-hexane, n-heptane and n-octane) and amides (N-methylformamide, N-ethylformamide, N,N-dimethylformamide and N,N-dimethylacetamide), as well as their [DFM + ((C5-C8) n-alkane or amide)] binary mixtures over the entire composition range, have been reported at temperatures of 293.15, 298.15 and 303.15 K and atmospheric pressure. These data have been used to derive the thermodynamic properties: the excess molar volume of solution, apparent molar volumes, excess partial molar volumes, limiting excess partial molar volumes, and limiting partial molar volumes of each component of a binary mixture. The results are discussed in terms of possible intermolecular interactions and structural effects that occur in the binary mixtures. The variation of excess molar volume with DFM composition for the [DFM + (C5-C7) n-alkane] binary mixtures exhibits sigmoidal behavior, while for the [DFM + n-octane] binary system, a positive deviation of the excess molar volume function was observed over the entire composition range. For each of the [DFM + (C5-C8) n-alkane] binary mixtures, the excess molar volume decreased with increasing temperature. The excess molar volume for each of the [DFM + (NMF or NEF or DMF or DMA)] binary systems was negative over the entire DFM composition range at each of the three temperatures investigated. The negative deviations in excess molar volume values follow the order: DMA > DMF > NEF > NMF. 
An increase in temperature has a greater effect on component self-association than on complex formation between molecules of the components in the [DFM + (NMF or NEF or DMF or DMA)] binary mixtures, which shifts the equilibrium towards complex formation and gives a drop in excess molar volume with increasing temperature. The Prigogine-Flory-Patterson model has been applied at 298.15 K and reveals that the free-volume term is the most important contribution to the experimental excess molar volume data for the [DFM + (n-pentane or n-octane)] binary systems. For the [DFM + (NMF or DMF or DMA)] binary mixtures, the interactional and characteristic-pressure terms are the most important contributions in describing the sign of the experimental excess molar volume. These mixture systems contribute to the understanding of the interactions of polar solvents (amides, as models of proteins) with non-polar solvents (alkanes) in biological systems.
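The excess molar volumes discussed above are derived from the measured densities of the mixture and of the pure components. A minimal sketch of that calculation follows; the molar masses and densities in the example are illustrative placeholders, not this study's measured values:

```python
def excess_molar_volume(x1, m1, rho1, m2, rho2, rho_mix):
    """V^E = x1*M1*(1/rho - 1/rho1) + x2*M2*(1/rho - 1/rho2)

    x1                  : mole fraction of component 1 (x2 = 1 - x1)
    m1, m2              : molar masses in g/mol
    rho1, rho2, rho_mix : pure-component and mixture densities in g/cm^3
    Returns V^E in cm^3/mol; negative values indicate contraction on mixing.
    """
    x2 = 1.0 - x1
    return (x1 * m1 * (1.0 / rho_mix - 1.0 / rho1)
            + x2 * m2 * (1.0 / rho_mix - 1.0 / rho2))

# Hypothetical equimolar DFM (M = 148.16 g/mol) + n-hexane (M = 86.18 g/mol)
# mixture; the three densities below are placeholders, not measured data.
ve = excess_molar_volume(0.5, 148.16, 1.13, 86.18, 0.655, 0.905)
```

A composition sweep of this function over x1 = 0..1 reproduces the kind of V^E-vs-composition curves (sigmoidal or single-signed) described in the abstract.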

Keywords: alkanes, amides, excess thermodynamic parameters, Prigogine-Flory-Patterson model

Procedia PDF Downloads 355
1142 An Adaptive Conversational AI Approach for Self-Learning

Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo

Abstract:

In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to a deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulty noticing and expressing a novel business case outside a pre-defined scope. In order to meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming. It is very difficult to improve the capabilities of conversational AI in a short time, and it is even more difficult to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. For every cycle in which a chatbot (conversational AI) delivers a given set of business cases, it is tasked to self-measure its performance and rethink every unknown dialog flow to improve the service by retraining with those new business cases. If the training process reaches a bottleneck and incurs difficulties, human personnel are informed for further instructions; they may retrain the chatbot with newly configured programs, or new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service. 
With the newly learned programs, it completes its understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chatbot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chatbot serves as a concierge offering polite conversation to visitors. As a proof of concept, we demonstrated completion of 90% of reception services with limited self-learning capability.
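The cycle described above (deliver, self-measure, collect unknown dialog flows, retrain or escalate) can be sketched as follows; the function name, intent representation, and sample dialogs are illustrative assumptions, not the authors' actual implementation:

```python
# Minimal sketch of one self-learning cycle for a chatbot; everything
# here is a simplified illustration, not the paper's system.

def self_learning_cycle(dialogs, known_intents):
    """Serve a batch of dialogs, collect out-of-scope business cases,
    and 'retrain' by extending the known-intent set with them.
    Returns the updated intents and the dialogs that needed learning."""
    unknown = [(text, intent) for text, intent in dialogs
               if intent not in known_intents]
    if unknown:
        # In the real system, semantic analysis would derive the new
        # ontology and dialog flows here; we simply register the intents.
        known_intents = known_intents | {intent for _, intent in unknown}
    return known_intents, unknown

known = {"greet", "give_directions"}
dialogs = [("hello there", "greet"), ("book a meeting room", "book_room")]
known, escalated = self_learning_cycle(dialogs, known)
# "book_room" is now in scope; `escalated` holds the cases a human
# operator could review if retraining stalls.
```

In the paper's terms, the `escalated` list corresponds to the bottleneck cases handed to human personnel for newly configured programs or dialog flows.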

Keywords: conversational AI, chatbot, dialog management, semantic analysis

Procedia PDF Downloads 136
1141 From Theory to Practice: An Iterative Design Process in Implementing English Medium Instruction in Higher Education

Authors: Linda Weinberg, Miriam Symon

Abstract:

While few institutions of higher education in Israel offer international programs taught entirely in English, many Israeli students today can study at least one content course taught in English during their degree program. In particular, with the growth of international partnerships and opportunities for student mobility, English medium instruction is a growing phenomenon. There are, however, no official guidelines in Israel for how to develop and implement content courses in English, and no training to help lecturers prepare to teach their materials in a foreign language. Furthermore, the implications for the students and the nature of the courses themselves have not been sufficiently considered. In addition, the institution must have lecturers who are able to teach these courses effectively in English. An international project funded by the European Union addresses these issues, and a set of guidelines has been developed to support lecturers in adapting their courses for delivery in English. A train-the-trainer approach is adopted in order to cascade knowledge and experience in English medium instruction from experts to language teachers and on to content teachers, thus maximizing the scope of professional development. To accompany the training, a model English medium course has been created, which serves the dual purpose of highlighting alternatives to the frontal lecture while integrating language learning objectives with content goals. This course can also be used as a standalone content course. The development of the guidelines and of the course utilized backwards, forwards and central design in an iterative process. The goals for combined language and content outcomes were identified first, after which a suitable framework for achieving these goals was constructed. The assessment procedures evolved through collaboration between content and language specialists and were subsequently put into action during a piloting phase. 
Feedback from the piloting teachers and from the students highlights the need for clear channels of communication to encourage frank and honest discussion of expectations versus reality. While much of what goes on in the English medium classroom requires no better teaching skills than any classroom does, students' ability to achieve reasonable learning outcomes in a foreign language must be rationalized and accommodated within the course design. Concomitantly, preparatory language classes must adapt to prepare students for the specific language and cognitive skills and activities that courses conducted in English require. This paper presents findings from the implementation of a purpose-designed English medium instruction course, arrived at through an iterative backwards, forwards and central design process utilizing feedback from students and lecturers alike, leading to suggested guidelines for English medium instruction in higher education.

Keywords: English medium instruction, higher education, iterative design process, train-the-trainer

Procedia PDF Downloads 300
1140 Viability of EBT3 Film in Small Dimensions to Be Used for in-Vivo Dosimetry in Radiation Therapy

Authors: Abdul Qadir Jangda, Khadija Mariam, Usman Ahmed, Sharib Ahmed

Abstract:

The Gafchromic EBT3 film has the characteristics of high spatial resolution, weak energy dependence and near-tissue equivalence, which make it viable for in-vivo dosimetry in external beam and brachytherapy applications. The aim of this study is to assess the smallest film dimension that may be feasible for use in in-vivo dosimetry. To evaluate viability, film sizes from 3 x 3 mm to 20 x 20 mm were calibrated with 6 MV photon and 6 MeV electron beams. The Gafchromic EBT3 (Lot no. A05151201, Make: ISP) film was cut into five different sizes in order to establish the relationship between absorbed dose and film dimension. The film dimensions were 3 x 3, 5 x 5, 10 x 10, 15 x 15, and 20 x 20 mm. The films were irradiated on a Varian Clinac® 2100C linear accelerator over a dose range of 0 to 1000 cGy using a PTW solid water phantom. The irradiation was performed as per the clinical absolute dose rate calibration setup, i.e. 100 cm SAD, 5.0 cm depth and a field size of 10 x 10 cm² for photons, and 100 cm SSD, 1.4 cm depth and a 15 x 15 cm² applicator for electrons. The irradiated films were scanned in landscape orientation after a post-development time of at least 48 hours. Film scanning was accomplished using an Epson Expression 10000 XL flatbed scanner, and quantitative analysis was carried out with the ImageJ freeware software. Results show that the dose variation across film dimensions ranging from 3 x 3 mm to 20 x 20 mm is very minimal, with a maximum standard deviation of 0.0058 in optical density at a dose level of 3000 cGy; the standard deviation increases with increasing dose level, so precaution must be taken when using small-dimension films for higher doses. The analysis shows that there is insignificant variation in the absorbed dose with a change in dimension of the EBT3 film. 
The study concludes that film dimensions down to 3 x 3 mm can safely be used up to a dose level of 3000 cGy without the need for recalibration for the particular dimension in use in dosimetric applications. However, for higher dose levels, one may need to calibrate the films for the particular dimension in use for higher accuracy. It was also noticed that the crystalline structure of the film was damaged at the edges while cutting the film, which can contribute to an incorrect dose reading if the region of interest includes the damaged area of the film.
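The quantitative film analysis described above is typically based on the net optical density derived from scanner pixel values, which is then mapped to dose through a calibration fit. A minimal sketch of that step follows; the pixel values and the calibration coefficients are hypothetical placeholders, not data from this study:

```python
import math

def net_optical_density(pv_unexposed, pv_exposed):
    """netOD = log10(PV_unexposed / PV_exposed), computed from
    flatbed-scanner pixel values of the film before and after irradiation."""
    return math.log10(pv_unexposed / pv_exposed)

def dose_from_netod(netod, a=8000.0, b=25000.0, n=2.5):
    """A commonly used calibration form, D = a*netOD + b*netOD**n (cGy).
    The coefficients a, b, n here are hypothetical placeholders; real
    values must come from a fit to the film lot's calibration data."""
    return a * netod + b * netod ** n

# Hypothetical 16-bit red-channel pixel values:
nod = net_optical_density(40000, 22000)
dose = dose_from_netod(nod)
```

Recalibrating for a particular film dimension, as the abstract suggests for high doses, amounts to refitting a, b and n from films of that size.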

Keywords: external beam radiotherapy, film calibration, film dosimetry, in-vivo dosimetry

Procedia PDF Downloads 494