Search results for: backtracking search algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5188

208 The Influence of Fashion Bloggers on the Pre-Purchase Decision for Online Fashion Products among Generation Y Female Malaysian Consumers

Authors: Mohd Zaimmudin Mohd Zain, Patsy Perry, Lee Quinn

Abstract:

This study explores how fashion consumers are influenced by fashion bloggers in the pre-purchase decision for online fashion products in a non-Western context. Malaysians rank among the world’s most avid online shoppers, with apparel the third most popular purchase category. However, extant research on fashion blogging focuses on the developed Western market context. Numerous international fashion retailers, from luxury to fast fashion, have entered the Malaysian market; however, Malaysian fashion consumers must balance religious and social norms for modesty with their dress style and adoption of fashion trends. Consumers increasingly mix and match Islamic and Western elements of dress to create new styles, enabling them to follow Western fashion trends whilst paying respect to social and religious norms. Social media have revolutionised the way consumers search for and find information about fashion products. For online fashion brands with no physical presence, social media provide a means of discovery for consumers. By allowing the creation and exchange of user-generated content (UGC) online, they provide a public forum that gives individual consumers their own voices, as well as access to product information that facilitates their purchase decisions. Social media empower consumers, and brands play an important role in facilitating conversations among consumers and between consumers and themselves, helping consumers connect with the brand and with one another. Fashion blogs have become an important source of fashion information. By sharing their personal style and inspiring their followers with what they wear on popular social media platforms such as Instagram, fashion bloggers have become fashion opinion leaders. By creating UGC to spread useful information to their followers, they influence the pre-purchase decision.
Hence, successful Western fashion bloggers such as Chiara Ferragni may earn millions of US dollars every year, and some have created their own fashion ranges and beauty products, become judges in fashion reality shows, won awards, and collaborated with high street and luxury brands. As fashion blogging has become more established worldwide, increasing numbers of fashion bloggers have emerged from non-Western backgrounds to promote Islamic fashion styles, such as Hassanah El-Yacoubi and Dian Pelangi. This study adopts a qualitative approach using netnographic content analysis of consumer comments on two famous Malaysian fashion bloggers’ Instagram accounts during January-March 2016 and qualitative interviews with 16 Malaysian Generation Y fashion consumers during September-October 2016. Netnography adapts ethnographic techniques to the study of online communities or computer-mediated communications. Template analysis of the data involved coding comments according to the theoretical framework, which was developed from the literature review. Initial data analysis shows the strong influence of Malaysian fashion bloggers on their followers in terms of lifestyle and morals as well as fashion style. Followers were guided towards the mix and match trend of dress with Western and Islamic elements, for example, showing how vivid colours or accessories could be worked into an outfit whilst still respecting social and religious norms. The blogger’s Instagram account is a form of online community where followers can communicate and gain guidance and support from other followers, as well as from the blogger.

Keywords: fashion bloggers, Malaysia, qualitative, social media

Procedia PDF Downloads 218
207 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

In order to reduce numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. By adopting the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method to determine the linear response of the superstructure, the proposed MEIM, which is conditionally stable due to the use of the central difference method, avoids the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings. Indeed, in this case, the critical time step could become smaller than the one required to accurately define the earthquake excitation, due to the very high initial stiffness values of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed using the proposed MEIM, are compared to those obtained with a conventional monolithic solution approach, i.e. the implicit, unconditionally stable Newmark constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented numerical application the MEIM does not suffer stability problems, since the critical time step is larger than the sampling step of the ground acceleration despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when adopted to perform the nonlinear dynamic analysis with a smaller time step.
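The explicit central-difference update at the core of such a mixed scheme can be illustrated for a single nonlinear degree of freedom. This is a generic sketch, not the paper's implementation: the equation of motion, restoring-force model, and parameters are assumptions for illustration, and the conditional-stability limit (time step below T_min/π) is the standard textbook result.

```python
import numpy as np

# Central-difference time stepping for  m*u'' + c*u' + f(u) = -m*ag(t).
# Conditionally stable: dt must stay below T_min / pi for the stiffest mode.
def central_difference(m, c, restoring, ag, dt, n_steps, u0=0.0, v0=0.0):
    u = np.zeros(n_steps + 1)
    a0 = (-m * ag[0] - c * v0 - restoring(u0)) / m
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0   # fictitious step u_{-1}
    u[0] = u0
    for i in range(n_steps):
        # Explicit update: only the state at steps i and i-1 is needed,
        # so no iteration within the step even for nonlinear f(u).
        rhs = (-m * ag[i] - restoring(u[i])
               + (2 * m / dt**2) * u[i]
               - (m / dt**2 - c / (2 * dt)) * u_prev)
        u_next = rhs / (m / dt**2 + c / (2 * dt))
        u_prev, u[i + 1] = u[i], u_next
    return u
```

Because the nonlinear restoring force is evaluated at the known state, each step costs one function evaluation, which is the efficiency the MEIM exploits for the isolation-system degrees of freedom.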

Keywords: base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability

Procedia PDF Downloads 278
206 Management Tools for Assessment of Adverse Reactions Caused by Contrast Media at the Hospital

Authors: Pranee Suecharoen, Ratchadaporn Soontornpas, Jaturat Kanpittaya

Abstract:

Background: Contrast media play an important role in disease diagnosis through the detection of pathologies. Contrast media can, however, cause adverse reactions after administration. Non-ionic contrast media are commonly used, and their incidence of adverse events is relatively low. The most common reactions found (10.5%) were mild and manageable and/or preventable. Pharmacists can play an important role in evaluating adverse reactions, including awareness of the specific preparation and the type of adverse reaction. As the most common types of adverse reactions are idiosyncratic or pseudo-allergic, common standards need to be established to prevent and control adverse reactions promptly and effectively. Objective: To measure the effect of using tools for symptom evaluation in order to reduce the severity, or prevent the occurrence, of adverse reactions to contrast media. Methods: A retrospective, descriptive study with data collected using an adverse reaction assessment form and Naranjo’s algorithm between June 2015 and May 2016. Results: Of the 1,500 participants with an adverse event evaluation, 158 (10.53%) had adverse reactions. Of these, 137 (9.13%) had a mild adverse reaction, including hives, nausea, vomiting, dizziness, and headache. These symptoms can be treated (i.e., with antihistamines or anti-emetics), and the patient recovers completely within one day. The group with moderate adverse reactions, numbering 18 cases (1.2%), had hypertension or hypotension and shortness of breath. Severe adverse reactions numbered 3 cases (0.2%) and included swelling of the larynx, cardiac arrest, and loss of consciousness, requiring immediate treatment. No other complications under close medical supervision were recorded (i.e., use of corticosteroids, epinephrine, dopamine, atropine, or life-saving devices).
Using the guideline, therapies are divided into general and specific and are performed according to the severity, risk factors and the contrast media agent administered. Patients with high-risk factors were screened and treated (i.e., with prophylactic premedication) to prevent severe adverse reactions, especially those with renal failure. Thus, awareness of the need for prescreening of different risk factors is necessary for early recognition and prompt treatment. Conclusion: Studying adverse reactions can be used to develop a model for reducing the level of severity and setting a guideline for a standardized, multidisciplinary approach to adverse reactions.
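Naranjo’s algorithm, mentioned in the methods, maps the total score from its ten-item questionnaire to a causality category using standard published thresholds. A minimal sketch of that final scoring step (illustrative only, not the hospital's specific assessment tooling):

```python
# Naranjo adverse drug reaction probability categories (standard thresholds).
# `score` is the sum of the answers to the 10 Naranjo questionnaire items.
def naranjo_category(score: int) -> str:
    if score >= 9:
        return "definite"
    if score >= 5:
        return "probable"
    if score >= 1:
        return "possible"
    return "doubtful"   # score <= 0
```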

Keywords: role of pharmacist, management of adverse reactions, guideline for contrast media, non-ionic contrast media

Procedia PDF Downloads 303
205 Narratives of Self-Renewal: Looking for A Middle Earth In-Between Psychoanalysis and the Search for Consciousness

Authors: Marilena Fatigante

Abstract:

Contemporary psychoanalysis is increasingly acknowledging the existential demands of clients in psychotherapy. A significant aspect of the personal crises that patients face today is often rooted in the difficulty of finding meaning in their own existence, even after working through or resolving traumatic memories and experiences. Tracing back to the correspondence between Freud and Romain Rolland (1927), psychoanalysis could not ignore that investigation of the psyche also encompasses the encounter with deep, psycho-sensory experiences, which involve a sense of "being one with the external world as a whole": the well-known “oceanic feeling”, as Rolland put it. Despite the recognition of Non-ordinary States of Consciousness (NSC) as catalysts for transformation in clinical practice, highlighted by neuroscience and by results from psychedelic-assisted therapies, there is little research on how psychoanalytic knowledge can integrate with other treatment traditions. These traditions, commonly rooted in non-Western, unconventional, and non-formal psychological knowledge, emphasize the individual’s innate tendency toward existential integrity and transcendence of self-boundaries. Inspired by an autobiographical account, this paper examines the narratives of 12 individuals who engaged in psychoanalytic therapy and also underwent treatment involving a non-formal helping relationship with an expert guide in consciousness, which included experiences of this nature. The guide relies on 35 years of experience in psychological, multidisciplinary studies in Human Sciences and Art, and demonstrates knowledge of many wisdom traditions, ranging from Eastern to Western philosophy, including psychoanalysis and its development in cultural perspective (e.g., ethnopsychiatry).
Analyses focused primarily on two dimensions that research has identified as central in assessing the degree of treatment “success” in patients’ narrative accounts of their therapies: agency and coherence, defined respectively as the increase, expressed in language, of the client’s perceived ability to manage his/her own challenges, and the capacity, inherent in “narrative” itself as a resource for meaning making (Bruner, 1990), to provide the subject with a sense of unity, endowing his/her life experience with temporal and logical sequentiality. The present study finds that, in all narratives from the participants, agency and coherence are described differently than in “common” psychotherapy narratives. Although the participants consistently identified themselves as responsible, agentic subjects, the sense of agency derived from the non-conventional guidance pathway is never reduced to a personal, individual accomplishment. Rather, the more a new, fuller sense of “Life” (more than “Self”) develops out of the guidance pathway they engage in with the expert guide, the more they “surrender” their own sense of autonomy and self-containment, something Safran (2016) also identified when discussing the sense of surrender and “grace” in psychoanalytic sessions. Secondly, the narratives of individuals engaging with the expert guide describe coherence not as repairing or enforcing continuity but as enhancing their ability to navigate dramatic discontinuities, falls, abrupt leaps, and passages marked by feelings of loss and bereavement. The paper ultimately explores whether valid criteria can be established to analyze experiences of non-conventional paths of self-evolution. These paths are not opposed or alternative to conventional ones, and should not be simplistically dismissed as exotic or magical.

Keywords: oceanic feeling, non-conventional guidance, consciousness, narratives, treatment outcomes

Procedia PDF Downloads 38
204 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger

Authors: Hany Elsaid Fawaz Abdallah

Abstract:

This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset for the Nusselt number and pressure drop values in the following range of dimensionless parameters: plate corrugation angles from 0° to 60°, Reynolds number from 10000 to 40000, pitch-to-height ratio from 1 to 4, and Prandtl number from 0.7 to 200. Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes feed-forward propagation of the input parameters, evaluation of the loss function for the training and validation datasets, and back-propagation with adjustment of the weights and biases. The linear function was used as the activation function at the output layer, while the rectified linear unit activation function was utilized for the hidden layers. To accelerate ANN training, the loss function was minimized with the adaptive moment estimation (Adam) algorithm. The “MinMax” normalization approach was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for ANN training, a cross-validation technique was applied to the network using the new data. This procedure was repeated until loss function convergence was achieved, or for 4000 epochs with a batch size of 200 points. The program code was written in Python 3 using open-source ANN libraries such as scikit-learn, TensorFlow and Keras. Mean average percent errors of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved with the ANN model, a higher accuracy than that of the generalized correlations.
The performance validation of the obtained model was based on a comparison of predicted data with the experimental results yielding excellent accuracy.
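The described architecture, a {12-8-6} ReLU network with a linear output layer and “MinMax” input scaling, can be sketched in plain NumPy. This is an untrained skeleton for illustration: the random weights, the assumed four inputs (corrugation angle, Reynolds number, pitch-to-height ratio, Prandtl number) and two outputs (Nusselt number, pressure drop) follow the abstract, but the exact input/output layout of the authors' model is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def minmax(x, lo, hi):
    """'MinMax' normalization of features to [0, 1]."""
    return (x - lo) / (hi - lo)

# Layer sizes: 4 inputs -> 12 -> 8 -> 6 hidden (ReLU) -> 2 outputs (linear).
sizes = [4, 12, 8, 6, 2]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)        # ReLU hidden layers
    return h @ weights[-1] + biases[-1]       # linear output layer

# Parameter bounds from the abstract: angle, Re, pitch/height, Pr.
lo = np.array([0.0, 1e4, 1.0, 0.7])
hi = np.array([60.0, 4e4, 4.0, 200.0])
x = minmax(np.array([30.0, 2e4, 2.0, 5.0]), lo, hi)
y = forward(x)   # [Nu, dP] prediction (untrained, so values are meaningless)
```

In the study itself the weights would be fitted with Adam against the experimental dataset; only the forward structure is shown here.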

Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations

Procedia PDF Downloads 87
203 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264

Authors: V. Ziegler, F. Schneider, M. Pesch

Abstract:

With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as for permanent areal observations at near-reference quality. The model EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size, and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored directly in the device. The rinsing-air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed 100% in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration.
This highly reliable instrument is an indispensable tool for many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.

Keywords: aerosol research, aerial observation, fence line monitoring, wild fire detection

Procedia PDF Downloads 151
202 Sustainable Housing and Urban Development: A Study on the Soon-To-Be-Old Population's Impetus to Migrate

Authors: Tristance Kee

Abstract:

With the unprecedented increase in the elderly population globally, it is critical to search for new sustainable housing and urban development alternatives to traditional housing options. This research examines elderly migration patterns from a high-density city, Hong Kong, to Mainland China. The research objectives are to: 1) explore the relationships between the soon-to-be-old elderly’s intentions to move to the Mainland upon retirement and their demographic characteristics; and 2) identify the amenities, locational factors and activities desired in the soon-to-be-old generation’s retirement housing environment. Primary data was collected through a questionnaire survey conducted using a random sampling method with respondents aged between 45 and 64 years old. The face-to-face survey was completed by 500 respondents. The survey was divided into four sections. The first section focused on respondents’ demographic information such as gender, age, educational attainment, monthly income, housing tenure type and their visits to Mainland China. The second section focused on their retirement plans in terms of intended retirement age, prospective retirement funding and retirement housing options. The third section focused on respondents’ attitudes toward retiring to the Mainland for housing, asking about their intentions to retire in the Mainland and incentives to retire in Hong Kong. The fourth section focused on respondents’ ideal housing environment, including preferred housing amenities, desired living environment and retirement activities. The dependent variable in this study was ‘respondent’s consideration to move to Mainland China upon retirement’. Eight primary independent variables were integrated into the study to identify the correlations between them and the retirement migration plan.
The independent variables include: gender, age, marital status, monthly income, present housing tenure type, property ownership in Hong Kong, relationship with the Mainland, and the frequency of visiting Mainland China. In addition to the above independent variables, respondents were asked to indicate their retirement plans (retirement age, funding sources and retirement housing options), incentives to migrate for retirement (choices included: property ownership, family relations, cost of living, living environment, medical facilities, government welfare benefits, etc.), and perceived qualities of an ideal retirement life, including desired amenities (sports, medical and leisure facilities, etc.), desired locational qualities (green open space, convenient transport options and accessibility to urban settings, etc.) and desired retirement activities (home-based leisure, elderly-friendly sports, cultural activities, child care, social activities, etc.). The findings show correlations between the independent variables used and the consideration to migrate for housing options. The two independent variables that indicated a possible correlation were gender and the present frequency of visiting the Mainland. Considering the increasing property prices across the border and strong social relationships, potential retirement migration is a very subjective decision that can vary from person to person. This research adds knowledge to housing research and migration study. Although the research focuses on migration to the Mainland, most of the characteristics identified, including better medical services, government welfare and sound urban amenities, are shared qualities for all sustainable urban development and housing strategies.

Keywords: elderly migration, housing alternative, soon-to-be-old, sustainable environment

Procedia PDF Downloads 211
201 Analysis of Ozone Episodes in the Forest and Vegetation Areas Using the HYSPLIT Model: A Case Study of the North-West Side of Biga Peninsula, Turkey

Authors: Deniz Sari, Selahattin İncecik, Nesimi Ozkurt

Abstract:

Surface ozone, named as one of the most critical pollutants of the 21st century, threatens human health, forests and vegetation. In rural areas specifically, surface ozone has a significant influence on agricultural production and trees. In this study, in order to understand surface ozone levels in rural areas, we focus on the north-western side of the Biga Peninsula, which is covered by mountainous and forested terrain. Ozone concentrations were measured for the first time with passive sampling at 10 sites and with two online monitoring stations in this rural area between 2013 and 2015. Using the hourly O3 measurements during daylight hours (08:00–20:00) exceeding the threshold of 40 ppb, the AOT40 (Accumulated hourly O3 concentration Over a Threshold of 40 ppb) cumulative index was calculated over 3 months (May, June and July) for agricultural crops and over six months (April to September) for forest trees. AOT40 is defined by EU Directive 2008/50/EC to evaluate whether ozone pollution is a risk for vegetation, and is calculated using hourly ozone concentrations from monitoring systems. In the present study, we performed trajectory analysis with the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model to follow the long-range transport sources contributing to the high ozone levels in the region. The ozone episodes observed between 2013 and 2015 were analysed using the HYSPLIT model developed by NOAA-ARL. In addition, cluster analysis was used to identify homogeneous groups of air mass transport patterns by grouping similar trajectories in terms of air mass movement. Backward trajectories produced for 3 years by the HYSPLIT model were assigned to different clusters according to their moving speed and direction using a k-means clustering algorithm. According to the cluster analysis results, northerly flows to the study area cause high ozone levels in the region.
The results show that the ozone values in the study area are above the critical levels for forests and vegetation based on EU Directive 2008/50/EC.
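The AOT40 index described above accumulates only the portion of each daytime hourly concentration that exceeds 40 ppb, so an hour at 45 ppb contributes 5 ppb·h and an hour at or below 40 ppb contributes nothing. A minimal sketch:

```python
def aot40(hourly_ppb):
    """AOT40: accumulated exceedance over 40 ppb, in ppb*h.

    hourly_ppb: iterable of daytime (08:00-20:00) hourly O3 concentrations
    in ppb, restricted to the relevant window (May-July for crops,
    April-September for forest trees).
    """
    return sum(c - 40.0 for c in hourly_ppb if c > 40.0)
```

For example, hours of 30, 45 and 60 ppb give an AOT40 contribution of 0 + 5 + 20 = 25 ppb·h.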

Keywords: AOT40, Biga Peninsula, HYSPLIT, surface ozone

Procedia PDF Downloads 255
200 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River

Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko

Abstract:

Contamination sites like landfills often pose significant risks to receptors like surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing the need to protect them from water quality degradation. In this paper, a case study presents the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River, situated in New Jersey. The study, performed over a two-year period, included an in-depth field evaluation of both the groundwater and surface water systems, supplemented by computer modeling. The analysis required delineation of a representative average daily groundwater discharge from the Landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of the daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced on groundwater levels during the aquifer pumping test were filtered out using an advanced algorithm, and aquifer parameter values were then estimated using conventional curve-matching techniques. The estimated hydraulic conductivity values obtained from individual observation wells closely agree with tidally derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts. MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the Landfill to the Raritan River shoreline. A surface water dispersion model based upon a bathymetric and flow study of the river was used to simulate contaminant concentrations over space within the river.
The modeling results helped demonstrate that because of natural attenuation, the Landfill does not have a measurable impact on the river, which was confirmed by an extensive surface water quality study.
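The abstract does not name the tidal-filtering algorithm it used. As a generic illustration of the idea only, not the study's method, a single known tidal constituent can be removed from a water-level record by least-squares harmonic fitting before curve matching:

```python
import numpy as np

M2_PERIOD_H = 12.4206012  # principal lunar semidiurnal (M2) period, hours

def detide(t_hours, levels, period=M2_PERIOD_H):
    """Remove one tidal constituent from a water-level record.

    Fits mean + sine + cosine at the tidal frequency by least squares,
    then returns the record with the oscillation removed (mean retained).
    """
    w = 2 * np.pi / period
    # Design matrix: constant term plus in-phase and quadrature components.
    A = np.column_stack([np.ones_like(t_hours),
                         np.sin(w * t_hours),
                         np.cos(w * t_hours)])
    coef, *_ = np.linalg.lstsq(A, levels, rcond=None)
    return levels - A @ coef + coef[0]
```

A real tidal-efficiency analysis would handle multiple constituents and the pumping-induced drawdown trend simultaneously; this sketch only shows the harmonic-removal step.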

Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling

Procedia PDF Downloads 260
199 Investigation of Subsurface Structures within Bosso Local Government for Groundwater Exploration Using Magnetic and Resistivity Data

Authors: Adetona Abbassa, Aliyu Shakirat B.

Abstract:

The study area is part of Bosso Local Government, enclosed within Longitude 6.25’ to 6.31’ and Latitude 9.35’ to 9.45’, an area of 16 × 8 km², within the basement region of central Nigeria. The region hosts Nigerian Air Force Base 12 (NAF 12, quick response) and its staff quarters, the headquarters of Bosso Local Government, two offices of the Independent National Electoral Commission, four government secondary schools, six primary schools and Minna International Airport. The area suffers an acute shortage of water from November, when the rains stop, to June, when the rains commence, within North Central Nigeria. One way of addressing this problem is a reconnaissance method to delineate possible fractures and fault lines that exist within the region by sampling the aeromagnetic data and using an appropriate analytical algorithm to delineate these fractures, followed by an appropriate ground-truthing method to confirm whether a fracture is connected to underground water movement. The first vertical derivative for structural analysis reveals a set of lineaments, labelled AA’, BB’, CC’, DD’, EE’ and FF’, all trending in the Northeast–Southwest direction. AA’ is just below latitude 9.45’, above Maikunkele village, cutting off the upper part of the field; it runs through Kangwo, Nini, Lawo and other communities. BB’, at Latitude 9.43’, is truncated at about 2 km before Maikunkele and Kuyi. CC’ is around 9.40’, sitting below Maikunkele, and runs down through Nanaum. DD’ runs from Latitude 9.38’; interestingly, no community lies within the region this fault passes through. Results from the three sites where Vertical Electrical Sounding was carried out reveal three layers comprising topsoil, an intermediate clay formation and a weathered/fractured or fresh basement.
A depth-to-basement map was also produced. The depth to the basement from the ground surface at VES A₂, B₅, D₂ and E₁ is relatively deeper, with depth values ranging between 25 and 35 m, while the shallower regions of the area have depths ranging between 10 and 20 m. Hence, VES A₂, A₅, B₄, B₅, C₂, C₄, D₄, D₅, E₁, E₃, and F₄ are high-conductivity zones that are prolific for groundwater potential. The depth range of the aquifer potential zones is between 22.7 m and 50.4 m. The result from site C is quite unique: although the 3 layers were detected at the majority of the VES points, the maximum depth to the basement in 90% of the VES points is below 8 km, and only three VES points, C₆, E₂ and F₂, show considerable viability, with depths of 35.2 m and 38 m respectively, but lack of connectivity will be a big challenge to chargeability.
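The first vertical derivative used for the structural analysis is conventionally computed in the wavenumber domain by multiplying the spectrum of the anomaly grid by the radial wavenumber |k|. A minimal sketch (the grid spacing and test data are placeholders, not the survey's values):

```python
import numpy as np

def first_vertical_derivative(grid, dx, dy):
    """First vertical derivative of a potential-field grid (e.g. magnetic
    anomaly) via the wavenumber domain: multiply the 2-D spectrum by |k|.
    """
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # wavenumbers along x
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)   # wavenumbers along y
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)                        # radial wavenumber |k|
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * k))
```

Because |k| amplifies short wavelengths, the operator sharpens the edges of shallow sources, which is why lineaments such as AA’ to FF’ stand out on a first-vertical-derivative map.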

Keywords: lithology, aeromagnetic, aquifer, geoelectric, iso-resistivity, basement, vertical electrical sounding(VES)

Procedia PDF Downloads 139
198 CSoS-STRE: A Combat System-of-System Space-Time Resilience Enhancement Framework

Authors: Jiuyao Jiang, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge

Abstract:

Modern warfare has transitioned from the paradigm of isolated combat forces to system-to-system confrontations due to advancements in combat technologies and application concepts. A combat system-of-systems (CSoS) is a combat network composed of independently operating entities that interact with one another to provide overall operational capabilities. Enhancing the resilience of CSoS is garnering increasing attention due to its significant practical value in optimizing network architectures, improving network security and refining operational planning. Accordingly, a unified framework called CSoS space-time resilience enhancement (CSoS-STRE) has been proposed, which enhances the resilience of CSoS by incorporating spatial features. Firstly, a multilayer spatial combat network model has been constructed, which incorporates an information layer depicting the interrelations among combat entities based on the OODA loop, along with a spatial layer that considers the spatial characteristics of equipment entities, thereby accurately reflecting the actual combat process. Secondly, building upon the combat network model, a spatiotemporal resilience optimization model is proposed, which reformulates the resilience optimization problem as a classical linear optimization model with spatial features. Furthermore, the model is extended from scenarios without obstacles to those with obstacles, thereby further emphasizing the importance of spatial characteristics. Thirdly, a resilience-oriented recovery optimization method based on an improved non-dominated sorting genetic algorithm II (R-INSGA) is proposed to determine the optimal recovery sequence for the damaged entities. This method not only considers spatial features but also provides the optimal travel path for multiple recovery teams. Finally, the feasibility, effectiveness, and superiority of the CSoS-STRE are demonstrated through a case study.
Simultaneously, under deliberate attack conditions based on degree centrality and maximum operational loop performance, the proposed CSoS-STRE method is compared with six baseline recovery strategies, which are based on performance, time, degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. The comparison demonstrates that CSoS-STRE achieves faster convergence and superior performance.
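The non-dominated sorting step at the heart of NSGA-II-style methods such as R-INSGA ranks candidate recovery sequences into Pareto fronts across competing objectives (e.g. recovered performance versus recovery time). A generic minimization sketch of that step, not the authors' R-INSGA itself:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Return Pareto fronts as lists of indices, best front first."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # j in dominated_by[i]: i dominates j
    dom_count = [0] * n                     # number of points dominating i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[i], points[j]):
                dominated_by[i].append(j)
                dom_count[j] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:       # all dominators already ranked
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]
```

The full algorithm would then apply crowding-distance selection, crossover and mutation within these fronts; only the ranking step is shown here.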

Keywords: space-time resilience enhancement, resilience optimization model, combat system-of-systems, recovery optimization method, no-obstacles and obstacles

Procedia PDF Downloads 15
197 Simple Model of Social Innovation Based on Entrepreneurship Incidence in Mexico

Authors: Vicente Espinola, Luis Torres, Christhian Gonzalez

Abstract:

Entrepreneurship is a topic of current interest in Mexico and worldwide, and it has been fostered through public policies with great impact on its generation. The strategies used in Mexico have not been successful: they are motivational strategies aimed at the masses, in the hope that someone in the process will generate a venture. The strategies used for its development have favored "picking winners", supporting those who have already overcome the initial stages of a venture, without effective support for the rest. This situation reveals a disarticulation that is even more apparent in social entrepreneurship; it is therefore relevant to research the elements that could develop it and thus integrate a model of entrepreneurship and social innovation for Mexico. Social entrepreneurship should generate social innovation, which is translated into business models so that the benefits reach the population. These models put social impact before economic impact, without neglecting their sustainability in the medium and long term. In this work, we present a simple model of innovation and social entrepreneurship for Guanajuato, Mexico. The model was based on how social innovation could be generated in a systemic way for Mexico through the different institutions that promote innovation. In this case, the technological parks of the state of Guanajuato were studied, because their main objective is to transfer technology to companies, while the social sector and entrepreneurs are overlooked. An experimental design with n = 60 was carried out with potential entrepreneurs to identify their perception of the social approach that enterprises should have, the skills they consider necessary to create a venture, as well as their interest in generating ventures that solve social problems.
The experiment used a 2^k factorial design with k = 3, and the computational simulation was performed in the R statistical language. A simple model of interconnected variables is proposed, which allows us to identify where efforts must be increased for the generation of social enterprises. Of the potential entrepreneurs, 96.67% expressed interest in ventures that solve social problems. In the analysis of the variable interactions, it was identified that the isolated development of entrepreneurial skills would only replicate the generation of traditional ventures. The social approach variable presented positive interactions, which may influence the generation of social entrepreneurship if this variable is strengthened and permeates the processes of training and development of entrepreneurs. In the future, it will be necessary to analyze the institutional actors present in the social entrepreneurship ecosystem, in order to analyze the interaction necessary to strengthen the innovation and social entrepreneurship ecosystem.
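For readers unfamiliar with 2^k factorial designs, the Python sketch below enumerates a design with k = 3 and computes main effects. The factor names and response values are invented placeholders (the study's own simulation was done in R and its data are not reproduced here).

```python
from itertools import product

# A 2^3 full factorial design: each factor takes levels -1 and +1.
factors = ["social_approach", "skills", "interest"]
runs = list(product([-1, 1], repeat=3))          # 8 treatment combinations

# Invented response surface: social_approach has a strong positive effect
# and interacts positively with interest (the 1.5*a*c term).
response = {r: 5 + 3*r[0] + 1*r[1] + 2*r[2] + 1.5*r[0]*r[2] for r in runs}

def main_effect(i):
    """Average response at level +1 minus average at level -1 of factor i."""
    hi = sum(response[r] for r in runs if r[i] == 1) / 4
    lo = sum(response[r] for r in runs if r[i] == -1) / 4
    return hi - lo

for i, name in enumerate(factors):
    print(name, main_effect(i))   # recovers the coefficients times two
```

Interaction effects are computed the same way, averaging over the sign of the product of two factor columns, which is how a positive social-approach interaction would show up.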

Keywords: social innovation, model, entrepreneurship, technological parks

Procedia PDF Downloads 273
196 A Study of Non-Coplanar Imaging Technique in INER Prototype Tomosynthesis System

Authors: Chia-Yu Lin, Yu-Hsiang Shen, Cing-Ciao Ke, Chia-Hao Chang, Fan-Pin Tseng, Yu-Ching Ni, Sheng-Pin Tseng

Abstract:

Tomosynthesis is an imaging system that generates a 3D image by scanning over a limited angular range. It provides more depth information than a traditional 2D X-ray single projection, and its radiation dose is lower than that of computed tomography (CT). Because of the limited angular range of scanning, many image properties depend on the scanning direction. Therefore, a non-coplanar imaging technique was developed to improve image quality over traditional tomosynthesis. The purpose of this study was to establish the non-coplanar imaging technique for the tomosynthesis system and to evaluate this technique by the reconstructed image. The INER prototype tomosynthesis system contains an X-ray tube, a flat panel detector, and a motion machine. This system can move the X-ray tube in multiple directions during acquisition. In this study, we investigated three different imaging techniques: 2D X-ray single projection, traditional tomosynthesis, and non-coplanar tomosynthesis. An anthropomorphic chest phantom was used to evaluate the image quality. It contained three lesions of different sizes (3 mm, 5 mm, and 8 mm in diameter). The traditional tomosynthesis acquired 61 projections over a 30-degree angular range in one scanning direction. The non-coplanar tomosynthesis acquired 62 projections over a 30-degree angular range in two scanning directions. A 3D image was reconstructed by an iterative image reconstruction algorithm (ML-EM). Our qualitative method was to evaluate artifacts in the tomosynthesis reconstructed image. The quantitative method was to calculate a peak-to-valley ratio (PVR), that is, the intensity ratio of the lesion to the background. We used PVRs to evaluate the contrast of lesions. The qualitative results showed that in the reconstructed image of non-coplanar scanning, anatomic structures of the chest and lesions could be identified clearly, and no significant scanning-direction-dependent artifacts were observed.
In the 2D X-ray single projection, anatomic structures overlapped and lesions could not be discovered. In the traditional tomosynthesis image, anatomic structures and lesions could be identified clearly, but there were many scanning-direction-dependent artifacts. The quantitative results show that there were no significant differences in PVRs between non-coplanar tomosynthesis and traditional tomosynthesis. The PVRs of the non-coplanar technique were slightly higher than those of the traditional technique for the 5 mm and 8 mm lesions. In non-coplanar tomosynthesis, scanning-direction-dependent artifacts were reduced while the PVRs of lesions were not decreased. The reconstructed image was more isotropically uniform in non-coplanar tomosynthesis than in traditional tomosynthesis. In the future, scan strategy and scan time will be the challenges of the non-coplanar imaging technique.
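The peak-to-valley ratio used above can be sketched in a few lines of Python. The pixel values and region masks below are invented for illustration; the study computed PVRs from the reconstructed phantom volumes.

```python
# PVR: mean intensity in a lesion region relative to its background.
def pvr(image, lesion_mask, background_mask):
    """Mean lesion intensity divided by mean background intensity."""
    lesion = [v for v, m in zip(image, lesion_mask) if m]
    background = [v for v, m in zip(image, background_mask) if m]
    return (sum(lesion) / len(lesion)) / (sum(background) / len(background))

image      = [10, 12, 50, 52, 11, 9]      # flattened pixel intensities
lesion     = [0,  0,  1,  1,  0, 0]       # lesion region mask
background = [1,  1,  0,  0,  1, 1]       # background region mask
print(pvr(image, lesion, background))     # ≈ 4.86
```

A higher PVR means higher lesion contrast, which is why unchanged PVRs alongside fewer artifacts favored the non-coplanar technique.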

Keywords: image reconstruction, non-coplanar imaging technique, tomosynthesis, X-ray imaging

Procedia PDF Downloads 366
195 Modeling Competition Between Subpopulations with Variable DNA Content in Resource-Limited Microenvironments

Authors: Parag Katira, Frederika Rentzeperis, Zuzanna Nowicka, Giada Fiandaca, Thomas Veith, Jack Farinhas, Noemi Andor

Abstract:

Resource limitations shape the outcome of competitions between genetically heterogeneous pre-malignant cells. One example of such heterogeneity is the ploidy (DNA content) of pre-malignant cells. A whole-genome duplication (WGD) transforms a diploid cell into a tetraploid one and has been detected in 28-56% of human cancers. If a tetraploid subclone expands, it consistently does so early in tumor evolution, when cell density is still low and competition for nutrients is comparatively weak, an observation confirmed for several tumor types. WGD+ cells need more resources to synthesize increasing amounts of DNA, RNA, and proteins. To quantify resource limitations and how they relate to ploidy, we performed a pan-cancer analysis of WGD, PET/CT, and MRI scans. Segmentation of >20 different organs from >900 PET/CT scans was performed with MOOSE. We observed a strong correlation between organ-wide population-average estimates of oxygen and the average ploidy of cancers growing in the respective organ (Pearson R = 0.66; P = 0.001). In-vitro experiments using near-diploid and near-tetraploid lineages derived from a breast cancer cell line supported the hypothesis that DNA content influences glucose- and oxygen-dependent proliferation, death, and migration rates. To model how subpopulations with variable DNA content compete in the resource-limited environment of the human brain, we developed a stochastic state-space model of the brain (S3MB). The model discretizes the brain into voxels, whereby the state of each voxel is defined by 8+ variables that are updated over time: stiffness, oxygen, phosphate, glucose, vasculature, dead cells, and migrating and proliferating cells of various DNA content, as well as treatment conditions such as radiotherapy and chemotherapy. Well-established Fokker-Planck partial differential equations govern the distribution of resources and cells across voxels. We applied S3MB to sequencing and imaging data obtained from a primary GBM patient.
We performed whole genome sequencing (WGS) of four surgical specimens collected during the first and second surgeries of the GBM and used HATCHET to quantify its clonal composition and how it changed between the two surgeries. HATCHET identified two aneuploid subpopulations of ploidy 1.98 and 2.29, respectively. The low-ploidy clone was dominant at the time of the first surgery and became even more dominant upon recurrence. MRI images were available before and after each surgery and were registered to MNI space. The S3MB domain was initiated from 4 mm³ voxels of the MNI space. T1-post and T2-FLAIR scans acquired after the first surgery informed tumor cell densities per voxel. Magnetic resonance elastography scans and PET/CT scans informed stiffness and glucose access per voxel. We performed a parameter search to recapitulate the GBM's tumor cell density and ploidy composition before the second surgery. Results suggest that the high-ploidy subpopulation had a higher glucose-dependent proliferation rate (0.70 vs. 0.49) but a lower glucose-dependent death rate (0.47 vs. 1.42). These differences resulted in spatial differences in the distribution of the two subpopulations. Our results contribute to a better understanding of how genomics and microenvironments interact to shape cell fate decisions and could help pave the way to therapeutic strategies that mimic prognostically favorable environments.
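The voxel-level resource dynamics that Fokker-Planck equations govern in S3MB can be illustrated, in heavily simplified form, by an explicit finite-difference step for diffusion plus consumption. The sketch below is one-dimensional and all coefficients are invented, unlike the 3D, multi-variable S3MB model.

```python
# One explicit Euler step of d(O)/dt = D * laplacian(O) - uptake * cells * O
# on a line of voxels with no-flux boundaries.
def step(oxygen, cells, D=0.2, uptake=0.05, dt=1.0):
    n = len(oxygen)
    new = oxygen[:]
    for i in range(n):
        left = oxygen[i - 1] if i > 0 else oxygen[i]        # no-flux boundary
        right = oxygen[i + 1] if i < n - 1 else oxygen[i]
        lap = left - 2 * oxygen[i] + right                  # discrete Laplacian
        new[i] = oxygen[i] + dt * (D * lap - uptake * cells[i] * oxygen[i])
    return new

oxygen = [1.0, 1.0, 1.0, 1.0, 1.0]
cells  = [0.0, 0.0, 5.0, 0.0, 0.0]   # a dense lesion in the middle voxel
for _ in range(10):
    oxygen = step(oxygen, cells)
print([round(o, 3) for o in oxygen])  # oxygen dips in the occupied voxel
```

In the full model, cell densities of each ploidy are advanced with analogous coupled updates, so resource gradients like this one feed back into proliferation and death rates.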

Keywords: tumor evolution, intra-tumor heterogeneity, whole-genome doubling, mathematical modeling

Procedia PDF Downloads 73
194 Individual Cylinder Ignition Advance Control Algorithms of the Aircraft Piston Engine

Authors: G. Barański, P. Kacejko, M. Wendeker

Abstract:

The impact of the ignition advance control algorithms of the ASz-62IR-16X aircraft piston engine on a combustion process has been presented in this paper. This aircraft engine is a nine-cylinder 1000 hp engine with a special electronic control ignition system. This engine has two spark plugs per cylinder with an ignition advance angle dependent on load and the rotational speed of the crankshaft. Accordingly, in most cases, these angles are not optimal for power generated. The scope of this paper is focused on developing algorithms to control the ignition advance angle in an electronic ignition control system of an engine. For this type of engine, i.e. radial engine, an ignition advance angle should be controlled independently for each cylinder because of the design of such an engine and its crankshaft system. The ignition advance angle is controlled in an open-loop way, which means that the control signal (i.e. ignition advance angle) is determined according to the previously developed maps, i.e. recorded tables of the correlation between the ignition advance angle and engine speed and load. Load can be measured by engine crankshaft speed or intake manifold pressure. Due to a limited memory of a controller, the impact of other independent variables (such as cylinder head temperature or knock) on the ignition advance angle is given as a series of one-dimensional arrays known as corrective characteristics. The value of the ignition advance angle specified combines the value calculated from the primary characteristics and several correction factors calculated from correction characteristics. Individual cylinder control can proceed in line with certain indicators determined from pressure registered in a combustion chamber. Control is assumed to be based on the following indicators: maximum pressure, maximum pressure angle, indicated mean effective pressure. Additionally, a knocking combustion indicator was defined. 
Individual control can be applied to a single set of spark plugs only, which results from two fundamental ideas behind the design of the control system: the two ignition control systems must operate independently, even when they operate simultaneously. It is assumed that the entire individual control should be performed for the front spark plug only, and the rear spark plug shall be controlled with a fixed (or specified) offset relative to the front one or from a reference map. The developed algorithms will be verified by simulation and engine test stand experiments. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
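The open-loop map-plus-corrections scheme described above can be sketched as follows. All map values, grid points, and the temperature correction table are invented placeholders, not the actual ASz-62IR-16X calibration.

```python
from bisect import bisect_right

SPEEDS = [1000, 1500, 2000, 2500]          # rpm
LOADS  = [20, 50, 80]                      # intake manifold pressure, kPa
PRIMARY = [                                # advance angle, deg BTDC
    [30, 26, 22],
    [32, 28, 24],
    [34, 30, 26],
    [35, 31, 27],
]
CHT_CORR = {80: 0, 120: -1, 160: -3}       # cylinder head temp -> correction

def nearest_index(grid, value):
    """Index of the grid point nearest to value (no interpolation)."""
    i = bisect_right(grid, value)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(grid)]
    return min(candidates, key=lambda j: abs(grid[j] - value))

def ignition_advance(speed, load, cht):
    """Primary map lookup plus a 1-D corrective characteristic."""
    base = PRIMARY[nearest_index(SPEEDS, speed)][nearest_index(LOADS, load)]
    corr = CHT_CORR[min(CHT_CORR, key=lambda t: abs(t - cht))]
    return base + corr

print(ignition_advance(1900, 55, 150))  # 30 + (-3) = 27
```

A production controller would interpolate between grid points and combine several corrective characteristics (e.g. a knock correction); the structure, however, is the same table-plus-corrections composition the abstract describes.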

Keywords: algorithm, combustion process, radial engine, spark plug

Procedia PDF Downloads 293
193 Liability of AI in Workplace: A Comparative Approach Between Shari’ah and Common Law

Authors: Barakat Adebisi Raji

Abstract:

In the workplace, artificial intelligence (AI) has, in recent years, emerged as a transformative technology that revolutionizes how organizations operate and perform tasks. It is a technology that has a significant impact on transportation, manufacturing, education, cyber security, robotics, agriculture, healthcare, and many other sectors. By harnessing AI technology, workplaces can enhance productivity, streamline processes, and make more informed decisions. Given the potential of AI to change the way we work and its impact on the labor market in years to come, employers understand that it entails legal challenges and risks despite its inherent advantages. Therefore, as AI continues to integrate into various aspects of the workplace, understanding the legal and ethical implications becomes paramount. Also central to this study is the question of who is held liable where AI defaults: the person (company) who created the AI, the person who programmed the AI algorithm, or the person who uses the AI? Thus, the aim of this paper is to provide a detailed overview of how AI-related liabilities are addressed under each legal tradition and to shed light on potential areas of accord and divergence between the two legal cultures. The objectives of this paper are to (i) examine the ability of common law and Islamic law to accommodate the issues and damage caused by AI in the workplace and the legality of compensation for such injury sustained; (ii) discuss the extent to which AI can be described as a legal personality to bear responsibility; and (iii) examine the similarities and disparities between common law and Islamic jurisprudence on the liability of AI in the workplace. The methodology adopted in this work was qualitative, and the method was purely a doctrinal research method in which information is gathered from the primary and secondary sources of law, such as comprehensive materials found in journal articles, expert-authored books, and online news sources.
Comparative legal method was also used to juxtapose the approach of Islam and Common Law. The paper concludes that since AI, in its current legal state, is not recognized as a legal entity, operators or manufacturers of AI should be held liable for any damage that arises, and the determination of who bears the responsibility should be dependent on the circumstances surrounding each scenario. The study recommends the granting of legal personality to AI systems, the establishment of legal rights and liabilities for AI, the establishment of a holistic Islamic virtue-based AI ethics framework, and the consideration of Islamic ethics.

Keywords: AI, health care, agriculture, cyber security, common law, Shari'ah

Procedia PDF Downloads 37
192 Model Reference Adaptive Approach for Power System Stabilizer for Damping of Power Oscillations

Authors: Jožef Ritonja, Bojan Grčar, Boštjan Polajžer

Abstract:

In recent years, electricity trade between neighboring countries has become increasingly intense. Increasing power transmission over long distances has resulted in an increase in the oscillations of the transmitted power. The damping of the oscillations can be improved by reconfiguring the network or replacing generators, but such solutions are not economically reasonable. The only cost-effective solution to improve the damping of power oscillations is to use power system stabilizers. A power system stabilizer is part of the synchronous generator control system. It acts through the semiconductor excitation system connected to the rotor field excitation winding to increase the damping of the power system. The majority of synchronous generators are equipped with conventional power system stabilizers with fixed parameters. The control structure of conventional power system stabilizers and the tuning procedure are based on linear control theory. Conventional power system stabilizers are simple to realize, but they show insufficient damping improvement across the entire range of operating conditions. For this reason, advanced control theories are used for the development of better power system stabilizers. In this paper, adaptive control theory for power system stabilizer design and synthesis is studied. The presented work is focused on the model reference adaptive control approach. The control signal, which assures that the controlled plant output will follow the reference model output, is generated by the adaptive algorithm. Adaptive gains are obtained as a combination of a "proportional" term and an "integral" term extended with a σ-term. The σ-term is introduced to avoid divergence of the integral gains. The necessary condition for asymptotic tracking is derived by means of hyperstability theory.
The benefits of the proposed model reference adaptive power system stabilizer were evaluated as objectively as possible by means of a theoretical analysis, numerical simulations, and laboratory realizations. Damping of the synchronous generator oscillations in the entire operating range was investigated. The obtained results show improved damping in the entire operating area and an increase in power system stability. The results of the presented work will support the development of a model reference power system stabilizer able to replace the conventional stabilizers in power systems.
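The model reference adaptive idea with σ-modified integral gains can be illustrated on a first-order toy plant. All gains and dynamics below are invented and far simpler than a synchronous generator model; the sketch only shows the structure: adaptive gains driven by the tracking error, with a σ-term leaking the gains to prevent drift.

```python
def simulate(steps=8000, dt=0.005):
    a_p, b_p = 1.0, 3.0        # unstable plant: dx/dt = a_p*x + b_p*u
    a_m, b_m = -4.0, 4.0       # stable reference model: dxm/dt = a_m*xm + b_m*r
    gamma, sigma = 20.0, 0.1   # adaptation gain and sigma-modification
    x = xm = kx = kr = 0.0
    r = 1.0                    # constant reference input
    for _ in range(steps):
        e = x - xm                          # tracking error
        u = kx * x + kr * r                 # adaptive control law
        # Integral gain updates with a sigma-term that bounds the gains.
        kx += dt * (-gamma * e * x - sigma * kx)
        kr += dt * (-gamma * e * r - sigma * kr)
        x  += dt * (a_p * x + b_p * u)      # plant (explicit Euler)
        xm += dt * (a_m * xm + b_m * r)     # reference model
    return x, xm

x, xm = simulate()
print(round(x, 3), round(xm, 3))   # plant output tracks the model output
```

With σ = 0 the integral gains can drift under disturbances; the leakage term trades a small steady-state tracking bias for bounded gains, which is the role the abstract assigns to the σ-term.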

Keywords: power system, stability, oscillations, power system stabilizer, model reference adaptive control

Procedia PDF Downloads 138
191 Smart Laboratory for Clean Rivers in India - An Indo-Danish Collaboration

Authors: Nikhilesh Singh, Shishir Gaur, Anitha K. Sharma

Abstract:

Climate change and anthropogenic stress have severely affected ecosystems all over the globe. Indian rivers are under immense pressure, facing challenges like pollution, encroachment, extreme fluctuations in the flow regime, local ignorance, and lack of coordination between stakeholders. To counter all these issues, a holistic river rejuvenation plan is needed that tests, innovates, and implements sustainable solutions in the river space for sustainable river management. Smart Laboratory for Clean Rivers (SLCR), an Indo-Danish collaboration project, provides a living lab setup that brings all the stakeholders (government agencies, academic and industrial partners, and locals) together to engage, learn, co-create, and experiment for a clean and sustainable river that lasts for ages. Just as every mega project requires piloting, SLCR has opted for a small catchment of the Varuna River, located in the Middle Ganga Basin in India. Taking an integrated approach to river rejuvenation, SLCR embraces various techniques and upgrades. For maintaining flow in the channel during the lean period, managed aquifer recharge (MAR) is a proven technology. In SLCR, Floa-TEM high-resolution lithological data is used in MAR models to support better decision-making about MAR structures near the river, enhancing river-aquifer exchanges. Furthermore, water quality in the river is a major concern. A city like Varanasi, which is located in the last stretch of the river, generates almost 260 MLD of domestic waste in the catchment. The existing STP system is working at full capacity. Instead of installing a new STP, SLCR is upgrading the existing STPs with an IoT-based system that optimizes operation according to nutrient load and energy consumption. SLCR also advocates nature-based solutions, like reed beds, for drains with low flow.
In the search for micropollutants, SLCR uses fingerprint analysis, which involves employing advanced techniques like chromatography and mass spectrometry to create unique chemical profiles. However, rejuvenation attempts are not possible without involving the entire catchment. A holistic water management plan covers storm management, water harvesting structures to efficiently manage the flow of water in the catchment, and the installation of several buffer zones to restrict pollutants from entering the river. Similarly, carbon (emission and sequestration) is also an important parameter for the catchment. By adopting eco-friendly practices, a ripple effect positively influences the catchment's water dynamics and aids in the revival of river systems. SLCR has adopted 4 villages to make them carbon-neutral and water-positive. Moreover, for 24×7 monitoring of the river and the catchment, robust IoT devices are going to be installed to observe river and groundwater quality, groundwater level, river discharge, and carbon emissions in the catchment, and ultimately to provide fuel for data analytics. On its completion, SLCR will provide a river restoration manual, which will set out the detailed plan and way of implementation for stakeholders. Lastly, the entire process is planned in such a way that it will be managed by local administrations and stakeholders equipped through capacity-building activities. This holistic approach makes SLCR unique in the field of river rejuvenation.

Keywords: sustainable management, holistic approach, living lab, integrated river management

Procedia PDF Downloads 60
190 Structural Optimization, Design, and Fabrication of Dissolvable Microneedle Arrays

Authors: Choupani Andisheh, Temucin Elif Sevval, Bediz Bekir

Abstract:

Due to their various advantages over many other drug delivery systems, such as hypodermic injections and oral medications, microneedle arrays (MNAs) are a promising drug delivery system. To achieve enhanced MN performance, it is crucial to develop numerical models, optimization methods, and simulations. Accordingly, in this work, the optimized design of dissolvable MNAs, as well as their manufacturing, is investigated. For this purpose, a mechanical model of a single MN, having the geometry of an obelisk, is developed using commercial finite element software. The model considers the condition in which the MN is under pressure at the tip caused by the reaction force when penetrating the skin. Then, a multi-objective optimization based on the non-dominated sorting genetic algorithm II (NSGA-II) is performed to obtain geometrical properties such as needle width, tip (apex) angle, and base fillet radius. The objective of the optimization study is to achieve painless and effortless penetration into the skin while minimizing mechanical failure caused by the maximum stress occurring in the structure. Based on the obtained optimal design parameters, master (male) molds are then fabricated from PMMA using a mechanical micromachining process. This fabrication method is selected mainly due to its geometric capability, production speed, production cost, and the variety of materials that can be used. To remove any chip residues, the master molds are then cleaned ultrasonically. These fabricated master molds can be used repeatedly to fabricate polydimethylsiloxane (PDMS) production (female) molds through a micro-molding approach. Finally, polyvinylpyrrolidone (PVP), a dissolvable polymer, is cast into the production molds under vacuum to produce the dissolvable MNAs. This fabrication methodology can also be used to fabricate MNAs that include bioactive cargo.
To characterize and demonstrate the performance of the fabricated needles, (i) scanning electron microscope images are taken to show the accuracy of the fabricated geometries, and (ii) in-vitro piercing tests are performed on artificial skin. It is shown that optimized MN geometries can be precisely fabricated using the presented fabrication methodology and the fabricated MNAs effectively pierce the skin without failure.
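At the core of the NSGA-II optimization used above is the Pareto-dominance test. The Python sketch below applies it to invented two-objective microneedle candidates (insertion force and maximum stress, both to be minimized); the numbers are illustrative only.

```python
# Pareto dominance and the first non-dominated front, as used in NSGA-II.
def dominates(a, b):
    """True if design a is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Designs not dominated by any other design."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

# (insertion force in mN, max von Mises stress in MPa) per candidate geometry
designs = [(12, 80), (10, 95), (15, 70), (11, 90), (14, 85)]
print(pareto_front(designs))  # [(12, 80), (10, 95), (15, 70), (11, 90)]
```

NSGA-II repeats this sorting over successive fronts and adds crowding-distance selection; the dominance test alone already shows why (14, 85) is discarded: (12, 80) is better in both objectives.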

Keywords: microneedle, microneedle array fabrication, micro-manufacturing, structural optimization, finite element analysis

Procedia PDF Downloads 113
189 Conceptual Design of a Residential House Based on IDEA 4E - Discussion of the Process of Interdisciplinary Pre-Project Research and Optimal Design Solutions Created as Part of Project-Based Learning

Authors: Dorota Winnicka-Jasłowska, Małgorzata Jastrzębska, Jan Kaczmarczyk, Beata Łaźniewska-Piekarczyk, Piotr Skóra, Beata Kobiałko, Agata Kołodziej, Błażej Mól, Ewelina Lasyk, Karolina Brzęczek, Michał Król

Abstract:

Creating economical, comfortable, and healthy buildings that respect the environment is a necessity resulting from legal regulations, but it is also a response to the expectations of a modern investor. Developing the concept of a residential house based on the 4E and 2+2+(1) IDEAs is a complex process that requires specialist knowledge of many trades and the adoption of comprehensive solutions. IDEA 4E assumes the use of energy-saving, ecological, ergonomic, and economic solutions. In addition, IDEA 2+2+(1), which assumes appropriate surface and functional-spatial solutions for a family at different stages of a building's life, i.e. for 2, 4, or 5 members, enforces a certain flexibility of the designed building, which may change with the number and age of its users. The building should therefore be easy to rearrange or expand. The task defined in this way was carried out by an interdisciplinary team of students of the Silesian University of Technology as part of project-based learning (PBL). The team consisted of 6 undergraduate and graduate students representing the following faculties: 3 students of architecture, 2 students of civil engineering, and 1 student of environmental engineering. The work of the team was supported by 3 academic teachers representing the above-mentioned faculties and additional experts. The project was completed in one semester. The article presents the successive stages of the project. First, pre-design studies were carried out to define the guidelines for the project. For this purpose, the "Model house" questionnaire was developed. The questions concerned the utility needs of a potential family that would live in a model house, specifying the types of rooms, their size, and their equipment. A total of 114 people participated in the study. The answers to the questions in the survey helped to build the functional programme of the designed house.
Further research consisted of the search for optimal technological and construction solutions and the most appropriate building materials, based mainly on recycling. Appropriate HVAC systems responsible for the building's microclimate were also selected, i.e. low-temperature heating and mechanical ventilation, and the use of energy from renewable sources was planned so as to obtain a nearly zero-energy building. Additionally, rainwater retention and its local use were planned. The result of the project was the design of a model residential building that meets the presented assumptions. A 3D VR spatial model of the designed building and its surroundings was also made. The final result was the organization of an exhibition for students and the academic community. Participation in the interdisciplinary project allowed the project team members to better understand the consequences of the adopted solutions for achieving the assumed effect and the need to work out compromises. The implementation of the project made all its participants aware of the importance of cooperation as well as systematic and clear communication. The need to define milestones and enforce them consistently is an important element guaranteeing the achievement of the intended end result. The implementation of PBL enables students to acquire competences important in their future professional work.

Keywords: architecture and urban planning, civil engineering, environmental engineering, project-based learning, sustainable building

Procedia PDF Downloads 115
188 Isotope Effects on Inhibitors Binding to HIV Reverse Transcriptase

Authors: Agnieszka Krzemińska, Katarzyna Świderek, Vicente Molinier, Piotr Paneth

Abstract:

In order to understand in detail the interactions between ligands and the enzyme, isotope effects were studied for clinically used drugs that bind in the active site of Human Immunodeficiency Virus Reverse Transcriptase, HIV-1 RT, as well as for a triazole-based inhibitor that binds in the allosteric pocket of this enzyme. The magnitudes and origins of the resulting binding isotope effects were analyzed. Subsequently, binding isotope effects of the same triazole-based inhibitor bound in the active site were analyzed and compared. Together, these results show differences in binding origins in the two sites of the enzyme and allow the binding mode and location of newly synthesized inhibitors to be analyzed. A typical protocol is described below using the example of the triazole ligand in the allosteric pocket. The triazole was docked into the allosteric cavity of HIV-1 RT with Glide using the extra-precision mode as implemented in the Schroedinger software. The structure of HIV-1 RT was obtained from the Protein Data Bank (PDB ID: 2RKI). The pKa of the titratable amino acids was calculated using the PROPKA software, and in order to neutralize the system, 15 Cl⁻ ions were added using the tLEaP package implemented in AMBERTools ver. 1.5. The N-termini and C-termini were also built using tLEaP. The system was placed in a 144 × 160 × 144 Å³ orthorhombic box of water molecules using the NAMD program. Missing parameters for the triazole were obtained at the AM1 level using the Antechamber software implemented in AMBERTools. The energy minimizations were carried out by means of a conjugate gradient algorithm using NAMD. Then the system was heated from 0 to 300 K with a temperature increment of 0.001 K. Subsequently, a 2 ns Langevin−Verlet (NVT) MM MD simulation with the AMBER force field implemented in NAMD was carried out. Periodic boundary conditions were applied, with cut-offs for the nonbonding interactions in the range from 14.5 to 16 Å. After 2 ns of relaxation, 200 ps of QM/MM MD at 300 K were simulated.
The triazole was treated quantum mechanically at the AM1 level, the protein was described using AMBER, and the water molecules were described using TIP3P, as implemented in the fDynamo library. Molecules more than 20 Å from the triazole were kept frozen, with cut-offs established in the range from 14.5 to 16 Å. To describe the interactions between the triazole and RT, the free energy of binding was computed using the Free Energy Perturbation method. The change in frequencies from the ligand in solution to the ligand bound in the enzyme was used to calculate the binding isotope effects.
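The frequency-based calculation mentioned in the last sentence can be sketched with the Bigeleisen-Mayer reduced isotopic partition function ratio, taking the binding isotope effect as the ratio of the free-ligand to bound-ligand factors. The frequencies below are invented single-mode placeholders (a C-H/C-D stretch that stiffens slightly on binding); real calculations use all normal modes of the ligand in both environments.

```python
from math import exp

H = 6.62607015e-34   # Planck constant, J s
KB = 1.380649e-23    # Boltzmann constant, J/K
C = 2.99792458e10    # speed of light, cm/s

def u(freq_cm, T=300.0):
    """Reduced frequency u = h*nu/(kB*T) from a wavenumber in cm^-1."""
    return H * C * freq_cm / (KB * T)

def rpfr(light_freqs, heavy_freqs, T=300.0):
    """Bigeleisen-Mayer reduced isotopic partition function ratio."""
    f = 1.0
    for fl, fh in zip(light_freqs, heavy_freqs):
        ul, uh = u(fl, T), u(fh, T)
        f *= (uh / ul) * exp((ul - uh) / 2) * (1 - exp(-ul)) / (1 - exp(-uh))
    return f

free_l, free_h   = [3000.0], [2200.0]   # light/heavy mode, ligand in solution
bound_l, bound_h = [3020.0], [2215.0]   # the same mode, ligand in the enzyme
bie = rpfr(free_l, free_h) / rpfr(bound_l, bound_h)
print(round(bie, 4))   # < 1: an inverse binding isotope effect
```

A value below 1 (an inverse effect) indicates that the heavy isotopologue favors the stiffer bound environment, which is the kind of signature used to distinguish binding sites.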

Keywords: binding isotope effects, molecular dynamics, HIV, reverse transcriptase

Procedia PDF Downloads 431
187 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigations are not keeping up with criminal developments. Therefore, criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable in identifying crime. It has been observed that algorithms based on artificial intelligence (AI) are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims at developing a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented using the Java Agent Development Framework within Eclipse, with a Postgres repository and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute.
Loading the agents cost about 5 percent of the total time, as the File Path Agent prescribed deleting 1,510 files while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
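The role of a hash-matching agent can be illustrated with a minimal sketch. This is not the authors' MADIK code (which is Java/JADE based); the function names and the tiny in-memory "evidence" are illustrative assumptions.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def hash_set_agent(files: dict, known_bad: set) -> list:
    """Flag files whose digest appears in a known-bad hash set,
    mimicking the task a Hash Set Agent would perform on an image."""
    return [name for name, data in files.items()
            if sha256_digest(data) in known_bad]

# Tiny stand-in for files carved from a disk image.
evidence = {"a.exe": b"malicious payload", "notes.txt": b"hello"}
bad_hashes = {sha256_digest(b"malicious payload")}
print(hash_set_agent(evidence, bad_hashes))  # flags a.exe only
```

In a multi-agent setting, several such specialized agents (hash set, file path, timeline) would run concurrently and report their findings to a coordinating agent.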

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 212
186 Automatic and Highly Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Therefore, such models might not be applicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the system must be adapted manually. Therefore, an approach is described that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general since it generates models for any system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g.
time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has successfully been tested in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error in precision of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute highly precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model and optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
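The core idea of a regression that includes products of variables (a truncated multivariate series expansion) can be sketched as follows. The data and coefficients are synthetic illustrations, not the paper's industrial examples.

```python
import numpy as np

# Synthetic data whose true law contains a product term x1*x2,
# which a purely linear regression on x1 and x2 would miss.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 200)
x2 = rng.uniform(-1, 1, 200)
y = 2.0 * x1 + 3.0 * x1 * x2 + 0.5

# Design matrix: constant, linear, and product terms.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coeffs, 3))  # recovers ≈ [0.5, 2.0, 0.0, 3.0]
```

Because the feature set contains the product `x1 * x2`, least squares recovers the causal correlation exactly; with only single-variable features the fit would be systematically biased.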

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 409
185 Development of Coastal Inundation–Inland and River Flow Interface Module Based on 2D Hydrodynamic Model

Authors: Eun-Taek Sin, Hyun-Ju Jang, Chang Geun Song, Yong-Sik Han

Abstract:

Due to climate change, coastal urban areas repeatedly suffer loss of property and life from flooding. There are three main causes of inland submergence. First, when heavy rain with high intensity occurs, inland water cannot be drained into rivers because of the increase in impervious surfaces from land development and defects in pumps and storm sewers. Second, river inundation occurs when the water surface level surpasses the top of the levee. Finally, coastal inundation occurs due to rising seawater. However, previous studies ignored the complex mechanism of flooding and showed discrepancies and inadequacies due to the linear summation of separate analysis results. In this study, inland flooding and river inundation were analyzed together with the HDM-2D model. The Petrov-Galerkin stabilizing method and a flux-blocking algorithm were applied to simulate the inland flooding. In addition, sink/source terms with an exponential growth rate attribute were added to the shallow water equations to include the inland flooding analysis module. The applications of the developed model gave satisfactory results and provided accurate predictions in comprehensive flooding analysis. To consider the coastal surge, another module was developed by adding seawater to the existing inland flooding-river inundation binding module for comprehensive flooding analysis. Based on the combined modules, the coastal inundation-inland and river flow interface was simulated by inputting flow rate and depth data in an artificial flume. Accordingly, it was possible to analyze the flood patterns of coastal cities over time. This study is expected to help identify the complex causes of flooding in coastal areas where complex flooding occurs and to assist in analyzing damage to coastal cities.
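The idea of adding a source term with an exponential growth attribute to the governing equations can be shown in a drastically simplified form: a single explicit time step of a water-depth variable. The coefficients and the scalar update are illustrative assumptions, not the HDM-2D discretization.

```python
import math

def step_depth(h: float, t: float, dt: float,
               q0: float = 0.01, k: float = 0.5) -> float:
    """Advance water depth h by one explicit Euler step under an
    exponentially growing source term q(t) = q0 * exp(k * t)."""
    return h + dt * q0 * math.exp(k * t)

# Ten steps of inflow from the exponentially growing source.
h, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    h = step_depth(h, t, dt)
    t += dt
print(round(h, 4))  # depth grows monotonically above 1.0
```

In the full 2D model this scalar source would appear in the continuity equation of the shallow water system, coupled to the momentum equations.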
Acknowledgements—This research was supported by a grant ‘Development of the Evaluation Technology for Complex Causes of Inundation Vulnerability and the Response Plans in Coastal Urban Areas for Adaptation to Climate Change’ [MPSS-NH-2015-77] from the Natural Hazard Mitigation Research Group, Ministry of Public Safety and Security of Korea.

Keywords: flooding analysis, river inundation, inland flooding, 2D hydrodynamic model

Procedia PDF Downloads 362
184 Separation of Lanthanides Ions from Mineral Waste with Functionalized Pillar[5]Arenes: Synthesis, Physicochemical Characterization and Molecular Dynamics Studies

Authors: Ariesny Vera, Rodrigo Montecinos

Abstract:

The rare-earth elements (REEs), or rare-earth metals (REMs), comprise seventeen chemical elements: the fifteen lanthanoids plus scandium and yttrium. The lanthanoids correspond to lanthanum and the f-block elements from cerium to lutetium. Scandium and yttrium are considered rare-earth elements because they have ionic radii similar to the lighter f-block elements. These elements were called rare earths because they are simply more difficult to extract and separate individually than most metals; generally, they do not accumulate in minerals, are rarely found in easily mined ores, and are often unfavorably distributed in common ores and minerals. REEs show unique chemical and physical properties in comparison to the other metals in the periodic table. Nowadays, these physicochemical properties are utilized in a wide range of synthetic, catalytic, electronic, medicinal, and military applications. Because of these applications, the global demand for rare-earth metals is becoming progressively more important in the transition to a self-sustaining society and a greener economy. However, due to the difficult separation of lanthanoid ions and the high cost and pollution of existing processes, scientists seek a method that combines selective and quantitative separation of lanthanoids from the leaching liquor with more economical and environmentally friendly processing. This motivation has favored the design and development of more efficient and environmentally friendly cation extractors incorporating compounds such as ionic liquids, polymer inclusion membranes (PIMs), and supramolecular systems. Supramolecular chemistry focuses on the development of host-guest systems, in which a host molecule can recognize and bind a certain guest molecule or ion.
Normally, the formation of a host-guest complex involves non-covalent interactions. Additionally, host-guest interactions can be influenced, among other effects, by the structural nature of the host and guests. The different macrocyclic hosts that have been studied for lanthanoid species are crown ethers, cyclodextrins, cucurbiturils, calixarenes, and pillararenes. Among all the factors that can influence lanthanoid(III) coordination, perhaps the most basic is systematic control using macrocyclic substituents that promote selective coordination. In this sense, the macrocyclic pillar[n]arenes (P[n]As) are relatively easy to functionalize and have a more π-rich cavity than other host molecules. This gives P[n]As a negative electrostatic potential in the cavity, which would be responsible for the selectivity of these compounds towards cations. Furthermore, the cavity size, the linker, and the functional groups of the polar headgroups can be modified in order to control the association of lanthanoid cations. Accordingly, different P[n]A systems, specifically derivatives of the pentamer P[5]A functionalized with amide, amine, phosphate, and sulfate groups, have been designed in terms of experimental synthesis and molecular dynamics, and the interaction between these P[5]As and lanthanoid ions such as La³⁺, Eu³⁺, and Lu³⁺ has been studied by physicochemical characterization (¹H-NMR, ITC, and, for the Eu³⁺ systems, fluorescence). The molecular dynamics study of these systems was carried out in hexane as solvent, also considering the lanthanoid ions mentioned above, with the respective comparison studies between the different ions.

Keywords: lanthanoids, macrocycles, pillar[n]arenes, rare-earth metal extraction, supramolecular chemistry, supramolecular complexes

Procedia PDF Downloads 77
183 Screening and Improved Production of an Extracellular β-Fructofuranosidase from Bacillus sp.

Authors: Lynette Lincoln, Sunil S. More

Abstract:

With the rising demand for sugar today, world sugar production is expected to escalate to 203 million tonnes by 2021. Hydrolysis of sucrose (table sugar) into an equimolar mixture of glucose and fructose is catalyzed by β-D-fructofuranoside fructohydrolase (EC 3.2.1.26), commonly called invertase. Invertase is widely applied for fluid-filled centers in chocolates, the preparation of artificial honey, as a sweetener, and especially to ensure that foodstuffs remain fresh, moist, and soft for longer spans. From an industrial perspective, properties such as increased solubility, osmotic pressure, and prevention of sugar crystallization in food products are highly desired. Screening for invertase does not involve a plate assay or other qualitative test to determine enzyme production. In this study, we use a three-step screening strategy for the identification of a novel bacterial isolate from soil that is positive for invertase production. In the primary step, serial dilutions of soil collected from sugarcane fields (black soil, Maddur region of Mandya district, Karnataka, India) were grown on a Czapek-Dox medium (pH 5.0) containing sucrose as the sole carbon source. Only colonies capable of utilizing or breaking down sucrose exhibited growth; these isolates released invertase in order to take up sucrose, splitting the disaccharide into simple sugars. Secondly, invertase activity was determined from the cell-free extract by measuring the glucose released into the medium at 540 nm. Thirdly, the most potent isolate was examined morphologically through several identification tests using Bergey's manual, which identified the genus of the isolate as Bacillus. Furthermore, this potent bacterial colony was subjected to 16S rDNA PCR amplification, and a single discrete PCR amplicon band of 1,500 bp was observed.
The 16S rDNA sequence was aligned using the BLAST tool against the NCBI GenBank database to obtain the maximum identity score for the sequence. Molecular sequencing and identification were performed by Xcelris Labs Ltd. (Ahmedabad, India). The colony was identified as Bacillus sp. BAB-3434, the first novel strain reported for extracellular invertase production. Molasses, a by-product of the sugarcane industry, is a dark viscous liquid obtained upon crystallization of sugar. Enhanced invertase production and optimization studies were carried out using a one-factor-at-a-time approach. Crucial parameters such as time course (24 h), pH (6.0), temperature (45 °C), inoculum size (2% v/v), N-source (yeast extract, 0.2% w/v), and C-source (molasses, 4% v/v) were found to be optimal, demonstrating an increased yield. The findings of this study reveal a simple screening method for an extracellular invertase from a rapidly growing Bacillus sp. and the selection of the factors that most elevate enzyme activity, especially the use of molasses, which served as an ideal substrate and C-source, resulting in cost-effective production under submerged conditions. The invert mixture could be a replacement for table sugar, which is an economic advantage and reduces the tedious work of sugar growers. Ongoing studies involve purification of the extracellular invertase and determination of its transfructosylating activity, since at high sucrose concentrations invertase produces fructooligosaccharides (FOS), which possess prebiotic properties.

Keywords: Bacillus sp., invertase, molasses, screening, submerged fermentation

Procedia PDF Downloads 231
182 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention

Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang

Abstract:

Modern approaches to training intelligent agents rely on prolonged training sessions, large amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms in robotics and real-world applications, where there is low tolerance for inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue by introducing CyberSteer, a novel approach to efficiently designing intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for the initial demonstration of a given task or desired behavior. The collected trajectories are used to train a behavior cloning deep neural network that runs asynchronously in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between the actions suggested and those taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still safely shaping the behavior of autonomous robotic agents, in this case based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, Microsoft AirSim. The simulated aerial robot performs collision avoidance through a cluttered forest environment using forward-looking depth sensing and roll, pitch, and yaw reference angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human who is only aware of the high-level goals of the task.
Decreasing the amount of training time required and increasing safety during training maneuvers will allow for faster deployment of intelligent robotic agents in dynamic real-world applications.
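A similarity-based intrinsic reward of the kind described can be sketched as follows. The Euclidean-distance similarity measure and its scale are assumptions for illustration, not the paper's exact formulation.

```python
import math

def intrinsic_reward(suggested, taken, scale: float = 1.0) -> float:
    """Map the Euclidean distance between the behavior-cloning network's
    suggested action vector and the RL agent's taken action to (0, 1]:
    identical actions earn the maximum reward."""
    dist = math.sqrt(sum((s - t) ** 2 for s, t in zip(suggested, taken)))
    return math.exp(-scale * dist)

# Action vectors as roll, pitch, yaw reference angles (radians).
print(intrinsic_reward([0.1, 0.0, 0.2], [0.1, 0.0, 0.2]))   # identical -> 1.0
print(intrinsic_reward([0.1, 0.0, 0.2], [0.5, -0.3, 0.0]))  # dissimilar -> smaller
```

The RL agent maximizing this signal is pushed toward the human-demonstrated behavior without any environment-dependent reward.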

Keywords: human-robot interaction, intelligent robots, robot learning, semisupervised learning, unmanned aerial vehicles

Procedia PDF Downloads 259
181 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose

Authors: Kumar Shashvat, Amol P. Bhondekar

Abstract:

Among the five senses, odor is the most evocative and the least understood. Odor testing has been mysterious and odor data elusive to most practitioners. The problem of recognition and classification of odor is important to solve: the ability to smell and predict whether an artifact is still of use or has become undesirable for consumption makes the imitation of this capability in a model worth considering. The general industrial standard for this classification is color-based; however, odor can be a better classifier than color and, if incorporated into a machine, would be highly useful. For cataloging the odor of peas, trees, and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are incapable of making effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they are able to handle problems such as settings where the variability in the range of possible input vectors is enormous. Generative models are integrated in machine learning either for modeling data directly or as an intermediate step to form a probability density function. The models Linear Discriminant Analysis and the Naive Bayes classifier have been used for classification of the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem.
The main advantage of generative models is that they make stronger assumptions about the data, specifically about the distribution of predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is the electronic nose, a device designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated in terms of the performance measures accuracy, precision, and recall. The experimental results show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
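The generative idea named above can be illustrated with a minimal Gaussian naive Bayes classifier that models the class-conditional distribution of each sensor feature. The data are synthetic stand-ins for e-nose readings, not the paper's cashew dataset.

```python
import numpy as np

# Synthetic sensor vectors for two odor classes with separated means.
rng = np.random.default_rng(1)
X0 = rng.normal(loc=0.0, scale=1.0, size=(100, 4))  # class 0
X1 = rng.normal(loc=3.0, scale=1.0, size=(100, 4))  # class 1

def fit_gaussian_nb(class_data):
    """Per-class feature means and variances: the generative model."""
    return [(X.mean(axis=0), X.var(axis=0) + 1e-9) for X in class_data]

def log_likelihood(x, mean, var):
    """Log density of x under an axis-aligned Gaussian (naive independence)."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def predict(model, x):
    """Pick the class whose Gaussian model best explains x (equal priors)."""
    return int(np.argmax([log_likelihood(x, m, v) for m, v in model]))

model = fit_gaussian_nb([X0, X1])
print(predict(model, np.zeros(4)), predict(model, np.full(4, 3.0)))  # 0 1
```

Because the model is a density over the predictors, it can also score how typical an unlabeled sample is, which is the property that discriminative classifiers lack.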

Keywords: odor classification, generative models, naive bayes, linear discriminant analysis

Procedia PDF Downloads 387
180 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction

Authors: Mohammad Ghahramani, Fahimeh Saei Manesh

Abstract:

Winning a soccer game is based on thorough and deep analysis of the ongoing match. On the other hand, giant gambling companies are in vital need of such analysis to reduce their losses against their customers. In this research work, we perform deep, real-time analysis on every soccer match around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called 'Analyst Masters.' First, we introduce various sources of information available for soccer analysis for teams around the world, which helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of soccer matches and their odds, both pre-match and in-play, are represented in image format versus time, including halftime. Local Binary Patterns (LBP) are then employed to extract features from the image. Our analyses reveal remarkably interesting features and rules once a soccer match has reached sufficient stability. For example, our '8-minute rule' implies that if Team A scores a goal and can maintain the result for at least 8 minutes, then a stable match would end in their favor. We could also make accurate pre-match predictions of whether fewer or more than 2.5 goals would be scored. We use Gradient Boosting Trees (GBT) to extract highly relevant features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks properties such as bettors' and punters' behavior and its statistical data before issuing the prediction. The proposed method was trained using 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches.
Our database of 240,000 matches shows that one can earn over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market. Top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches from 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
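The LBP feature-extraction step can be sketched on a single 3x3 patch: each neighbour is thresholded against the centre pixel and the resulting bits form an 8-bit code. The tiny array is an illustrative stand-in for the match-statistics-versus-time "image" described above, and the clockwise bit ordering is one common convention, not necessarily the one the authors used.

```python
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """8-bit LBP code for a 3x3 patch: bit i is set when the i-th
    neighbour (clockwise from top-left) is >= the centre value."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= c)

img = np.array([[5, 9, 1],
                [3, 4, 7],
                [2, 8, 6]])
print(lbp_code(img))
```

Sliding this window over the whole statistics image and histogramming the codes yields the texture features that are then fed to the stability classifier.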

Keywords: soccer, analytics, machine learning, database

Procedia PDF Downloads 238
179 Assessing Online Learning Paths in a Learning Management System Using a Data Mining and Machine Learning Approach

Authors: Alvaro Figueira, Bruno Cabral

Abstract:

Nowadays, students are used to being assessed through an online platform. Educators have moved on from a period in which they endured the transition from paper to digital. The use of a diversified set of question types, ranging from quizzes to open questions, is currently common in most university courses. In many courses today, the evaluation methodology also fosters the students' online participation in forums, the download and upload of modified files, or even participation in group activities. At the same time, new pedagogical theories that promote the active participation of students in the learning process, and the systematic use of problem-based learning, are being adopted using an eLearning system for that purpose. However, although these activities can generate a lot of feedback for students, it is usually restricted to the assessment of well-defined online tasks. In this article, we propose an automatic system that informs students of abnormal deviations from a 'correct' learning path in the course. Our approach is based on the premise that obtaining this information early in the semester may give students and educators an opportunity to resolve an eventual problem regarding the student's current online actions in the course. Our goal is to prevent situations that have a significant probability of leading to a poor grade and, eventually, to failing. In the major learning management systems (LMS) currently available, the interaction between the students and the system is registered in log files in the form of records that mark the beginning of actions performed by the user. Our proposed system uses that logged information to derive new information: the time each student spends on each activity, the time and order of the resources used by the student and, finally, the online resource usage pattern.
Then, using the grades assigned to students in previous years, we built a learning dataset that is used to feed a machine learning meta-classifier. The produced classification model is then used to predict the grade a learning path is heading toward in the current year. This approach serves not only the teacher but also the student, who receives automatic feedback on their current situation with past years as a perspective. Our system can be applied to online courses that integrate an online platform storing user actions in a log file and that have access to other students' evaluations. The system is based on a data mining process over the log files and a self-feedback machine learning algorithm that works paired with the Moodle LMS.
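The first derivation step (turning action-start records into time spent per activity) can be sketched as follows. The log format and field names are assumptions for illustration, not Moodle's exact schema; the gap to the next record approximates the time spent on the current resource, and the final action remains open-ended.

```python
from datetime import datetime

# Minimal stand-in for an LMS log: each record marks the start of an action.
log = [
    ("2024-03-01 10:00:00", "quiz_1"),
    ("2024-03-01 10:12:00", "forum"),
    ("2024-03-01 10:15:30", "quiz_1"),
]

def time_per_activity(entries):
    """Approximate seconds spent on each activity as the gap between
    consecutive action-start timestamps."""
    spent = {}
    times = [datetime.fromisoformat(ts) for ts, _ in entries]
    for (ts, activity), start, end in zip(entries[:-1], times[:-1], times[1:]):
        spent[activity] = spent.get(activity, 0.0) + (end - start).total_seconds()
    return spent

print(time_per_activity(log))
```

These per-activity durations, together with the order of resource usage, form the feature vectors fed to the meta-classifier.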

Keywords: data mining, e-learning, grade prediction, machine learning, student learning path

Procedia PDF Downloads 122