Search results for: complex network approach
19312 On Performance of Cache Replacement Schemes in NDN-IoT
Authors: Rasool Sadeghi, Sayed Mahdi Faghih Imani, Negar Najafi
Abstract:
The inherent features of Named Data Networking (NDN) provide a robust solution for the Internet of Things (IoT). Therefore, NDN-IoT has emerged as a combined architecture which exploits the benefits of NDN for interconnecting the heterogeneous objects in IoT. In NDN-IoT, caching schemes play a key role in improving network performance. In this paper, we examine the effectiveness of cache replacement schemes in NDN-IoT scenarios. We investigate the impact of replacement schemes on average delay, average hop count, and average interest retransmission when the replacement schemes are Least Frequently Used (LFU), Least Recently Used (LRU), First-In-First-Out (FIFO), and Random. The simulation results demonstrate that LFU and LRU present a stable performance when the cache size changes. Moreover, the network performance improves when the number of consumers increases.
Keywords: NDN-IoT, cache replacement, performance, ndnSIM
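For illustration, a minimal Python sketch of one of the replacement policies compared above (LRU); this is an assumed, generic implementation, not the authors' ndnSIM content store:

```python
from collections import OrderedDict

class LRUContentStore:
    """Minimal LRU cache: a hit makes the entry most-recently-used;
    inserting beyond capacity evicts the least-recently-used entry."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # content name -> data, ordered by recency

    def get(self, name):
        if name not in self.store:
            return None                      # cache miss
        self.store.move_to_end(name)         # refresh recency on a hit
        return self.store[name]

    def put(self, name, data):
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

# FIFO differs only in not refreshing recency on a hit;
# LFU would instead evict the entry with the lowest request count.
```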
Procedia PDF Downloads 365
19311 Relevance of Reliability Approaches to Predict Mould Growth in Biobased Building Materials
Authors: Lucile Soudani, Hervé Illy, Rémi Bouchié
Abstract:
Mould growth in living environments has been widely reported for decades throughout the world. A higher level of moisture in housing can lead to building degradation and chemical component emissions from construction materials, as well as enhancing mould growth within the envelope elements or on the internal surfaces. Moreover, a significant number of studies have highlighted the link between mould presence and the prevalence of respiratory diseases. In recent years, the proportion of bio-based materials used in construction has been increasing, seen as an effective lever to reduce the environmental impact of the building sector. Besides, bio-based materials are also hygroscopic materials: when in contact with the wet air of a surrounding environment, their porous structures enable a better capture of water molecules, thus providing a more suitable background for mould growth. Many studies have been conducted to develop reliable models to predict mould appearance, growth, and decay for many building materials and external exposures. Some of them require information about temperature and/or relative humidity, exposure times, material sensitivities, etc. Nevertheless, several studies have highlighted a large disparity between predictions and actual mould growth in experimental settings as well as in occupied buildings. The difficulty of considering the influence of all parameters appears to be the most challenging issue. As many complex phenomena take place simultaneously, a preliminary study has been carried out to evaluate the feasibility of adopting a reliability approach rather than a deterministic approach. Both epistemic and random uncertainties were identified specifically for the prediction of mould appearance and growth. Several studies published in the literature were selected and analysed, from the agri-food and automotive sectors, as the deployed methodology appeared promising.
Keywords: bio-based materials, mould growth, numerical prediction, reliability approach
Procedia PDF Downloads 46
19310 Net Neutrality and Asymmetric Platform Competition
Authors: Romain Lestage, Marc Bourreau
Abstract:
In this paper we analyze the interplay between access to the last-mile network and net neutrality in the market for Internet access. We consider two Internet Service Providers (ISPs), which act as platforms between Internet users and Content Providers (CPs). One of the ISPs is vertically integrated and provides access to its last-mile network to the other (non-integrated) ISP. We show that a lower access price increases the integrated ISP's incentives to charge CPs positive termination fees (i.e., to deviate from net neutrality), and decreases the non-integrated ISP's incentives to charge positive termination fees.
Keywords: net neutrality, access regulation, internet access, two-sided markets
Procedia PDF Downloads 376
19309 A Framework for Internet Education: Personalised Approach
Authors: Zoe Wong
Abstract:
The purpose of this paper is to develop a framework for internet education. This framework uses the personalized learning approach so that everyone can freely develop their qualifications and careers. The key components of the framework include students, teachers, assessments, and infrastructure. It removes the challenges and limitations of the current educational system and allows learners to cope with progressing learning materials.
Keywords: internet education, personalized approach, information technology, framework
Procedia PDF Downloads 358
19308 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control
Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak
Abstract:
Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suitable for the application of sophisticated power management approaches that can greatly increase energy efficiency and even make them active energy market participants. A centralized control system for building heating and cooling managed by economically optimal model predictive control shows promising results, with an estimated 30% increase in energy efficiency. The research is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with wireless sensor data acquisition, remote heating/cooling units, and a central climate controller. Building walls are mathematically modeled with the corresponding material types, surface shapes, and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behavior, and comfort demands are all taken into account in deriving the price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries, and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control, where the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter is designed as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by the microgrid energy flow management. Such an approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
Keywords: price-optimal building climate control, microgrid power flow optimisation, hierarchical model predictive control, energy efficient buildings, energy market participation
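As a rough, illustrative sketch of the price-optimal climate control layer described above, the following single-zone economic MPC uses assumed dynamics, prices, and a comfort band; cvxpy is used here only for demonstration and this is not the controller implemented in the study:

```python
import numpy as np
import cvxpy as cp

N = 24                      # prediction horizon (hours), assumed
a, b = 0.9, 0.5             # assumed zone dynamics: T[k+1] = a*T[k] + b*u[k] + d[k]
price = np.random.uniform(0.1, 0.3, N)   # hypothetical hourly energy prices
d = np.full(N, 1.5)         # assumed constant disturbance (internal/solar gains), degC per step
T0 = 19.0                   # initial zone temperature

u = cp.Variable(N)          # heating power (kW), decision variable
T = cp.Variable(N + 1)      # predicted zone temperature trajectory

constraints = [T[0] == T0, u >= 0, u <= 5]
for k in range(N):
    constraints += [T[k + 1] == a * T[k] + b * u[k] + d[k]]
constraints += [T[1:] >= 20, T[1:] <= 24]      # thermal comfort band

# Price-optimal energy use: minimize total energy cost over the horizon.
problem = cp.Problem(cp.Minimize(price @ u), constraints)
problem.solve()
print("optimal heating schedule:", np.round(u.value, 2))
```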
Procedia PDF Downloads 465
19307 Asymmetric Linkages Between Global Sustainable Index (Green Bond) and Cryptocurrency Markets with Portfolio Implications
Authors: Faheem Ur Rehman, Muhammad Khalil Khan, Miao Qing
Abstract:
This study investigated the asymmetric links and portfolio strategies between green bonds and the markets of three different types of cryptocurrencies, i.e., green, Islamic, and conventional, using data from January 1, 2018, to April 8, 2022, and employing an asymmetric TVP-VAR model to quantify risk spillovers in the network analysis. In addition, we use the minimum variance, minimum correlation, and minimum connectedness methodologies to assess the portfolio implications. The results of the asymmetric total connectedness index (TCI) model show that adopting cryptocurrencies for digital finance reduces risk spillovers. The findings on net directional connectedness demonstrate that, during the study period, green bonds consistently receive return spillovers from all other variables in the network. Positive return spillovers are larger in magnitude than negative ones. These results imply that the influence of the green bond market on the cryptocurrency markets is decreasing. Positive return spillovers generate higher connectedness values for the HG, BNB, and TRX coins, which are persistent net recipients in the network. On the other hand, the Cardano and ADA coins are persistent net transmitters in the system. The roles of XLM and MIOTA shift over time, and there is evidence of asymmetry when both positive and negative returns are considered. According to the pairwise portfolio weights, BNB vs. BTC has the largest portfolio weights in the system, followed by BNB vs. Ethereum, suggesting the best investment strategies in the network.
Keywords: asymmetric TVP-VAR, global sustainable index, cryptocurrency, portfolios
Procedia PDF Downloads 78
19306 Global Mittag-Leffler Stability of Fractional-Order Bidirectional Associative Memory Neural Network with Discrete and Distributed Transmission Delays
Authors: Swati Tyagi, Syed Abbas
Abstract:
Fractional-order Hopfield neural networks are generally used to model the information processing among interacting neurons. To show the constancy of the processed information, it is required to analyze the stability of these systems. In this work, we perform a Mittag-Leffler stability analysis for the corresponding Caputo fractional-order bidirectional associative memory (BAM) neural networks with various time delays. We derive sufficient conditions to ensure the existence and uniqueness of the equilibrium point by using topological degree theory. By applying the fractional Lyapunov method and Mittag-Leffler functions, we derive sufficient conditions for global Mittag-Leffler stability, which further imply the global asymptotic stability of the network equilibrium. Finally, we present two suitable examples to show the effectiveness of the obtained results.
Keywords: bidirectional associative memory neural network, existence and uniqueness, fractional-order, Lyapunov function, Mittag-Leffler stability
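For reference, the standard two-parameter Mittag-Leffler function and the usual Mittag-Leffler stability bound for a Caputo-type delayed system can be written as follows (a textbook formulation, not quoted from the paper):

```latex
% Two-parameter Mittag-Leffler function (standard definition):
E_{\alpha,\beta}(z) \;=\; \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)},
\qquad \alpha > 0,\; \beta > 0, \qquad E_{\alpha}(z) := E_{\alpha,1}(z).

% Mittag-Leffler stability of the equilibrium x^{*}=0 of a Caputo system
% {}^{C}D^{\alpha}_{t}\, x(t) = f\bigl(t, x(t), x(t-\tau)\bigr) means
\|x(t)\| \;\le\; \Bigl\{ m\bigl[x(t_{0})\bigr]\, E_{\alpha}\!\bigl(-\lambda (t - t_{0})^{\alpha}\bigr) \Bigr\}^{b},
\qquad \lambda > 0,\; b > 0,\; m(0)=0,\; m \ge 0 .
```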
Procedia PDF Downloads 364
19305 Estimation of Reservoirs Fracture Network Properties Using an Artificial Intelligence Technique
Authors: Reda Abdel Azim, Tariq Shehab
Abstract:
The main objective of this study is to develop a subsurface fracture map of naturally fractured reservoirs by overcoming the limitations associated with different data sources in characterising fracture properties. Some of these limitations are overcome by employing a nested neuro-stochastic technique to establish the inter-relationships between different data sources, such as conventional well logs, borehole images (FMI), core descriptions, and seismic attributes, and then characterise fracture properties in terms of fracture density and fractal dimension for each data source. Fracture density is an important property of a fracture network system, as it is a measure of the cumulative area of all the fractures in a unit volume of the system. The fractal dimension is also used to characterize self-similar objects such as fractures. At the wellbore locations, fracture density and fractal dimension can only be estimated for the limited sections where FMI data are available. Therefore, an artificial intelligence technique is applied to approximate these quantities at locations along the wellbore where the hard data are not available. It should be noted that artificial intelligence techniques have proven their effectiveness in this domain of application.
Keywords: naturally fractured reservoirs, artificial intelligence, fracture intensity, fractal dimension
Procedia PDF Downloads 254
19304 Reactive Analysis of Different Protocol in Mobile Ad Hoc Network
Authors: Manoj Kumar
Abstract:
Routing protocols have a central role in any mobile ad hoc network (MANET). There are many routing protocols that exhibit different performance levels in different scenarios. In this paper, we compare the AODV, DSDV, DSR, and ZRP routing protocols in mobile ad hoc networks to determine the best operational conditions for each protocol. We analyze these routing protocols through extensive simulations in the OPNET simulator and show how pause time and the number of nodes affect their performance. In this study, performance is measured in terms of control traffic received, control traffic sent, data traffic received, data traffic sent, throughput, and retransmission attempts.
Keywords: AODV, DSDV, DSR, ZRP
Procedia PDF Downloads 518
19303 Advancing Urban Sustainability through Data-Driven Machine Learning Solutions
Authors: Nasim Eslamirad, Mahdi Rasoulinezhad, Francesco De Luca, Sadok Ben Yahia, Kimmo Sakari Lylykangas, Francesco Pilla
Abstract:
With ongoing urbanization, cities face increasing environmental challenges that impact human well-being. To tackle these issues, data-driven approaches in urban analysis have gained prominence, leveraging urban data to promote sustainability. Integrating machine learning techniques enables researchers to analyze and predict complex environmental phenomena, such as Urban Heat Island (UHI) occurrences, in urban areas. This paper demonstrates the implementation of a data-driven approach and interpretable machine learning algorithms, together with interpretability techniques, to conduct comprehensive data analyses for sustainable urban design. The developed framework and algorithms are demonstrated for Tallinn, Estonia, to develop sustainable urban strategies to mitigate urban heat waves. Geospatial data, preprocessed and labeled with UHI levels, are used to train various ML models, with logistic regression emerging as the best-performing model based on the evaluation metrics. The fitted model yields a mathematical equation separating areas with and without UHI effects, providing insights into UHI occurrences based on buildings and urban features. The derived formula highlights the importance of building volume, height, area, and shape length in creating an urban environment with UHI impact. The data-driven approach and derived equation inform mitigation strategies and sustainable urban development in Tallinn and offer valuable guidance for other locations with varying climates.
Keywords: data-driven approach, machine learning transparent models, interpretable machine learning models, urban heat island effect
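A minimal sketch of the classification step described above, with hypothetical feature names and synthetic data standing in for the Tallinn geospatial dataset:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-cell building features; names and values are illustrative only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "building_volume": rng.uniform(1e2, 1e5, 500),
    "building_height": rng.uniform(3, 60, 500),
    "building_area":   rng.uniform(50, 5000, 500),
    "shape_length":    rng.uniform(20, 500, 500),
})
# Synthetic UHI label (1 = UHI effect present) loosely driven by volume and height.
df["uhi"] = (0.3 * np.log(df.building_volume) + 0.02 * df.building_height
             + rng.normal(0, 1, 500) > 4.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="uhi"), df["uhi"], test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# The fitted coefficients give an interpretable linear decision rule on the
# standardized features: log-odds(UHI) = intercept + sum_i coef_i * feature_i
clf = model.named_steps["logisticregression"]
print(dict(zip(X_train.columns, clf.coef_[0])), clf.intercept_[0])
print(classification_report(y_test, model.predict(X_test)))
```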
Procedia PDF Downloads 37
19302 Computational Team Dynamics and Interaction Patterns in New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
New Product Development (NPD) is invariably a team effort and involves effective teamwork. An NPD team has members from different disciplines coming together and working through the different phases, all the way from the conceptual design phase to production and product roll-out. Creativity and innovation are some of the key factors of successful NPD. Team members going through the different phases of NPD interact and work closely, yet challenge each other during the design phases to brainstorm ideas and later converge to work together. These two traits require the teams to engage in divergent and convergent thinking simultaneously, and there needs to be a good balance between them. The team dynamics invariably result in conflicts among team members. While some amount of conflict (ideational conflict) is desirable in NPD teams to be creative as a group, relational conflicts (or discords among members) can be detrimental to teamwork. Team communication truly reflects these tensions and team dynamics. In this research, team communication (emails) between the members of NPD teams is considered for analysis. The email communication is processed through a semantic analysis algorithm, latent semantic analysis (LSA), to analyze the content of communication, and a semantic similarity analysis is used to arrive at a social network graph that depicts the communication amongst team members based on the content of communication. The amount of communication (content, not frequency of communication) defines the interaction strength between members. A social network adjacency matrix is thus obtained for the team. Standard social network analysis techniques based on the adjacency matrix (AM) and the dichotomized adjacency matrix (DAM), derived using the network density, yield network graphs and network metrics such as centrality. The social network graphs are then rendered for visual representation using a metric multi-dimensional scaling (MMDS) algorithm for node placement, and arcs connecting the nodes (representing team members) are drawn. The distance between nodes in the placement represents the tie strength between members: stronger tie strengths render nodes closer. The overall visual representation of the social network graph provides a clear picture of the team's interactions. This research reveals four distinct patterns of team interaction that are clearly identifiable in the visual representation of the social network graph and have a clearly defined computational scheme. The four computational patterns of team interaction defined are the Central Member Pattern (CMP), the Subgroup and Aloof member Pattern (SAP), the Isolate Member Pattern (IMP), and the Pendant Member Pattern (PMP). Each of these patterns has a team dynamics implication in terms of the conflict level in the team. For instance, the isolate member pattern clearly points to a near breakdown in communication with the member, and hence a possibly high conflict level, whereas the subgroup or aloof member pattern points to a non-uniform information flow in the team and a moderate level of conflict. These pattern classifications of teams are then compared and correlated with the actual level of conflict in the teams, as indicated by the team members through an elaborate self-evaluation, team reflection, and feedback form, and the results show a good correlation.
Keywords: team dynamics, team communication, team interactions, social network analysis, SNA, new product development, latent semantic analysis, LSA, NPD teams
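A compact sketch of the content-based adjacency-matrix construction described above, with hypothetical member names and email text; TF-IDF plus truncated SVD stands in for the full LSA pipeline, and a simple mean cut-off stands in for the density-based dichotomization:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical aggregated email text per team member (illustrative only).
members = ["alice", "bob", "carol", "dave"]
emails = [
    "design review gearbox tolerance prototype",
    "gearbox tolerance supplier quote prototype test",
    "marketing launch schedule",
    "prototype test results gearbox design",
]

# LSA-style step: TF-IDF followed by truncated SVD gives a low-rank semantic space.
tfidf = TfidfVectorizer().fit_transform(emails)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Adjacency matrix from semantic similarity of communication content.
A = cosine_similarity(lsa)
np.fill_diagonal(A, 0.0)

# Dichotomized adjacency matrix: keep ties stronger than a simple cut-off
# (the paper uses a network-density-based threshold).
threshold = A[A > 0].mean()
DAM = (A > threshold).astype(int)

# Degree centrality per member; an all-zero row flags a possible isolate pattern.
degree_centrality = DAM.sum(axis=1) / (len(members) - 1)
for m, c in zip(members, degree_centrality):
    print(m, round(float(c), 2), "(possible isolate)" if c == 0 else "")
```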
Procedia PDF Downloads 70
19301 Relation between Pavement Roughness and Distress Parameters for Highways
Authors: Suryapeta Harini
Abstract:
Road surface roughness is one of the essential aspects of a road's functional condition, indicating riding comfort in both the transverse and longitudinal directions. The Government of India has made maintaining good surface evenness a prerequisite for all highway projects. Pavement distress data were collected with a Network Survey Vehicle (NSV) on a National Highway. The survey determines the smoothness and frictional qualities of the pavement surface, which are related to driving safety and ease. Based on the data obtained in the field, a regression equation was created relating the IRI value to the visual distresses. The suggested system can use wireless acceleration sensors and GPS to gather vehicle status and location data, as well as calculate the International Roughness Index (IRI). Potholes, raveling, rut depth, cracked area, and repair work are all related to pavement roughness, according to the current study. The study was carried out at one location. Data collected using a bump integrator were used for validation: the bump integrator (BI) value, obtained from the deflections measured by the network survey vehicle, was correlated with the distress parameters to establish the equation.
Keywords: roughness index, network survey vehicle, regression, correlation
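An illustrative sketch of the regression step, with synthetic distress values in place of the actual NSV survey data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-section distress data (illustrative values only).
rng = np.random.default_rng(1)
n = 100
potholes  = rng.poisson(2, n)          # count per section
raveling  = rng.uniform(0, 15, n)      # % area
rut_depth = rng.uniform(0, 20, n)      # mm
cracked   = rng.uniform(0, 25, n)      # % area
iri = (1.5 + 0.15*potholes + 0.05*raveling + 0.04*rut_depth + 0.03*cracked
       + rng.normal(0, 0.3, n))        # synthetic IRI, m/km

# Multiple linear regression of IRI on the visual distresses.
X = sm.add_constant(np.column_stack([potholes, raveling, rut_depth, cracked]))
model = sm.OLS(iri, X).fit()
print(model.params)      # regression coefficients -> IRI prediction equation
print(model.rsquared)    # goodness of fit
```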
Procedia PDF Downloads 176
19300 Role of ICT and Wage Inequality in Organization
Authors: Shoji Katagiri
Abstract:
This study deals with wage inequality in organizations and shows the relationship between ICT and wages in organizations. To do so, we incorporate ICT factors in organizations into our model. The ICT factors are the efficiencies of Enterprise Resource Planning (ERP), Computer Assisted Design/Computer Assisted Manufacturing (CAD/CAM), and NETWORK. The improvement of these ICT factors decreases the learning cost of solving problems pertaining to the hierarchy in an organization. The improvement of NETWORK increases wage inequality within workers and decreases it within managers and entrepreneurs. The improvements of CAD/CAM and ERP increase wage inequality within all agents and partially increase it between agents in the hierarchy.
Keywords: endogenous economic growth, ICT, inequality, capital accumulation
Procedia PDF Downloads 260
19299 Dynamics of Piaget’s Cognitive Learning Approach and Vygotsky’s Sociocultural Theory in Different Stages of Medical and Allied Health Education
Authors: Ferissa B. Ablola
Abstract:
The two learning theories most evidently used in medical education are the cognitive and sociocultural frameworks. The interplay of different learning theories in education is vital, since most existing theories have a specific focus of development. In addition, a certain theory best fits a particular learning outcome and audience profile. The application of learning theories in education is said to be dynamic and becomes more complex with increasing educational level. This review, following the PRISMA guidelines, aims to describe the possible shift from the integration of cognitive learning theory to the employment of a sociocultural approach in medical and health-allied education over the years among students, educators, and learning institutions. In addition, the changes in teaching modality and the individual acceptance of the shift in learning framework among cognitive constructivists and social constructivists will also be documented. The present review may serve as baseline information on the connection between two widely used theories in medical education at different year levels. Further, this study emphasizes that aligning different learning theories and combining insights from several educational frameworks would permit the creation of a teaching/learning design with real theoretical depth. A more inclusive systematic review is necessary to involve more related studies, and exploration of the interaction among other learning theories in health and other fields of study is encouraged.
Keywords: learning theory, cognitive, sociocultural, medical education
Procedia PDF Downloads 26
19298 Theoretical Study on the Visible-Light-Induced Radical Coupling Reactions Mediated by Charge Transfer Complex
Authors: Lishuang Ma
Abstract:
Charge transfer (CT) complexes, also known as electron donor-acceptor (EDA) complexes, have received increasing attention in the synthetic chemistry community, because CT complexes can absorb visible light through intermolecular charge-transfer excited states, enabling various catalyst-free photochemical transformations under mild visible-light conditions. However, a number of fundamental questions remain open, such as the origin of the visible-light absorption, the photochemical and photophysical properties of CT complexes, and the detailed mechanism of the radical coupling pathways mediated by CT complexes. These are critical factors for the target-specific design and synthesis of more new-type CT complexes. To this end, theoretical investigations were performed in our group to answer these questions based on multiconfigurational perturbation theory. The photo-induced fluoroalkylation reactions mediated by CT complexes, which are formed by the association of a perfluoroalkyl halide acceptor RF−X (X = Br, I) and a suitable donor molecule such as the β-naphtholate anion, were chosen as a paradigm example in this work. First, spectrum simulations were carried out with both the CASPT2//CASSCF/PCM and TD-DFT/PCM methods. The computational results showed that the broad spectra of the CT complexes in the visible range (360-550 nm) originate from the 1(σπ*) excitation, accompanied by an intermolecular electron transfer, which was also found to be closely related to the aggregation states of the donor and acceptor. Moreover, charge translocation analysis showed that a CT complex with larger charge transfer in the ground state exhibits smaller charge transfer in the 1(σπ*) excited state, causing a relative blue shift. Then, the excited-state potential energy surface (PES) was calculated at the CASPT2//CASSCF(12,10)/PCM level of theory to explore the photophysical properties of the CT complexes. The photo-induced C-X (X = I, Br) bond cleavage was found to occur in the triplet state, which is accessible through a fast intersystem crossing (ISC) process controlled by the strong spin-orbit coupling resulting from the heavy iodine and bromine atoms. Importantly, this rapid fragmentation process can compete with and suppress the backward electron transfer (BET) event, facilitating the subsequent effective photochemical transformations. Finally, the reaction pathways of the radical coupling were also inspected, which showed that the radical chain propagation pathway proceeds easily, with a small energy barrier of no more than 3.0 kcal/mol, which is the key factor that promotes the efficiency of the photochemical reactions induced by CT complexes. In conclusion, theoretical investigations were performed to explore the photophysical and photochemical properties of CT complexes, as well as the mechanism of the radical coupling reactions mediated by CT complexes. The computational results and findings in this work can provide critical insights into mechanism-based design for more new-type EDA complexes.
Keywords: charge transfer complex, electron transfer, multiconfigurational perturbation theory, radical coupling
Procedia PDF Downloads 143
19297 The Simple Two-Step Polydimethylsiloxane (PDMS) Transferring Process for High Aspect Ratio Microstructures
Authors: Shaoxi Wang, Pouya Rezai
Abstract:
High-aspect-ratio features are necessary parts of complex microstructures. Some of the available methods for achieving high aspect ratios require expensive materials or complex processes; others struggle to produce even simple high-aspect-ratio structures. This paper presents a simple and inexpensive two-step polydimethylsiloxane (PDMS) transferring process to obtain high-aspect-ratio single pillars, which only requires covering the PDMS mold with a Brij 52 surfactant solution. The experimental results demonstrate that the method is efficient and effective.
Keywords: high aspect ratio, microstructure, PDMS, Brij
Procedia PDF Downloads 264
19296 Towards Sustainable Evolution of Bioeconomy: The Role of Technology and Innovation Management
Authors: Ronald Orth, Johanna Haunschild, Sara Tsog
Abstract:
The bioeconomy is an inter- and cross-disciplinary field covering a large number and wide scope of existing and emerging technologies. It has great potential to contribute to the transformation of the industrial landscape and ultimately drive the economy towards sustainability. However, the bioeconomy per se is not necessarily sustainable, and technology should be seen as an enabler rather than a panacea for all our ecological, social, and economic issues. Therefore, to draw and maximize benefits from the bioeconomy in terms of sustainability, we propose that innovative activities should encompass not only novel technologies and new bio-based materials but also multifocal innovations. For multifocal innovation endeavors, innovation management plays a substantial role, as any innovation emerges in a complex iterative process in which communication and knowledge exchange among relevant stakeholders play a pivotal role. Although knowledge generation and innovation are at the core of the transition towards a more sustainable bio-based economy, to date there is a significant lack of concepts and models that approach the bioeconomy from an innovation management perspective. The aim of this paper is therefore two-fold. First, it inspects the role of a transformative approach in the adoption of a bioeconomy that contributes to environmental, ecological, social, and economic sustainability. Second, it elaborates on the importance of technology and innovation management as a tool for a smooth, prompt, and effective transition of firms to the bioeconomy. We conduct a qualitative literature study on the sustainability challenges that the bioeconomy entails, using the Science Citation Index and grey literature, as major economies (e.g., the EU, USA, China, and Brazil) have pledged to adopt the bioeconomy and have released extensive publications on the topic. We draw an example from the forest-based business sector, which is transforming towards the new green economy more rapidly than expected, although this sector has a long-established conventional business culture with a consolidated and fully fledged industry. Based on our analysis, we found that a successful transition to a sustainable bioeconomy is conditioned on heterogeneous and contested factors in terms of stakeholders, activities, and modes of innovation. In addition, multifocal innovations occur when actors from interdisciplinary fields engage in intensive and continuous interaction, where the focus of innovation is allocated to a field of mutually evolving socio-technical practices that correspond to the aims of the novel paradigm of transformative innovation policy. By adopting an integrated systems approach, as well as tapping into various innovation networks and joining global innovation clusters, firms have a better chance of creating an entirely new chain of value-added products and services. This requires professionals with certain capabilities and skills, such as foresight for future markets, the ability to deal with complex issues, the ability to guide responsible R&D, strategic decision-making, and the capacity to manage in-depth innovation systems analysis, including value chain analysis. Policy makers, on the other hand, need to acknowledge the essential role of firms in the transformative innovation policy paradigm.
Keywords: bioeconomy, innovation and technology management, multifocal innovation, sustainability, transformative innovation policy
Procedia PDF Downloads 125
19295 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack
Authors: Varun Agarwal
Abstract:
Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a full-scan, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and improve breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task, with the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation by adding the convolved outputs in the inception unit to the preceding input data. Moreover, in order to augment the performance of the transfer learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich the image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer learning Inception-ResNetV2 network, enhanced with the CDAE stack, yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation with the residual connections to inception units, synergized with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images
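A minimal sketch of the transfer-learning tile classifier described in the methodology; the CDAE embedding stack, saliency detection, and WSI-level post-processing are omitted, and the training data loaders are assumed:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Tile-level metastasis classifier via transfer learning on ImageNet weights.
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False                      # freeze pretrained features first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # tumor vs. normal tile
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Fine-tuning step: unfreeze the top of the base network with a lower learning rate,
# then train on tile datasets, e.g. model.fit(train_tiles, validation_data=val_tiles).
```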
Procedia PDF Downloads 130
19294 Multiscale Analysis of Shale Heterogeneity in Silurian Longmaxi Formation from South China
Authors: Xianglu Tang, Zhenxue Jiang, Zhuo Li
Abstract:
Characterization of shale multi-scale heterogeneity is an important part of evaluating the size and spatial distribution of shale gas reservoirs in sedimentary basins. The origin of shale heterogeneity has always been a hot research topic, as it determines the description of shale micro-characteristics and the prediction of macro-scale reservoir quality. Shale multi-scale heterogeneity was discussed based on thin section observation, FIB-SEM, QEMSCAN, TOC, XRD, mercury intrusion porosimetry (MIP), and nitrogen adsorption analysis of 30 core samples from the Silurian Longmaxi formation. Results show that shale heterogeneity can be characterized by pore structure and mineral composition. The heterogeneity of shale pores is expressed by pores of different sizes at the nm-μm scale. Macropores (pore diameter > 50 nm) account for a larger percentage of the pore volume than mesopores (pore diameter between 2 and 50 nm) and micropores (pore diameter < 2 nm). However, they have a lower specific surface area than mesopores and micropores. Fractal dimensions of the pores derived from nitrogen adsorption data are higher than 2.7, and those from MIP data are higher than 2.8, indicating an extremely complex pore structure. This complexity in pore structure is mainly due to the organic matter and clay minerals with complex pore network structures, and diagenesis makes it more complicated. The heterogeneity of shale minerals is expressed by mineral grains, laminae, and different lithologies at the nm-km scale under continuously changing horizons. By analyzing the change of mineral composition at each scale, the random arrangement of minerals in equal proportions, seasonal climate changes, large changes in the sedimentary environment, and provenance supply are considered to be the main causes of shale mineral heterogeneity from the microscopic to the macroscopic scale. Due to the scale effect, the change in shale multi-scale heterogeneity is a discontinuous process, and there is a transformation boundary between homogeneous and inhomogeneous behavior. Therefore, a shale multi-scale heterogeneity model is established by defining four types of homogeneous units at different scales, which can be used to guide the prediction of shale gas distribution from the micro scale to the macro scale.
Keywords: heterogeneity, homogeneous unit, multiscale, shale
Procedia PDF Downloads 452
19293 An Experimental Approach to the Influence of Tipping Points and Scientific Uncertainties in the Success of International Fisheries Management
Authors: Jules Selles
Abstract:
The Atlantic and Mediterranean bluefin tuna fishery has been considered the archetype of an overfished and mismanaged fishery. This crisis has demonstrated the role of public awareness and the importance of the interactions between science and management regarding scientific uncertainties. This work aims at investigating the policy-making process associated with a regional fisheries management organization. We propose a contextualized, computer-based experimental approach in order to explore the effects of key factors on the cooperation process in a complex straddling-stock management setting. Namely, we analyze the effects of the introduction of a socio-economic tipping point and of the uncertainty surrounding the estimation of the resource level. Our approach is based on a Gordon-Schaefer bio-economic model which explicitly represents the decision-making process. Each participant plays the role of a stakeholder of ICCAT, represents a coalition of fishing nations involved in the fishery, and unilaterally decides on a harvest policy for the coming year. The context of the experiment induces the incentives for exploitation and collaboration to achieve common sustainable harvest plans at the scale of the Atlantic bluefin tuna stock. Our rigorous framework allows testing how stakeholders who plan the exploitation of a fish stock (a common pool resource) respond to two kinds of effects: i) the inclusion of a drastic shift in the management constraints (beyond a socio-economic tipping point) and ii) increasing uncertainty in the scientific estimation of the resource level.
Keywords: economic experiment, fisheries management, game theory, policy making, Atlantic Bluefin tuna
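For reference, the standard textbook form of the Gordon-Schaefer bio-economic model, which may differ from the exact parameterization used in the experiment:

```latex
% Schaefer biomass dynamics with logistic growth and harvest:
\frac{dB}{dt} \;=\; r B \left(1 - \frac{B}{K}\right) - qEB,
% Gordon's economic component: rent (profit) from fishing effort E
\pi \;=\; p\,qEB - cE,
% where B is stock biomass, r the intrinsic growth rate, K the carrying capacity,
% q the catchability coefficient, E fishing effort, p the price per unit catch,
% and c the cost per unit effort.
```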
Procedia PDF Downloads 253
19292 Application and Assessment of Artificial Neural Networks for Biodiesel Iodine Value Prediction
Authors: Raquel M. De sousa, Sofiane Labidi, Allan Kardec D. Barros, Alex O. Barradas Filho, Aldalea L. B. Marques
Abstract:
Several parameters are established in order to measure biodiesel quality. One of them is the iodine value, an important parameter that measures the total unsaturation within a mixture of fatty acids. Limiting unsaturated fatty acids is necessary, since heating a large quantity of them results in either the formation of deposits inside the engine or damage to the lubricant. Determination of the iodine value by the official procedure tends to be very laborious, with high costs and toxic reagents, so this study uses an artificial neural network (ANN) to predict the iodine value property as an alternative to these problems. The network development methodology used 13 fatty acid esters as inputs, and backpropagation-type convergence algorithms were optimized in order to obtain an architecture for iodine value prediction. This study allowed us to demonstrate the neural networks' ability to learn the correlation between biodiesel quality properties, in this case the iodine value, and the molecular structures that make up the fuel. The model developed in the study reached a correlation coefficient (R) of 0.99 for both network validation and network simulation, with the Levenberg-Marquardt algorithm.
Keywords: artificial neural networks, biodiesel, iodine value, prediction
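An illustrative sketch of fitting a small feedforward network with the Levenberg-Marquardt algorithm, using synthetic ester-composition data in place of the study's dataset and an assumed single-hidden-layer architecture:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
n_samples, n_inputs, n_hidden = 200, 13, 5   # 13 fatty acid ester fractions as inputs

# Synthetic composition data (rows sum to 1) and a synthetic iodine value target.
X = rng.dirichlet(np.ones(n_inputs), size=n_samples)
true_w = rng.uniform(0, 180, n_inputs)        # hypothetical per-ester contribution
y = X @ true_w + rng.normal(0, 1.0, n_samples)

def unpack(p):
    W1 = p[:n_inputs * n_hidden].reshape(n_inputs, n_hidden)
    b1 = p[n_inputs * n_hidden : n_inputs * n_hidden + n_hidden]
    W2 = p[n_inputs * n_hidden + n_hidden : -1]
    b2 = p[-1]
    return W1, b1, W2, b2

def forward(p, X):
    W1, b1, W2, b2 = unpack(p)
    return np.tanh(X @ W1 + b1) @ W2 + b2     # one hidden tanh layer, linear output

def residuals(p):
    return forward(p, X) - y                  # residuals for least-squares fitting

p0 = rng.normal(0, 0.5, n_inputs * n_hidden + 2 * n_hidden + 1)
fit = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt solver

pred = forward(fit.x, X)
print("correlation coefficient R on training data:",
      round(np.corrcoef(pred, y)[0, 1], 3))
```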
Procedia PDF Downloads 606
19291 Business and Human Rights: An Analysis of the UK Modern Slavery Act 2015
Authors: Prapin Nuchpiam
Abstract:
The Sustainable Development Goals (SDGs) have become a global agenda for all. The role of the business sector is significant in promoting sustainable development, particularly in preventing, addressing, and remedying human rights abuses committed in business operations. Modern slavery is one of the most complex human rights issues. The paper aims to study the UK Modern Slavery Act (MSA) 2015, whose main purpose is to tackle modern slavery in all its forms: human trafficking, slavery, forced labor, and domestic servitude. The Act has great significance in its approach of involving businesses in combating modern slavery without imposing stricter regulations on them. To that end, Section 54 of the MSA requires commercial organizations to disclose a statement confirming transparency in their corporate supply chains. Even though the statement is required by law, in practice it works rather like a 'comply or explain' scheme; in other words, compliance is mainly driven by fear of reputational risk rather than of lawbreaking. A number of modern slavery cases have been reported in Thailand, particularly in the production stage of supply chains. In its attempts to tackle modern slavery, the Thai government tends to seek stricter regulation and stronger punishment as the main approach. The paper will analyze the effective implementation of Section 54 and conclude whether and to what extent the MSA can be applied to the case of Thailand.
Keywords: human rights, responsible business, SDGs, the UK modern slavery act 2015
Procedia PDF Downloads 123
19290 Reduced Complexity Iterative Solution For I/Q Imbalance Problem in DVB-T2 Systems
Authors: Karim S. Hassan, Hisham M. Hamed, Yassmine A. Fahmy, Ahmed F. Shalash
Abstract:
The mismatch between in-phase and quadrature signals in orthogonal frequency division multiplexing (OFDM) systems, such as DVB-T2, results in a severe degradation in performance. Several general solutions have been proposed in the past, but these are largely computationally intensive, leading to complex implementations. In this paper, we propose a relatively simple iterative solution, which provides good results in relatively few iterations, using fixed-precision arithmetic. An additional advantage is that complex digital blocks, such as dividers and square-root units, are not required. Thus, the proposed solution may be implemented in relatively simple hardware.
Keywords: OFDM, DVB-T2, I/Q imbalance, I/Q mismatch, iterative method, fixed point, reduced complexity
Procedia PDF Downloads 542
19289 A Modeling Approach for Blockchain-Oriented Information Systems Design
Abstract:
Blockchain technology is regarded as the most promising technology with the potential to trigger a technological revolution. However, beyond the bitcoin industry, we have not yet seen large-scale application of blockchain in the domains that are supposed to be impacted, such as supply chains, financial networks, and intelligent manufacturing. The reasons not only lie in the difficulties of blockchain implementation but are also rooted in the challenges of blockchain-oriented information systems design. First, blockchain members are self-interested actors belonging to organizations with different existing information systems. As they expect different information inputs and outputs from the blockchain application, a common language protocol is needed to facilitate communication between blockchain members. Second, considering the decentralization of the blockchain organization, there is no central authority to organize and coordinate the business processes. Thus, the information systems built on blockchain should support more adaptive business processes. This paper aims to address these difficulties by providing a modeling approach for blockchain-oriented information systems design. We will investigate the information structure of distributed-ledger data with conceptual modeling techniques and ontology theories, and build an effective ontology mapping method for the inter-organization information flow and blockchain information records. Further, we will study distributed-ledger-ontology-based business process modeling to support adaptive enterprises on blockchain.
Keywords: blockchain, ontology, information systems modeling, business process
Procedia PDF Downloads 449
19288 Design of Circular Patch Antenna in Terahertz Band for Medical Applications
Authors: Moulfi Bouchra, Ferouani Souheyla, Ziani Kerarti Djalal, Moulessehoul Wassila
Abstract:
The wireless body area network (WBAN) is one of the most interesting networks these days, especially with the appearance of contagious illnesses such as COVID-19, which require monitoring at home. In this article, we have designed a circular microstrip antenna. Gold is used for both the patch and the ground plane, and gallium arsenide (εr = 12.94) is chosen as the dielectric substrate. The dimensions of the antenna are 82.10 × 62.84 μm², and it operates at a frequency of 3.85 THz. The proposed antenna has a return loss of -46.046 dB and a gain of 3.74 dBi, and it can support the measurement of various physiological parameters by sensors that help in the overall monitoring of an individual's health condition.
Keywords: circular patch antenna, Terahertz transmission, WBAN applications, real-time monitoring
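For context, the standard design equations for the dominant TM11 mode of a circular microstrip patch, as given in antenna textbooks; these are general formulas and not necessarily the design equations used by the authors:

```latex
% Resonant frequency of the dominant TM_{11} mode of a circular microstrip patch
% of effective radius a_e on a substrate with relative permittivity \varepsilon_r:
f_{r} \;=\; \frac{1.8412\, c}{2\pi a_{e}\sqrt{\varepsilon_r}},
\qquad
a_{e} \;=\; a\sqrt{1 + \frac{2h}{\pi a \varepsilon_r}
      \left[\ln\!\left(\frac{\pi a}{2h}\right) + 1.7726\right]},
% where a is the physical patch radius, h the substrate height,
% and c the speed of light in vacuum.
```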
Procedia PDF Downloads 307
19287 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model
Authors: Didier Auroux, Vladimir Groza
Abstract:
This work is part of the STEEP Marie-Curie ITN project, and it focuses on the identification of the unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The necessity of studying this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose the identification of the model parameters by minimizing a cost function measuring the difference between experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, gives the opportunity to find the unknowns of the AWJM model and their optimal values that can be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strictly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain acceptable results for manufacturing and to expect the proper identification of the unknowns. This approach also gives us the ability to extend the research to more complex cases and to consider different types of model and measurement errors, as well as a 3D time-dependent model with variations of the jet feed speed.
Keywords: abrasive waterjet milling, inverse problem, model parameters identification, regularization
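An illustrative sketch of the cost-function-minimization idea with Tikhonov regularization, using a toy forward model in place of the AWJM PDE and a quasi-Newton optimizer instead of the adjoint/TAPENADE machinery described in the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Toy forward model standing in for the AWJM PDE: predicted trench depth profile
# as a function of position x and two unknown model parameters (illustrative only).
x = np.linspace(-1.0, 1.0, 81)
def forward(params):
    depth, width = params
    return depth * np.exp(-(x / width) ** 2)        # Gaussian-like trench profile

true_params = np.array([0.35, 0.40])                 # "experimental" ground truth
measured = forward(true_params) + np.random.default_rng(3).normal(0, 0.01, x.size)

alpha = 1e-3                                          # Tikhonov regularization weight
prior = np.array([0.3, 0.5])                          # prior guess of the parameters

def cost(params):
    # Misfit between numerical and experimental profiles plus a regularization term.
    misfit = forward(params) - measured
    return 0.5 * np.sum(misfit ** 2) + 0.5 * alpha * np.sum((params - prior) ** 2)

result = minimize(cost, x0=prior, method="L-BFGS-B",
                  bounds=[(0.01, 1.0), (0.05, 1.0)])
print("identified parameters:", np.round(result.x, 3))
```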
Procedia PDF Downloads 316
19286 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations
Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar
Abstract:
Fast simulations are critical in reducing time to market for CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumption of Networks-on-Chip. Researchers and designers rely upon these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we chose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses has been identified. To further reduce the cache misses, we use software optimization techniques such as removal of unused functions, loop interchange, and replacing the post-increment operator with the pre-increment operator for non-primitive data types. The cache misses were reduced by 18.52%, 5.34%, and 3.91% by employing the above techniques, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD are used for parallelizing and vectorizing the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization, respectively.
Keywords: cache behaviour, network-on-chip, performance profiling, vectorization
Procedia PDF Downloads 197
19285 Budget Optimization for Maintenance of Bridges in Egypt
Authors: Hesham Abd Elkhalek, Sherif M. Hafez, Yasser M. El Fahham
Abstract:
Allocating a limited budget to maintain bridge networks and selecting effective maintenance strategies for each bridge represent challenging tasks for maintenance managers and decision makers. In Egypt, bridges are continuously deteriorating, and in many cases maintenance works are performed only in response to user complaints. The objective of this paper is to develop a practical and reliable framework to manage the maintenance, repair, and rehabilitation (MR&R) activities of a bridge network considering performance and budget limits. The model solves an optimization problem that maximizes the average condition of the entire network, given the limited available budget, using a genetic algorithm (GA). The framework contains bridge inventory, condition assessment, repair cost calculation, deterioration prediction, and maintenance optimization. The developed model takes into account multiple parameters, including serviceability requirements, budget allocation, the importance of each element for structural safety and serviceability, the bridge's impact on the network, and traffic. A questionnaire survey was conducted to complete the research scope. The proposed model is implemented in software that provides a user-friendly interface. The framework provides a multi-year maintenance plan for the entire network for up to five years. A case study of ten bridges is presented to validate and test the proposed model with data collected from transportation authorities in Egypt. Different scenarios are presented. The results are reasonable, feasible, and within an acceptable range.
Keywords: bridge management systems (BMS), cost optimization, condition assessment, fund allocation, Markov chain
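A small, illustrative genetic-algorithm sketch of the budget-constrained optimization described above, with hypothetical bridge conditions, improvement gains, and costs:

```python
import numpy as np

rng = np.random.default_rng(4)
n_bridges, budget = 10, 100.0
condition = rng.uniform(40, 90, n_bridges)     # hypothetical current condition ratings (0-100)
gain      = rng.uniform(5, 30, n_bridges)      # condition improvement if maintained
cost      = rng.uniform(10, 50, n_bridges)     # maintenance cost per bridge

def fitness(chromosome):
    # chromosome[i] = 1 means "maintain bridge i"; over-budget plans are penalized.
    total_cost = np.sum(chromosome * cost)
    if total_cost > budget:
        return -total_cost
    return np.mean(condition + chromosome * gain)   # average network condition

pop = rng.integers(0, 2, size=(40, n_bridges))      # random initial population
for generation in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][-20:]         # selection: keep the better half
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        cut = rng.integers(1, n_bridges)            # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        mutate = rng.random(n_bridges) < 0.05       # bit-flip mutation
        children.append(np.where(mutate, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("maintain bridges:", np.flatnonzero(best),
      "total cost:", float(np.sum(best * cost)))
```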
Procedia PDF Downloads 291
19284 A Multi Criteria Approach for Prioritization of Low Volume Rural Roads for Maintenance and Improvement
Authors: L. V. S. S. Phaneendra Bolem, S. Shankar
Abstract:
Low Volume Rural Roads (LVRRs) constitute an integral component of the road system in all countries. These encompass all aspects of the social and economic development of rural communities. It is known that, on a worldwide basis, the length of low-traffic roads far exceeds that of high-volume roads. Across India, 90% of the roads are LVRRs, and they often form the most important link in terms of providing access to educational, medical, recreational, and commercial activities in local and regional areas. In the recent past, the Government of India (GoI), with the initiation of the ambitious 'Pradhan Mantri Gram Sadak Yojana' (PMGSY) programme, gave greater importance to LVRRs, realizing their role in the economic development of rural communities. The vast expansion of the road network has brought connectivity to the rural areas of the country. Further, it is noticed that increasing axle loads and a lack of timely maintenance have accelerated the deterioration of LVRRs. In addition, the limited budget available for maintaining these roads necessitates a systematic and scientific approach to utilizing the available resources. This would enable better prioritization and ranking for maintenance and help keep the roads 'all-weather'. Taking this into account, the present study has adopted a multi-criteria approach. The multi-criteria approach includes social, economic, environmental, and pavement condition parameters as the main criteria, along with sub-criteria, to find the most suitable parameters and their weights. For this purpose, an expert opinion survey was carried out using the Delphi Technique (DT), considering Likert-scale, pairwise comparison, and ranking methods, and the entire dataset was analyzed. Finally, this study developed maintenance criteria considering the socio-economic, environmental, and pavement condition parameters for the effective maintenance of low volume roads, based on engineering judgment.
Keywords: Delphi technique, experts opinion survey, low volume rural road maintenance, multi criteria analysis
Procedia PDF Downloads 166
19283 Using Artificial Intelligence Method to Explore the Important Factors in the Reuse of Telecare by the Elderly
Authors: Jui-Chen Huang
Abstract:
This research used an artificial intelligence method to explore the elderly's opinions on the reuse of telecare and the effects of service quality, satisfaction, and customer perceived value on the intention to reuse. This study conducted a questionnaire survey of the elderly, and a total of 124 valid questionnaires were obtained. It adopted a backpropagation network (BPN) to propose an effective and feasible analysis method, which differs from the traditional approach. Two-thirds of the samples (82) were taken as training data, and one-third (42) were taken as testing data. The training and testing data RMSE (root mean square error) are 0.022 and 0.009, respectively, in the BPN; as shown, these errors are acceptable. In contrast, the training and testing data RMSE are 0.100 and 0.099, respectively, in the regression model. In addition, the results showed that service quality has the greatest effect on the intention to reuse, followed by satisfaction and perceived value. The backpropagation network method therefore performs better than the regression analysis. These results can be used as a reference for future research.
Keywords: artificial intelligence, backpropagation network (BPN), elderly, reuse, telecare
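A minimal sketch of the BPN-style analysis with the same 2/3-1/3 split and RMSE evaluation, using synthetic questionnaire responses; the predictor names and weights below are assumed, not the survey data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
n = 124                                  # same sample size as the survey
# Hypothetical Likert-scale predictor scores (1-5): service quality, satisfaction, perceived value.
X = rng.integers(1, 6, size=(n, 3)).astype(float)
# Synthetic "intention to reuse" score, weighted to mimic the reported ordering of effects.
y = 0.5*X[:, 0] + 0.3*X[:, 1] + 0.2*X[:, 2] + rng.normal(0, 0.2, n)

# Two-thirds training, one-third testing, as in the study.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)

bpn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
bpn.fit(X_train, y_train)

rmse_train = mean_squared_error(y_train, bpn.predict(X_train)) ** 0.5
rmse_test = mean_squared_error(y_test, bpn.predict(X_test)) ** 0.5
print("train RMSE:", round(rmse_train, 3), "test RMSE:", round(rmse_test, 3))
```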
Procedia PDF Downloads 212