Search results for: innovation network
4902 Conceptualizing a Biomimetic Fablab Based on the Makerspace Concept and Biomimetics Design Research
Authors: Petra Gruber, Ariana Rupp, Peter Niewiarowski
Abstract:
This paper presents a concept for a biomimetic fablab as a physical space for education, research and development of innovation inspired by nature. Biomimetics as a discipline finds increasing recognition in academia and has started to be institutionalized at universities in programs and centers. The Biomimicry Research and Innovation Center was founded in 2012 at the University of Akron as an interdisciplinary venture for the advancement of innovation inspired by nature and is part of a larger community fostering the approach of biomimicry in the Great Lakes region of the US. With 30 faculty members, the center has representatives from the Colleges of Arts and Sciences (e.g., biology, chemistry, geoscience, and philosophy), Engineering (e.g., mechanical, civil, and biomedical), Polymer Science, and the Myers School of Art. A platform for training PhDs in Biomimicry (17 students currently enrolled) is co-funded by educational institutions and industry partners. Research at the center touches on many areas but is currently weighted towards materials and structures, with highlights being materials based on principles found in spider silk and gecko attachment mechanisms. As biomimetics is a novel scientific discipline, there is little standardisation in programming and the equipment of research facilities. As a field targeting innovation, design and prototyping processes are fundamental parts of its developments. For experimental design and prototyping, MIT's maker space concept seems to fit the requirements well, but facilities need to be more specialised in terms of access to biological systems and knowledge, and specific research, production or conservation requirements. For the education and research facility BRIC, we propose the concept of a biomimicry fablab that ties into the existing maker space concept and creates the setting for the interdisciplinary research and development carried out in the program.
The concept takes the process of biomimetics as a guideline to define core activities that are to be enhanced by the allocation of specific spaces and tools. The limitations of such a facility and its intersections with further specialised labs housed in the classical departments are of special interest. As a preliminary proof of concept, two biomimetic design courses carried out in 2016 are investigated in terms of the tools and infrastructure they required. The spring course was a problem-based biomimetic design challenge in collaboration with an innovation company interested in product design for assisted living and medical devices. The fall course was a solution-based biomimetic design course focusing on order and hierarchy in nature, with the goal of finding meaningful translations into art and technology. The paper describes the background of the BRIC center, identifies and discusses the process of biomimetics, evaluates the classical maker space concept and explores how these elements can shape the proposed research facility of a biomimetic fablab by examining the two design courses held in 2016.
Keywords: biomimetics, biomimicry, design, biomimetic fablab
Procedia PDF Downloads 295
4901 Managerial Advice-Seeking and Supply Chain Resilience: A Social Capital Perspective
Authors: Ethan Nikookar, Yalda Boroushaki, Larissa Statsenko, Jorge Ochoa Paniagua
Abstract:
Given the serious impact that supply chain disruptions can have on a firm's bottom-line performance, both industry and academia are interested in supply chain resilience, a capability of the supply chain that enables it to cope with disruptions. To date, much of the research has focused on the antecedents of supply chain resilience. This line of research has suggested various firm-level capabilities that are associated with greater supply chain resilience. A consensus has emerged among researchers that supply chain flexibility holds the greatest potential to create resilience. Supply chain flexibility achieves resilience by creating readiness to respond to disruptions with little cost and time, by means of reconfiguring supply chain resources to mitigate the impacts of the disruption. Decisions related to supply chain disruptions are made by supply chain managers; however, the role played by supply chain managers' reference networks has been overlooked in the supply chain resilience literature. This study aims to understand the impact of supply chain managers on their firms' supply chain resilience. Drawing on social capital theory and social network theory, this paper proposes a conceptual model to explore the role of supply chain managers in developing the resilience of supply chains. Our model posits that a higher level of supply chain managers' embeddedness in their reference network is associated with increased resilience of their firms' supply chain. A reference network includes individuals from whom supply chain managers seek advice on supply chain related matters. The relationship between supply chain managers' embeddedness in their reference network and supply chain resilience is mediated by supply chain flexibility.
Keywords: supply chain resilience, embeddedness, reference networks, social capital
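As a sketch only (the abstract proposes a conceptual model, not a measurement procedure), embeddedness in an advice network is often operationalized in social network analysis with ego-network measures such as density, i.e. how connected a manager's advisors are to one another. The toy network below is invented for illustration:

```python
# Hedged illustration of one common way to quantify "embeddedness in a
# reference network": the density of the ego network, i.e. the share of
# possible ties among a manager's advisors that actually exist.
def ego_density(adj, ego):
    alters = adj[ego]
    n = len(alters)
    if n < 2:
        return 0.0
    # each alter-alter tie is seen from both endpoints, so divide by 2
    ties = sum(1 for a in alters for b in adj[a] if b in alters) / 2
    return ties / (n * (n - 1) / 2)

advice_net = {              # who exchanges advice with whom (toy, undirected)
    "mgr": {"a", "b", "c"},
    "a": {"mgr", "b"},
    "b": {"mgr", "a"},
    "c": {"mgr"},
}
density = ego_density(advice_net, "mgr")   # 1 of 3 possible alter ties exist
```

Fuller treatments in the social capital literature use constraint or brokerage measures rather than raw density; this is only the simplest instance of the idea.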
Procedia PDF Downloads 228
4900 A New Internal Architecture Based On Feature Selection for Holonic Manufacturing System
Authors: Jihan Abdulazeez Ahmed, Adnan Mohsin Abdulazeez Brifcani
Abstract:
This paper suggests a new internal architecture for a holon based on a feature selection model that combines the Bees Algorithm (BA) and an Artificial Neural Network (ANN). BA is used to generate candidate feature subsets, while the ANN is used as a classifier to evaluate them. The proposed system is applied to the Wine data set; the statistical results show that it is effective and able to choose informative features with high accuracy.
Keywords: artificial neural network, bees algorithm, feature selection, holon
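The BA + ANN pipeline can be sketched as wrapper feature selection: candidate feature subsets ("sites") are generated by a bees-style search and scored by a classifier. In this illustrative stand-in the ANN evaluator is replaced by a nearest-centroid classifier and the data are synthetic, so only the overall structure, not the paper's exact algorithm, is shown:

```python
# Illustrative wrapper feature selection in the spirit of BA + ANN.
# Features 0-1 separate the classes; features 2-3 are pure noise.
import random

random.seed(0)

def make_sample(label):
    base = 0.0 if label == 0 else 3.0
    return ([base + random.gauss(0, 0.3) for _ in range(2)] +
            [random.gauss(0, 1.0) for _ in range(2)], label)

data = [make_sample(i % 2) for i in range(60)]

def accuracy(mask):
    # score a binary feature mask with a nearest-centroid classifier
    idx = [i for i, m in enumerate(mask) if m]
    if not idx:
        return 0.0
    cents = {}
    for lab in (0, 1):
        pts = [x for x, l in data if l == lab]
        cents[lab] = [sum(p[i] for p in pts) / len(pts) for i in idx]
    correct = 0
    for x, l in data:
        pred = min((0, 1), key=lambda lab: sum(
            (x[i] - c) ** 2 for i, c in zip(idx, cents[lab])))
        correct += pred == l
    return correct / len(data)

def bees_select(n_feats=4, n_scouts=8, n_iters=20):
    # scout bees sample candidate masks; the best site is refined by
    # flipping one bit per iteration (local neighbourhood search)
    sites = [[1] * n_feats] + [[random.randint(0, 1) for _ in range(n_feats)]
                               for _ in range(n_scouts - 1)]
    best = max(sites, key=accuracy)
    for _ in range(n_iters):
        nb = best[:]
        nb[random.randrange(n_feats)] ^= 1
        if accuracy(nb) >= accuracy(best):
            best = nb
    return best

mask = bees_select()
```

A real BA run would keep multiple elite sites with recruited bees per site, and the evaluator would be a trained ANN rather than nearest centroids.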
Procedia PDF Downloads 457
4899 An Assessment of Drainage Network System in Nigeria Urban Areas using Geographical Information Systems: A Case Study of Bida, Niger State
Authors: Yusuf Hussaini Atulukwu, Daramola Japheth, Tabitit S. Tabiti, Daramola Elizabeth Lara
Abstract:
Poorly constructed and, in some cases, non-existent drainage facilities in the township have resulted in incessant flooding in parts of the community, posing a threat to life, property and the environment. The research addresses this issue by showing the spatial distribution of the drainage network in Bida Urban using Geographic Information System techniques. Relevant features were extracted from an existing base map of Bida using on-screen digitization, and x, y, z data for existing drainages were acquired with a handheld Global Positioning System (GPS) receiver. These data were uploaded into ArcGIS 9.2 software and stored in a relational database structure that was used to produce the spatial drainage network of the township. The result revealed that about 40% of the drainages are blocked with sand and refuse, 35% are water-logged as a result of buildings constructed across erosion channels, and bridges are dilapidated as a result of the lack of drainage along major roads. The study thus concluded that the drainage network in the Bida community is not in good working condition and that urgent measures must be initiated to avoid future disasters, especially with the rainy season setting in. Based on these findings, the study recommends that people within the locality avoid dumping municipal waste in the drainage path, and that drains blocked by sand or weeds be cleared by the authority concerned. In the same vein, the authority should ensure that contracts for drainage construction are awarded to professionals, and all natural drainage channels created by erosion should be addressed to avoid future disasters.
Keywords: drainage network, spatial, digitization, relational database, waste
Procedia PDF Downloads 334
4898 Applied Bayesian Regularized Artificial Neural Network for Up-Scaling Wind Speed Profile and Distribution
Authors: Aghbalou Nihad, Charki Abderafi, Saida Rahali, Reklaoui Kamal
Abstract:
Maximizing the benefit from wind energy potential is the main interest of wind power stakeholders. As a result, wind tower sizes are increasing radically. Nevertheless, choosing an appropriate wind turbine for a selected site requires an accurate estimate of the vertical wind profile; this is also imperative from a cost and maintenance strategy point of view. However, installing tall towers or even more expensive devices such as LIDAR or SODAR raises the costs of a wind power project. Various models have been developed within this framework, but they suffer from complexity, poor generalization and a lack of accuracy. In this work, we investigate the ability of a neural network trained using the Bayesian regularization technique to estimate the wind speed profile up to a height of 100 m based on knowledge of the wind speed at lower heights. Results show that the proposed approach achieves satisfactory predictions and prove the suitability of the proposed method for generating wind speed profiles and probability distributions based on knowledge of the wind speed at lower heights.
Keywords: Bayesian regularization, neural network, wind shear, accuracy
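As a point of reference for the task described, the classical power-law shear model extrapolates hub-height wind speed from two lower anemometer heights; the paper's Bayesian-regularized network is a learned replacement for this kind of profile. The heights and speeds below are illustrative:

```python
# Classical power-law wind shear baseline: v2/v1 = (z2/z1)**alpha.
# Fit alpha from two measured heights, then extrapolate to 100 m.
import math

def shear_exponent(v1, z1, v2, z2):
    # alpha recovered from measurements at heights z1 and z2
    return math.log(v2 / v1) / math.log(z2 / z1)

def wind_at(z, v_ref, z_ref, alpha):
    return v_ref * (z / z_ref) ** alpha

# illustrative anemometer readings: 5 m/s at 10 m, 6 m/s at 40 m
alpha = shear_exponent(5.0, 10.0, 6.0, 40.0)
v100 = wind_at(100.0, 5.0, 10.0, alpha)   # extrapolated hub-height speed
```

The neural approach aims to beat exactly this kind of single-exponent model when the real profile deviates from a power law (stability changes, terrain effects).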
Procedia PDF Downloads 502
4897 Conceptual Framework of Continuous Academic Lecturer Model in Islamic Higher Education
Authors: Lailial Muhtifah, Sirtul Marhamah
Abstract:
This article forwards the conceptual framework of a continuous academic lecturer model in Islamic higher education (IHE). It is intended to contribute to the broader issue of how the concept of excellence can promote adherence to standards in higher education and drive quality enhancement. The model reveals a process and steps to gradually increase the performance and achievement of excellence of regular lecturers. Studies of this model are significant for realizing an excellent academic culture in IHE. Several steps were identified from previous studies through a literature review and empirical findings. A qualitative study was conducted at an institute: administrators and lecturers were interviewed, and lecturer learning communities were observed to explore the institute's culture, policies, and procedures. The original contribution of this study is the Continuous Academic Lecturer Model (CALM), with Standard, Quality, and Excellence (SQE) as the basis of its framework. The framework also requires Leader Support (LS) for lecturers to achieve a culture of excellence, so the model is named CALM-SQE+LS. The components of performance and achievement in the CALM-SQE+LS model should be disseminated and cultivated among all lecturers to advance university excellence in terms of innovation. The purpose of this article is to define the concept of “CALM-SQE+LS”: the three components of the Continuous Academic Lecturer Model, i.e. standard, quality, and excellence, plus leader support. This study is important to the community as a specific case that may inform educational leaders about mechanisms that can be leveraged to ensure the successful implementation of the policies and procedures outlined in CALM and its components (SQE+LS) in institutional culture and the professional leadership literature. The findings show how the continuous academic lecturer model becomes part of a group's culture and how it benefits the university.
This article blends the available criteria into several sub-components to give new insights towards empowering lecturers for innovation excellence at IHE. The proposed conceptual framework is also presented.
Keywords: continuous academic lecturer model, excellence, quality, standard
Procedia PDF Downloads 201
4896 Weed Out the Bad Seeds: The Impact of Strategic Portfolio Management on Patent Quality
Authors: A. Lefebre, M. Willekens, K. Debackere
Abstract:
Since the 1990s, patent applications have been booming, especially in the field of telecommunications. However, this increase in patent filings has been associated with an (alleged) decrease in patent quality. The plethora of low-quality patents devalues the high-quality ones, thus weakening the incentives for inventors to patent inventions. Despite the rich literature on strategic patenting, previous research has neglected to emphasize the importance of patent portfolio management and its impact on patent quality. In this paper, we compare related patent portfolios vs. nonrelated patents and investigate whether the patent quality and innovativeness differ between the two types. In the analyses, patent quality is proxied by five individual proxies (number of inventors, claims, renewal years, designated states, and grant lag), and these proxies are then aggregated into a quality index. Innovativeness is proxied by two measures: the originality and radicalness index. Results suggest that related patent portfolios have, on average, a lower patent quality compared to nonrelated patents, thus suggesting that firms use them for strategic purposes rather than for the extended protection they could offer. Even upon testing the individual proxies as a dependent variable, we find evidence that related patent portfolios are of lower quality compared to nonrelated patents, although not all results show significant coefficients. Furthermore, these proxies provide evidence of the importance of adding fixed effects to the model. Since prior research has found that these proxies are inherently flawed and never fully capture the concept of patent quality, we have chosen to run the analyses with individual proxies as supplementary analyses; however, we stick with the comprehensive index as our main model. This ensures that the results are not dependent upon one certain proxy but allows for multiple views of the concept. 
The presence of divisional applications might be linked to the level of innovativeness of the underlying invention. It could be the case that the parent application is so important that firms go through the administrative burden of filing divisional applications to ensure the protection of the invention and the preemption of competition. However, it could also be the case that the preemption results from divisional applications being used strategically as a backup plan and prolonging strategy, thus negatively impacting the innovation in the portfolio. Upon testing the level of novelty and innovation in the related patent portfolios by means of the originality and radicalness index, we find evidence for a significant negative association with related patent portfolios. The minimum innovation brought by the patents in a related patent portfolio is lower than the minimum innovation found in nonrelated portfolios, providing evidence for the second argument.
Keywords: patent portfolio management, patent quality, related patent portfolios, strategic patenting
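The aggregation step described above (five proxies combined into one quality index) can be illustrated with a simple z-score average; the paper's exact weighting, and the assumption that longer grant lag signals lower quality, are illustrative choices here, not taken from the study:

```python
# Hedged sketch of building a patent quality index from five proxies:
# standardize each proxy to z-scores across patents, flip the sign of
# grant lag (longer pendency taken as lower quality), then average.
import math

def zscores(values):
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / sd for v in values] if sd else [0.0] * n

# columns: inventors, claims, renewal years, designated states, grant lag
patents = [
    [3, 20, 12, 8, 2.1],   # patent A (invented numbers)
    [1,  5,  4, 2, 5.0],   # patent B
    [2, 12,  9, 5, 3.0],   # patent C
]
cols = list(zip(*patents))
z_cols = [zscores(list(c)) for c in cols]
z_cols[4] = [-z for z in z_cols[4]]        # invert the grant-lag proxy
quality = [sum(col[i] for col in z_cols) / len(z_cols)
           for i in range(len(patents))]
```

Because every z-score column sums to zero, the index is purely relative: it ranks patents within the sample rather than giving an absolute quality level.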
Procedia PDF Downloads 94
4895 Application of Artificial Neural Network and Background Subtraction for Determining Body Mass Index (BMI) in Android Devices Using Bluetooth
Authors: Neil Erick Q. Madariaga, Noel B. Linsangan
Abstract:
Body Mass Index (BMI) is one of the different ways to monitor the health of a person. It is based on the height and weight of the person. This study computes the BMI using an Android tablet by obtaining the height of the person with a camera and measuring the weight of the person with a weighing scale or load cell. The height of the person was estimated by applying background subtraction to the captured image and applying further processes such as finding the vanishing point and applying an Artificial Neural Network. The weight was measured using a Wheatstone bridge load cell configuration; after amplification with an AD620 instrumentation amplifier, the value was sent to the computer using a Gizduino microcontroller and Bluetooth technology. The application processes the images, reads the measured values and shows the BMI of the person. The study met all of its objectives; further studies are needed to improve the design.
Keywords: body mass index, artificial neural network, vanishing point, Bluetooth, Wheatstone bridge load cell
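The measurement chain described above reduces to two small computations: inverting the amplified bridge voltage back to a weight, and the BMI formula weight/height². The amplifier gain and load cell sensitivity below are illustrative assumptions, not the paper's calibration values:

```python
# Sketch of the two computations the app performs. The gain (100) and
# bridge sensitivity (0.2 mV per kg) are assumed for illustration.
def weight_from_bridge(v_out, gain=100.0, sensitivity_v_per_kg=0.0002):
    # the AD620 amplifies the Wheatstone bridge output; invert the chain
    return v_out / (gain * sensitivity_v_per_kg)

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def classify(b):
    # standard WHO adult BMI categories
    if b < 18.5: return "underweight"
    if b < 25.0: return "normal"
    if b < 30.0: return "overweight"
    return "obese"

w = weight_from_bridge(1.3)   # 1.3 V after amplification -> 65 kg
b = bmi(w, 1.70)              # height as estimated from the image, in m
```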
Procedia PDF Downloads 324
4894 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator
Authors: Jaeyoung Lee
Abstract:
Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perception performance in driving environments that vary by time of day and season. Image segmentation using deep learning, which has recently evolved rapidly, stably provides high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrades their performance on embedded processors equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSP), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time, to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA; the lack of information caused by fixing the number of channels is resolved by increasing the number of parallel branches. Second, an efficient convolution is selected depending on the size of the activation. Since the MMA is fixed hardware, normal convolution may be more efficient than depthwise separable convolution depending on memory access overhead; thus, the convolution type is decided according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using extended atrous spatial pyramid pooling (ASPP).
The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high-quality contexts using the restored shape without global average pooling paths, since that layer uses the MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), the highest recognition rate among embedded networks, on the Cityscapes validation set.
Keywords: edge network, embedded network, MMA, matrix multiplication accelerator, semantic segmentation network
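The real-time claim can be checked with a one-line budget: if one frame takes 6.67 ms, the DSP handles about 150 frames per second, which covers six cameras each delivering 20 FPS:

```python
# Sanity check of the throughput numbers quoted in the abstract.
def cameras_supported(frame_ms, fps_per_camera):
    frames_per_second = 1000.0 / frame_ms   # ~150 frames/s at 6.67 ms/frame
    return int(frames_per_second // fps_per_camera)

cams = cameras_supported(6.67, 20)          # capacity covers six cameras
```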
Procedia PDF Downloads 129
4893 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities
Authors: Salman Naseer
Abstract:
One of the main challenges of operating a smart city (SC) is collecting the massive data generated from multiple data sources (DS) and transmitting them to the control units (CU) for further processing and analysis. These ever-increasing data demands not only require more and more capacity in the transmission channels but also result in resource over-provisioning to meet resilience requirements, and thus unavoidable waste because of data fluctuations throughout the day. In addition, the high energy consumption (EC) and carbon discharges of these data transmissions pose serious issues for the environment we live in. Therefore, to overcome the intensive EC and carbon emissions (CE) of massive data dissemination in smart cities, we propose an energy-efficient and carbon-reducing approach that utilizes the daily mobility of existing vehicles as an alternative communication channel to accommodate data dissemination in smart cities. To illustrate the effectiveness and efficiency of our approach, we take Auckland City in New Zealand as an example, assuming massive data generated by various sources geographically scattered throughout the Auckland region must reach control centres located in the city centre. The numerical results show that our proposed approach can provide up to 5 times lower delay when transferring large volumes of data by utilizing existing daily vehicle mobility than the conventional transmission network. Moreover, our proposed approach offers about 30% less EC and CE than the conventional network transmission approach.
Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission
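A back-of-envelope version of the comparison in the abstract: shipping a large dataset on a vehicle that is already driving to the city centre versus streaming it over a constrained uplink. All numbers below are illustrative assumptions, not values from the Auckland study:

```python
# Toy delay comparison for bulk data: network streaming vs. a vehicle
# that physically carries a storage device ("data mule"). Link rate,
# data volume, distance and speed are made-up illustrative values.
def network_delay_s(data_gb, uplink_mbps):
    # 1 GB = 8000 Mb; delay is volume over link rate
    return data_gb * 8000.0 / uplink_mbps

def vehicle_delay_s(distance_km, speed_kmh, load_unload_s=120.0):
    # travel time plus a fixed handling overhead at each end
    return distance_km / speed_kmh * 3600.0 + load_unload_s

net = network_delay_s(500.0, 100.0)   # 500 GB over a 100 Mb/s uplink
veh = vehicle_delay_s(20.0, 40.0)     # 20 km across town at 40 km/h
```

For bulk volumes the vehicle wins by a wide margin, which is the intuition behind delay-tolerant, mobility-assisted offloading; for small, latency-critical messages the network path remains preferable.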
Procedia PDF Downloads 142
4892 A TgCNN-Based Surrogate Model for Subsurface Oil-Water Phase Flow under Multi-Well Conditions
Authors: Jian Li
Abstract:
The uncertainty quantification and inversion problems of subsurface oil-water phase flow usually require extensive repeated forward calculations for new runs with changed conditions. To reduce the computational time, various forms of surrogate models have been built. Related research shows that deep learning has emerged as an effective surrogate modelling approach, but most deep learning surrogates are purely data-driven, which often leads to poor robustness and abnormal results. To make the model more consistent with physical laws, a coupled theory-guided convolutional neural network (TgCNN) based surrogate model is built to improve computational efficiency while maintaining satisfactory accuracy. The model is a convolutional neural network based on multi-well reservoir simulation. The core notion of the proposed method is to bridge two separate blocks on top of an overall network; they underlie the TgCNN model in a coupled form, which reflects the coupling nature of pressure and water saturation in the two-phase flow equation. The model is driven not only by labeled data but also by scientific theory, including the governing equations, stochastic parameterization, boundary and initial conditions, well conditions, and expert knowledge. The results show that the TgCNN-based surrogate model exhibits satisfactory accuracy and efficiency in subsurface oil-water phase flow under multi-well conditions.
Keywords: coupled theory-guided convolutional neural network, multi-well conditions, surrogate model, subsurface oil-water phase flow
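The theory-guided idea can be sketched in one dimension: the training loss penalizes both the mismatch with labeled data and the residual of a governing equation. Here a toy exponential-decay ODE stands in for the two-phase flow equations, and a two-parameter model with a grid search stands in for the CNN and gradient descent; everything below is illustrative, not the paper's formulation:

```python
# Minimal "theory-guided" training loss: data term + physics residual.
# Toy governing equation: dp/dx = -K * p (stand-in for the flow PDEs).
import math

K = 0.5                                    # decay coefficient (assumed)
xs = [i / 10 for i in range(11)]           # collocation points
obs = [(0.0, 1.0), (1.0, math.exp(-K))]    # sparse "labeled data"

def model(x, a, b):        # stand-in for the network: p ≈ a * e^(b*x)
    return a * math.exp(b * x)

def d_model(x, a, b):      # analytic derivative of the stand-in
    return a * b * math.exp(b * x)

def loss(a, b):
    data = sum((model(x, a, b) - p) ** 2 for x, p in obs)
    physics = sum((d_model(x, a, b) + K * model(x, a, b)) ** 2 for x in xs)
    return data + physics

# a crude grid search stands in for gradient-based training
best = min(((a / 100, b / 100) for a in range(50, 151)
            for b in range(-100, 1)), key=lambda ab: loss(*ab))
```

The physics term forces the fitted parameters to satisfy the equation everywhere, not just at the two observation points; that is exactly the robustness benefit the abstract claims over purely data-driven surrogates.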
Procedia PDF Downloads 86
4891 Transmission Line Protection Challenges under High Penetration of Renewable Energy Sources and Proposed Solutions: A Review
Authors: Melake Kuflom
Abstract:
European power networks involve multiple overhead transmission lines to construct a highly duplicated system that delivers reliable and stable electrical energy to the distribution level. The transmission line protection applied in the existing GB transmission network normally consists of independent unit differential and time-stepped distance protection schemes, referred to as main-1 and main-2 respectively, with overcurrent protection as a backup. The increasing penetration of renewable energy sources, commonly referred to as "weak sources," into the power network has resulted in a decline in fault level. Traditionally, the fault level of the GB transmission network has been strong; hence the fault current contribution is more than sufficient to ensure the correct operation of the protection schemes. However, numerous conventional coal and nuclear generators have been, or are about to be, shut down due to the societal requirement for CO2 emission reduction, and this has reduced the fault level on some transmission lines; therefore, adaptive transmission line protection is required. Generally, greater utilization of renewable energy sources generated from wind or direct solar energy reduces CO2 emissions and can increase system security and reliability, but it reduces the fault level, which has an adverse effect on protection. Consequently, the effectiveness of conventional protection schemes under low fault levels needs to be reviewed, particularly for future GB transmission network operating scenarios. This paper evaluates the transmission line protection challenges under high penetration of renewable energy sources and provides alternative viable protection solutions based on the problems observed. The paper considers the assessment of renewable energy sources (RES) based on fully rated converter technology.
The DIgSILENT PowerFactory software tool will be used to model the network.
Keywords: fault level, protection schemes, relay settings, relay coordination, renewable energy sources
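The fault-level concern can be made concrete with textbook arithmetic: a synchronous machine feeds a fault with roughly 1/X″ per-unit current, while a fully rated converter is typically current-limited to around 1.2 pu of its rating. The ratings, voltage and impedance below are assumptions for illustration, not values from the paper:

```python
# Illustrative fault-current comparison: synchronous plant vs. a
# converter-interfaced renewable source of the same MVA rating.
import math

def synchronous_fault_ka(mva_rating, kv, x_pu=0.15):
    # sub-transient contribution ~ rated current / X'' (assumed 0.15 pu)
    i_rated = mva_rating * 1e6 / (math.sqrt(3) * kv * 1e3)
    return i_rated / x_pu / 1000.0

def converter_fault_ka(mva_rating, kv, limit_pu=1.2):
    # fully rated converters limit fault current to ~1.2 pu (assumed)
    i_rated = mva_rating * 1e6 / (math.sqrt(3) * kv * 1e3)
    return i_rated * limit_pu / 1000.0

sync = synchronous_fault_ka(500.0, 275.0)   # 500 MVA machine on 275 kV
conv = converter_fault_ka(500.0, 275.0)     # same rating, converter-based
```

Swapping one for the other cuts the available fault current by a factor of five or more, which is why distance and overcurrent settings tuned for a strong network may fail to operate.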
Procedia PDF Downloads 206
4890 Development of an Improved Paradigm for the Tourism Sector in the Department of Huila, Colombia: A Theoretical and Empirical Approach
Authors: Laura N. Bolivar T.
Abstract:
The importance of tourism for regional development is mainly highlighted by the collaborative, cooperative and competitive relationships of the agents involved. The fostering of associativity processes, in particular the cluster approach, emphasizes the beneficial outcomes of the concentration of enterprises, where innovation and entrepreneurship flourish and shape the dynamics for tourism empowerment. The department of Huila is located in the south-west of Colombia and has the largest coffee production in the country, although it contributes little to the national GDP. Hence, its economic development strategy is looking for more dynamism, and Huila could be consolidated as a leading destination for cultural, ecological and heritage tourism if, at the least, the public policy-making processes for the tourism management of La Tatacoa Desert, San Agustin Park and Bambuco's National Festival were implemented more efficiently. Along these lines, this study attempts to address the potential restrictions and beneficial factors for the consolidation of the tourism sector of Huila, Colombia as a cluster, and how this could impact its regional development. Therefore, a set of theoretical frameworks, such as the Tourism Routes Approach, the Tourism Breeding Environment and the Community-based Tourism Method, together with a collection of international experiences describing tourism clustering processes and their most outstanding problems, is analyzed to draw up learning points, structures of proceedings and success-driven factors to be contrasted with the local characteristics of Huila, the region under study.
This characterization involves primary and secondary information collection methods and covers the South American and Colombian context, together with the identification of the actors involved and their roles, the main interactions among them, the major tourism products and their infrastructure, the visitors' perspective on the situation, and a recap of the related needs and benefits regarding the host community. Considering the umbrella concepts, the theoretical and empirical approaches, and their comparison with the local specificities of the tourism sector in Huila, an array of shortcomings is analytically constructed, and a series of guidelines is proposed as a way to overcome them and, simultaneously, raise economic development and positively impact Huila's well-being. This non-exhaustive bundle of guidelines focuses on fostering cooperative linkages in the actors' network, dealing with innovations in Information and Communication Technologies, reinforcing the supporting infrastructure, promoting the destinations including the lesser-known places, designing an information system that enables the tourism network to assess the situation based on reliable data, increasing competitiveness, developing participative public policy-making processes, and empowering the host community regarding its touristic richness. Accordingly, cluster dynamics would drive the tourism sector towards articulation and joint effort, and the agents involved and local particularities would be adequately assisted to cope with the current changing environment of globalization and competition.
Keywords: innovative strategy, local development, network of tourism actors, tourism cluster
Procedia PDF Downloads 141
4889 Optimum Tuning Capacitors for Wireless Charging of Electric Vehicles Considering Variation in Coil Distances
Authors: Muhammad Abdullah Arafat, Nahrin Nowrose
Abstract:
Wireless charging of electric vehicles is becoming more and more attractive, as large amounts of power can now be transferred over a reasonable distance using the magnetic resonance coupling method. However, proper tuning of the compensation network is required to achieve maximum power transmission. Because the coil distance varies from its nominal value as a result of changes in tire condition, changes in weight or uneven road conditions, tuning the compensation network is challenging. In this paper, a tuning method is described to determine the optimum values of the compensation network in order to maximize the average output power. The simulation results show that a 5.2 percent increase in average output power is obtained for a 10 percent variation in coupling coefficient using the optimum values, without the need for additional space or electro-mechanical components. The proposed method is applicable to both static and dynamic charging of electric vehicles.
Keywords: coupling coefficient, electric vehicles, magnetic resonance coupling, tuning capacitor, wireless power transfer
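The baseline calculation behind any such tuning scheme is the resonance condition 2πf = 1/√(LC), which gives the compensation capacitor for a coil of inductance L at operating frequency f. The coil value and the 85 kHz frequency below are illustrative (85 kHz is a common EV charging band, not necessarily the paper's choice):

```python
# Series-resonance tuning: pick C so the L-C branch resonates at f.
import math

def tuning_capacitance(L_henry, f_hz):
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * L_henry)

def resonant_frequency(L_henry, C_farad):
    return 1.0 / (2 * math.pi * math.sqrt(L_henry * C_farad))

C = tuning_capacitance(200e-6, 85e3)   # 200 uH coil at 85 kHz -> ~17.5 nF
```

The paper's contribution is choosing C away from this nominal value so that the *average* power stays high as the coupling coefficient drifts; the formula above only fixes the nominal operating point.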
Procedia PDF Downloads 195
4888 Improved Super-Resolution Using Deep Denoising Convolutional Neural Network
Authors: Pawan Kumar Mishra, Ganesh Singh Bisht
Abstract:
Super-resolution is a technique used in computer vision to construct a high-resolution image from a single low-resolution image. It is used to increase the frequency content, recover lost details and remove the downsampling and noise introduced by the camera during image acquisition. High-resolution images and videos are desired in most image processing tasks and digital imaging applications. The goal of super-resolution is to combine non-redundant information from single or multiple low-resolution frames to generate a high-resolution image. Many methods have been proposed in which multiple images of the same scene, with different transformations, are used as low-resolution inputs; this is called multi-image super-resolution. Another family of methods is single-image super-resolution, which tries to learn the redundancy present in an image and reconstruct the lost information from a single low-resolution image. Deep learning is currently one of the state-of-the-art approaches for reconstructing high-resolution images. In this research, we propose Deep Denoising Super Resolution (DDSR), a deep neural network for effectively reconstructing a high-resolution image from a low-resolution one.
Keywords: resolution, deep-learning, neural network, de-blurring
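The learned DDSR network is not reproduced here; as a hedged point of reference, this is the plain bilinear-interpolation baseline that single-image super-resolution methods are usually compared against (and that many learned methods refine):

```python
# Bilinear upscaling baseline: each output pixel is a distance-weighted
# blend of the four nearest input pixels. Images are lists of rows.
def bilinear_upscale(img, factor=2):
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            # map output coords back onto the input grid
            y = i * (h - 1) / (H - 1) if H > 1 else 0.0
            x = j * (w - 1) / (W - 1) if W > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx) +
                         img[y0][x1] * (1 - dy) * dx +
                         img[y1][x0] * dy * (1 - dx) +
                         img[y1][x1] * dy * dx)
    return out

small = [[0.0, 1.0], [1.0, 0.0]]
big = bilinear_upscale(small)   # 2x2 -> 4x4, corner values preserved
```

Interpolation like this can only smooth between existing samples; a learned network can hallucinate plausible high-frequency detail and simultaneously suppress noise, which is the gap DDSR targets.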
Procedia PDF Downloads 517
4887 A Low Power Consumption Routing Protocol Based on a Meta-Heuristics
Authors: Kaddi Mohammed, Benahmed Khelifa D. Benatiallah
Abstract:
A sensor network consists of a large number of sensors deployed in an area to monitor it and communicate with each other through a wireless medium. Routing the collected data through the network consumes most of the energy of the sensor nodes. For this reason, multiple routing approaches have been proposed to conserve energy at the sensors and to overcome the challenges of its limitation. In this work, we propose a new low-energy-consumption routing protocol for wireless sensor networks based on a meta-heuristic method. Our protocol spends energy more fairly when routing captured data to the base station.
Keywords: WSN, routing, energy, heuristic
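The abstract's premise, that routing dominates node energy, can be illustrated with the first-order radio model common in the WSN literature; the paper's meta-heuristic itself is not reproduced, and the constants below are the usual textbook values, assumed here rather than taken from the abstract:

```python
# First-order radio model: transmission costs electronics energy per bit
# plus amplifier energy growing with distance squared; receiving costs
# electronics energy only. Constants are typical literature values.
E_ELEC = 50e-9      # J/bit, TX/RX electronics (assumed)
EPS_AMP = 100e-12   # J/bit/m^2, amplifier (assumed)

def tx_energy(bits, dist):
    return E_ELEC * bits + EPS_AMP * bits * dist ** 2

def rx_energy(bits):
    return E_ELEC * bits

def route_energy(bits, hops):
    # total energy to push one packet along a list of hop distances
    e = sum(tx_energy(bits, d) for d in hops)
    e += rx_energy(bits) * (len(hops) - 1)   # intermediate relays receive
    return e

direct = route_energy(4000, [100.0])         # 4000-bit packet, one 100 m hop
relayed = route_energy(4000, [50.0, 50.0])   # same distance via a relay
```

Because the amplifier term grows with d², splitting a long hop saves energy overall, but it also concentrates drain on relay nodes; the "fair" energy spending the protocol targets is precisely about spreading that relay burden.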
Procedia PDF Downloads 343
4886 Shoreline Change Estimation from Survey Image Coordinates and Neural Network Approximation
Authors: Tienfuan Kerh, Hsienchang Lu, Rob Saunders
Abstract:
Shoreline erosion caused by global warming and sea level rise may result in the loss of land areas, so shorelines should be examined regularly to reduce possible negative impacts. In this study, three sets of survey images, obtained in 1990, 2001, and 2010, are first digitalized using graphical software to establish the spatial coordinates of six major beaches around the island of Taiwan. Then, by overlaying the multi-period images, the change of shoreline can be observed from the distribution of coordinates. In addition, neural network approximation is used to develop a model for predicting shoreline variation in 2015 and 2020. The comparison results show no significant change in total sandy area for any of the beaches across the three periods. However, the prediction results show that two beaches may exhibit an increase in total sandy area at a statistical 95% confidence level. The method adopted in this study may be applicable to other shorelines of interest around the world.
Keywords: digitalized shoreline coordinates, survey image overlaying, neural network approximation, total beach sandy areas
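A total sandy area of the kind compared across the survey periods can be computed from digitized shoreline coordinates with the shoelace formula, assuming the beach boundary is available as a closed polygon (the function and data below are a generic sketch, not the study's actual processing pipeline):

```python
def polygon_area(coords):
    """Shoelace formula: area enclosed by a closed polygon given as
    an ordered list of (x, y) vertices, e.g. digitised shoreline points."""
    n = len(coords)
    s = 0.0
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

Running this on the polygons from two survey epochs and differencing the results gives the change in sandy area that the study tracks.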
Procedia PDF Downloads 272
4885 Analysis of Policy Issues on Computer-Based Testing in Nigeria
Authors: Samuel Oye Bandele
Abstract:
A policy is a system of principles that guides the activities and strategic decisions of an organisation in order to achieve stated objectives and meet expected outcomes. A Computer-Based Test (CBT) policy is therefore a statement of intent to drive CBT programmes, and it should be implemented as a procedure or protocol. Policies are generally adopted by an organisation or a nation. The concern in this paper is the consideration and analysis of issues that are significant to evolving an acceptable policy that will drive the new CBT innovation in Nigeria. Public examinations and internal examinations in Nigerian higher educational institutions are gradually making a radical shift from paper-based (paper-pencil) to computer-based testing. The need for an objective and empirical analysis of policy issues relating to CBT therefore became expedient. The following issues in the evolution of CBT in Nigeria were identified as requiring policy backing: requirements for establishing CBT centres, the purpose of CBT, types and acquisition of CBT equipment, qualifications of staff (professional, technical, and regular), and security plans and the curbing of cheating during examinations, among others. A descriptive research design was employed, based on a population consisting of principal officers (policy makers), staff (teaching and non-teaching; policy implementers), CBT staff (technical and professional; policy support), and candidates (internal and external). A fifty-item researcher-constructed questionnaire on policy issues was used to collect data from 600 subjects drawn from higher institutions in South West Nigeria, using purposive and stratified random sampling techniques. The data collected were analysed using descriptive (frequency counts, means, and standard deviations) and inferential (t-test, ANOVA, regression, and factor analysis) techniques.
Findings from this study showed, among others, that the factor loadings had significant weights on the organisational and national policy issues concerning CBT innovation in Nigeria.
Keywords: computer-based testing, examination, innovation, paper-based testing, paper pencil based testing, policy issues
Procedia PDF Downloads 248
4884 Effect of Enterprise Digital Transformation on Enterprise Growth: Theoretical Logic and Chinese Experience
Authors: Bin Li
Abstract:
In the era of the digital economy, digital transformation has gradually become a strategic choice for enterprise development, but systematic research from the perspective of enterprise growth is relatively lacking. Based on a sample of Chinese A-share listed companies from 2011 to 2021, this paper constructs a digital transformation index system and an enterprise growth composite index to empirically test the impact of enterprise digital transformation on enterprise growth and its mechanism. The results show that digital transformation can significantly promote corporate growth. The mechanism analysis finds that reducing operating costs, optimizing human capital structure, promoting R&D output, and improving digital innovation capability play important intermediary roles in the process by which digital transformation promotes corporate growth. At the same time, the level of external digital infrastructure and the strength of organizational resilience play positive moderating roles in this process. In addition, alongside a heterogeneity analysis of the enterprises, this paper deepens the analysis of the driving factors and digital technology support of digital transformation, as well as the three dimensions of enterprise growth, thus extending the depth of research on enterprise digital transformation.
Keywords: digital transformation, enterprise growth, digital technology, digital infrastructure, organization resilience, digital innovation
Procedia PDF Downloads 61
4883 A Study of Behavioral Phenomena Using an Artificial Neural Network
Authors: Yudhajit Datta
Abstract:
Will is a phenomenon that has puzzled humanity for a long time. It is a common belief that the will power of an individual affects the success that individual achieves in life. It is thought that a person endowed with great will power can overcome even the most crippling setbacks of life, while a person with a weak will cannot make the most of life even with the greatest of assets. Behavioral aspects of the human experience such as will are rarely subjected to quantitative study, owing to the numerous uncontrollable parameters involved. This work is an attempt to subject the phenomenon of will to the test of an artificial neural network. The claim being tested is that the will power of an individual largely determines the success achieved in life. In the study, an attempt is made to incorporate the behavioral phenomenon of will into a computational model using data pertaining to the success of individuals obtained from an experiment. A neural network is trained using data based on part of the model, and subsequently used to make predictions regarding will corresponding to data points of success. If the prediction is in agreement with the model values, the model is retained as a candidate. Ultimately, the best-fit model from among the many candidates is selected and used for studying the correlation between success and will.
Keywords: will power, will, success, apathy factor, random factor, characteristic function, life story
Procedia PDF Downloads 379
4882 Event Driven Dynamic Clustering and Data Aggregation in Wireless Sensor Network
Authors: Ashok V. Sutagundar, Sunilkumar S. Manvi
Abstract:
Energy, delay, and bandwidth are the prime concerns in a wireless sensor network (WSN); optimizing energy usage and using bandwidth efficiently are important design issues. Event-triggered data aggregation facilitates such optimization for the event-affected area of a WSN. Reliable delivery of critical information to the sink node is also a major challenge. To tackle these issues, we propose an event-driven dynamic clustering and data aggregation scheme for WSNs that extends the lifetime of the network by minimizing redundant data transmission. The proposed scheme operates as follows: (1) whenever an event is triggered, the event-triggered node selects a cluster head; (2) the cluster head gathers data from the sensor nodes within the cluster; (3) the cluster head identifies and classifies the events in the collected data using a Bayesian classifier; (4) the data are aggregated using a statistical method; (5) the cluster head discovers paths to the sink node using residual energy, path distance, and bandwidth; (6) if the aggregated data are critical, the cluster head sends them over multiple paths for reliable communication; (7) otherwise, the aggregated data are transmitted towards the sink node over the single path with the most bandwidth and residual energy. The performance of the scheme is validated for various WSN scenarios to evaluate the effectiveness of the proposed approach in terms of aggregation time, cluster formation time, and energy consumed for aggregation.
Keywords: wireless sensor network, dynamic clustering, data aggregation, wireless communication
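The forwarding decision in steps (5)-(7) can be sketched as a small path-selection routine: critical data fans out over all discovered paths, while ordinary data takes the single best-scoring path. The additive score combining bandwidth, energy, and distance is an assumed illustration; the abstract does not specify the exact metric:

```python
def select_paths(paths, critical):
    """Choose forwarding paths for aggregated data.

    paths: list of dicts with 'energy', 'distance', 'bandwidth' keys.
    Critical data uses every path (multipath, step 6); otherwise the
    single path with the best bandwidth/energy/distance score is used
    (step 7). The linear score below is an illustrative assumption.
    """
    if critical:
        return paths  # redundant multipath delivery for reliability

    def score(p):
        return p["bandwidth"] + p["energy"] - p["distance"]

    return [max(paths, key=score)]
```

A design note: sending critical data over every path trades extra energy for delivery reliability, which matches the scheme's stated priority for critical information.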
Procedia PDF Downloads 451
4881 Modelling and Optimisation of Floating Drum Biogas Reactor
Authors: L. Rakesh, T. Y. Heblekar
Abstract:
This study entails the development and optimization of a mathematical model for a floating drum biogas reactor from first principles, using thermal and empirical considerations. The model was derived on the basis of mass conservation, lumped-mass heat transfer formulations, and empirical biogas formation laws. The treatment leads to a system of coupled nonlinear ordinary differential equations whose solution maps four time-independent controllable parameters to five output variables that adequately describe the reactor performance. These equations were solved numerically using the fourth-order Runge-Kutta method for a range of input parameter values. Using the data so obtained, an artificial neural network with a single hidden layer was trained using the Levenberg-Marquardt damped least squares (DLS) algorithm. This network was then fine-tuned for optimal mapping by varying the hidden layer size. The resulting fast forward model was employed as a health-score generator in a bacterial foraging optimization code. The optimal operating state of the simplified biogas reactor was thus obtained.
Keywords: biogas, floating drum reactor, neural network model, optimization
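The fourth-order Runge-Kutta integration used to solve the reactor ODEs is standard; a single scalar step can be sketched as follows (the reactor's actual right-hand side is not given in the abstract, so a generic `f(t, y)` is assumed):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y).

    Four slope evaluations are combined with weights 1-2-2-1, giving
    local truncation error O(h^5). In the reactor model, y would be a
    vector of state variables and f the coupled nonlinear system.
    """
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Stepping this repeatedly over the parameter grid produces the input/output pairs on which the surrogate neural network is then trained.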
Procedia PDF Downloads 143
4880 Subjective Quality Assessment for Impaired Videos with Varying Spatial and Temporal Information
Authors: Muhammad Rehan Usman, Muhammad Arslan Usman, Soo Young Shin
Abstract:
The new era of digital communication has brought many challenges that network operators need to overcome. The high demand for mobile data rates requires improved networks, which is a challenge for operators in terms of maintaining the quality of experience (QoE) of their consumers. In live video transmission, there is a sheer need for live surveillance of the videos in order to maintain the quality of the network. For this purpose, objective algorithms are employed to monitor the quality of the videos transmitted over a network. To test these objective algorithms, subjective quality assessment of the streamed videos is required, as the human eye is the best source of perceptual assessment. In this paper, we conduct a subjective evaluation of videos with varying spatial and temporal impairments. The videos were impaired with frame-freezing distortions so that the impact of frame freezing on the quality of experience could be studied. We present subjective Mean Opinion Scores (MOS) for these videos, which can be used for fine-tuning objective video quality assessment algorithms.
Keywords: frame freezing, mean opinion score, objective assessment, subjective evaluation
Procedia PDF Downloads 494
4879 A Multi-Agent System for Accelerating the Delivery Process of Clinical Diagnostic Laboratory Results Using GSM Technology
Authors: Ayman M. Mansour, Bilal Hawashin, Hesham Alsalem
Abstract:
Faster delivery of laboratory test results is one of the most noticeable signs of good laboratory service and is often used as a key performance indicator of laboratory performance. Despite the availability of technology, the delivery time of clinical laboratory test results continues to be a cause of customer dissatisfaction, which makes patients feel frustrated and careless about collecting their results. Medical clinical laboratory test results are highly sensitive and can harm patients, especially in severe cases, if they are delivered at the wrong time, since the results affect the treatment given by physicians when they arrive on time. Efforts should therefore be made to ensure faster delivery of lab test results by utilizing a new trusted, robust, and fast system. In this paper, we propose a distributed multi-agent system to speed up the process of laboratory test result delivery using SMS. The developed system relies on SMS messages because of the wide availability of the GSM network compared to other networks. The software provides the capability of knowledge sharing between different units and different laboratory medical centers. The system was built using the Java programming language. Several techniques were possible for implementing the proposed system. One of these is the peer-to-peer (P2P) model, in which all peers are treated equally and the service is distributed among all peers of the network. However, in the pure P2P model it is difficult to maintain the coherence of the network, discover new peers, and ensure security. Security is a particularly important issue, since each node is allowed to join the network without any control mechanism. We therefore adopt a hybrid P2P model, between the client/server model and the pure P2P model, using GSM technology through SMS messages. This model satisfies our needs. A GUI has been developed to provide laboratory staff with a simple and easy way to interact with the system.
The system provides a quick response rate, and decisions are made faster than with manual methods. This will help save patients' lives.
Keywords: multi-agent system, delivery process, GSM technology, clinical laboratory results
Procedia PDF Downloads 249
4878 Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images
Authors: Masood Varshosaz, Kamyar Hasanpour
Abstract:
In recent years, there has been growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep the data acquisition cost low, augmentation techniques can be used to create additional data from existing images. Many such techniques can generate variations of an original image to improve the performance of deep learning algorithms. While data augmentation is generally assumed to improve the accuracy and robustness of the models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or complexity of implementing the techniques. To this end, the impact of data augmentation on the performance of deep learning models must be evaluated. In this paper, we evaluated the currently most common 2D data augmentation techniques on a standard convolutional network trained to recognize humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combinations. The results showed that the augmented models perform 1-3% better than the base network. However, as the augmented images only contain the human parts already visible in the original images, a new data augmentation approach is needed to include the invisible parts of the human body. We therefore suggest a new method that employs simulated 3D human models to generate new data for training the network.
Keywords: human recognition, deep learning, drones, disaster mitigation
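The simplest of the evaluated augmentations, flipping, can be sketched without any imaging library by treating an image as a 2D grid; this is a generic illustration, not the paper's actual augmentation pipeline:

```python
def augment_flips(image):
    """Return (horizontal_flip, vertical_flip) of a 2D image given as
    a list of rows. Both are label-preserving augmentations: a human
    remains a human under mirroring, so the training label is reused."""
    horizontal = [list(reversed(row)) for row in image]      # mirror left-right
    vertical = [list(row) for row in reversed(image)]        # mirror top-bottom
    return horizontal, vertical
```

Each original training image thus yields two extra samples at essentially zero acquisition cost, which is exactly the economy that motivates augmentation in the abstract.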
Procedia PDF Downloads 95
4877 Artificial Neural Network Regression Modelling of GC/MS Retention of Terpenes Present in Satureja montana Extracts Obtained by Supercritical Carbon Dioxide
Authors: Strahinja Kovačević, Jelena Vladić, Senka Vidović, Zoran Zeković, Lidija Jevrić, Sanja Podunavac Kuzmanović
Abstract:
Supercritical extracts of the highly valued medicinal plant Satureja montana were prepared by supercritical carbon dioxide extraction in the carbon dioxide pressure range from 125 to 350 bar and the temperature range from 40 to 60°C. Chemical profiles (aromatic constituents) of the S. montana extracts were obtained using a GC/MS method of analysis. Self-training artificial neural networks were applied to predict the retention times of the analyzed terpenes in the GC/MS system. The best ANN model obtained was a multilayer perceptron (MLP 11-11-1), with tanh hidden activation, identity output activation, and the Broyden-Fletcher-Goldfarb-Shanno training algorithm. The correlation measures of the obtained network were: R(training) = 0.9975, R(test) = 0.9971, and R(validation) = 0.9999. The comparison of the experimental and predicted retention times of the analyzed compounds showed a very high correlation (R = 0.9913) and significant predictive power of the established neural network.
Keywords: ANN regression, GC/MS, Satureja montana, terpenes
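The R values reported above are Pearson correlation coefficients between predicted and experimental retention times; the computation can be sketched in a few lines (generic formula, not tied to the study's software):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. experimental vs. ANN-predicted GC/MS retention times."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

An R near 1, as in the study's 0.9913, means the predicted retention times track the experimental ones almost perfectly up to a linear relationship.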
Procedia PDF Downloads 452
4876 Internet of Things Networks: Denial of Service Detection in Constrained Application Protocol Using Machine Learning Algorithm
Authors: Adamu Abdullahi, On Francisca, Saidu Isah Rambo, G. N. Obunadike, D. T. Chinyio
Abstract:
The paper discusses the threat of Denial of Service (DoS) attacks on the Constrained Application Protocol (CoAP) in Internet of Things (IoT) networks. As billions of IoT devices are expected to be connected to the internet in the coming years, these devices are vulnerable to attacks that disrupt their functioning. This research tackles the issue by applying mixed qualitative and quantitative methods for feature selection, feature extraction, and clustering algorithms to detect DoS attacks on CoAP using a machine learning algorithm (MLA). The main objective of the research is to enhance the security scheme for CoAP in the IoT environment by analyzing the nature of DoS attacks and identifying a new set of features for detecting them in the IoT network environment. The aim is to demonstrate the effectiveness of the MLA in detecting DoS attacks and to compare it with conventional intrusion detection systems for securing CoAP in the IoT environment. Findings: the research identifies the appropriate node at which to detect DoS attacks in the IoT network environment and demonstrates how to detect the attacks with the MLA. The detection accuracy in both the classification and network simulation environments shows that the k-means algorithm scored the highest percentage in the training and testing evaluation, with the network simulation platform achieving the highest overall accuracy of 99.93%. The work also reviews conventional intrusion detection systems for securing CoAP in the IoT environment and discusses the DoS security issues associated with CoAP.
Keywords: algorithm, CoAP, DoS, IoT, machine learning
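The k-means clustering that scored highest in the study can be sketched in one dimension, e.g. clustering a traffic feature such as request rate so that attack traffic separates from normal traffic. This is a minimal generic k-means, not the paper's feature set or implementation:

```python
import random

def kmeans_1d(values, k, iterations=20, seed=0):
    """Minimal 1-D k-means; returns the final centroids, sorted.

    For DoS detection one might cluster a per-node feature such as
    CoAP request rate: a well-separated high centroid then flags
    flooding behaviour (illustrative assumption, not the paper's
    exact features)."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[nearest].append(v)
        # recompute each centroid; keep the old one if a cluster empties
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)
```

With two clearly separated groups of rates, the two centroids converge to the group means, and new observations can be labelled by their nearest centroid.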
Procedia PDF Downloads 80
4875 Spontaneous Message Detection of Annoying Situation in Community Networks Using Mining Algorithm
Authors: P. Senthil Kumari
Abstract:
The main concerns in data mining research are the controls applied to text data for handling ambiguity, noise, and incompleteness. We describe an innovative approach to detecting spontaneous (unplanned) text data in community networks by means of a classification mechanism. In a practical domain application, with the modest privacy settings provided by a community network, annoying content is avoided by partitioning consumer messages. The mining methodology provides the capability to directly filter the messages and likewise improve the quality of their ordering. We adopt learning-centered mining approaches with a pre-processing technique to accomplish this. Our work deals with rule-based personalization for automatic text categorization, which is appropriate in many different frameworks and offers a tolerance value that permits comments to be classified according to a variety of conditions associated with the policy or rule arrangements processed by the learning algorithm. Remarkably, we find that the choice of classifier strongly affects how well class labels are predicted for controlling inadequate documents on the community network.
Keywords: text mining, data classification, community network, learning algorithm
Procedia PDF Downloads 508
4874 Brain Age Prediction Based on Brain Magnetic Resonance Imaging by 3D Convolutional Neural Network
Authors: Leila Keshavarz Afshar, Hedieh Sajedi
Abstract:
Estimating biological brain age from MR images is a topic that has been much addressed in recent years because of its importance for the early diagnosis of diseases such as Alzheimer's. In this paper, we use a 3D Convolutional Neural Network (CNN) to provide a method for estimating the biological age of the brain. The 3D-CNN model is trained on MRI data that has been normalized. In addition, to reduce computation while preserving overall performance, some effective slices are selected for age estimation. With this method, the biological age of individuals was estimated from the selected normalized data with a Mean Absolute Error (MAE) of 4.82 years.
Keywords: brain age estimation, biological age, 3D-CNN, deep learning, T1-weighted image, SPM, preprocessing, MRI, canny, gray matter
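The MAE metric reported above is the average absolute difference between predicted brain ages and the subjects' chronological ages; its computation is a one-liner (generic formula, not the paper's code):

```python
def mean_absolute_error(predicted, actual):
    """MAE between predicted brain ages and chronological ages:
    the average of |prediction - truth| over all subjects."""
    if len(predicted) != len(actual) or not actual:
        raise ValueError("need two non-empty series of equal length")
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
```

An MAE of 4.82 years therefore means the model's brain-age estimates deviate from chronological age by under five years on average.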
Procedia PDF Downloads 148
4873 Analysis of Decentralized on Demand Cross Layer in Cognitive Radio Ad Hoc Network
Authors: A. Sri Janani, K. Immanuel Arokia James
Abstract:
In cognitive radio ad hoc networks, different unlicensed users may acquire different sets of available channels. This non-uniform spectrum availability imposes special design challenges for broadcasting in CR ad hoc networks. A cognitive radio automatically detects available channels in the wireless spectrum; this is a form of dynamic spectrum management. Cross-layer optimization is proposed, which also allows distant secondary users to be involved in channel work. It can thus increase the throughput and overcome collisions and time delay.
Keywords: cognitive radio, cross layer optimization, CR mesh network, heterogeneous spectrum, mesh topology, random routing optimization technique
Procedia PDF Downloads 359