Search results for: conventional neural network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8662

5332 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Effective shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested how genus and species prediction correctness varied with training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector’s accuracy, as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species.
All data-generation methods were processed without manual interaction. As media-based remote monitoring becomes a dominant method for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
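The two-stage pipeline the abstract describes (an object detector proposes shark locations, then a classifier assigns genus and species hierarchically) can be sketched in plain Python. This is a minimal sketch of the pipeline structure only: the detector and classifier callables below are hypothetical stubs standing in for the authors' trained CNN models, and the names and thresholds are illustrative assumptions.

```python
# Sketch of a two-stage detect-then-classify pipeline. The models here
# are hypothetical stubs, not the actual Shark Detector CNNs.

def detect_sharks(image, detector):
    """Run the detector; keep proposed boxes above a confidence threshold."""
    return [box for box, score in detector(image) if score >= 0.5]

def classify_shark(crop, genus_model, species_models):
    """Hierarchical classification: genus first, then species within that genus."""
    genus, g_conf = genus_model(crop)
    species, s_conf = species_models[genus](crop)
    return genus, species, g_conf * s_conf

def shark_detector(image, detector, genus_model, species_models):
    results = []
    for box in detect_sharks(image, detector):
        crop = box  # a real pipeline would crop the image to the box here
        results.append(classify_shark(crop, genus_model, species_models))
    return results

# Stub models for demonstration only.
detector = lambda img: [((0, 0, 10, 10), 0.9), ((5, 5, 8, 8), 0.3)]
genus_model = lambda crop: ("Carcharhinus", 0.8)
species_models = {"Carcharhinus": lambda crop: ("C. leucas", 0.7)}
predictions = shark_detector("image.jpg", detector, genus_model, species_models)
```

The low-confidence second box is filtered out, so only one detection flows into the genus/species stage.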

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 121
5331 Productivity, Phenolic Composition and Antioxidant Activity of Arrowroot (Maranta arundinacea)

Authors: Maira C. M. Fonseca, Maria Aparecida N. Sediyama, Rosana Goncalves R. das Dores, Sanzio Mollica Vidigal, Alberto C. P. Dias

Abstract:

Among Brazilian plant diversity, many species are used as food and considered minor crops (non-conventional plant foods, NCPF). Arrowroot (Maranta arundinacea) is an NCPF whose rhizome starch does not contain gluten. Thus, arrowroot flour can be consumed by celiac people. Additionally, some medicinal and functional properties are attributed to arrowroot leaves, which currently are underutilized. In Brazil, it is cultivated mainly by small-scale farmers, and there is no specific recommendation for fertilization. This work aimed to determine the best fertilization for rhizome production and to verify its influence on the phenolic composition and antioxidant activity of leaf extracts. Two arrowroot varieties, “Common” and “Seta”, were cultivated in an organic system in the state of Minas Gerais, Brazil, using cattle manure with three levels of nitrogen (N) (0, 300 and 900 kg N ha-1). The experiment design was in randomized blocks with four replicates. The highest production of rhizomes in both varieties, “Common” (38198.24 kg ha-1) and “Seta” (43567.71 kg ha-1), was obtained with the use of 300 kg N ha-1. With this fertilization, the total aerial part, petiole and leaf production in the varieties were, respectively: “Common” (190.312 kg ha-1; 159.312 kg ha-1; 31.100 kg ha-1) and “Seta” (207.656 kg ha-1; 180.539 kg ha-1; 27.062 kg ha-1). Methanolic leaf extracts were analysed by HPLC-DAD. The major phenolic compounds found were caffeoylquinic acids, p-coumaric derivatives and flavonoids. In general, the production of these compounds decreased significantly at the highest nitrogen level (900 kg N ha-1); with 300 kg N ha-1, phenolic production was similar to the control. The antioxidant activity was evaluated using the DPPH method, and around 60% radical scavenging was detected when 0.1 mg/mL of plant extracts was used. We concluded that fertilization with 300 kg N ha-1 increased arrowroot rhizome production while maintaining the phenolic compound yield in the leaves.

Keywords: antioxidant activity, non-conventional plants, organic fertilization, phenolic compounds

Procedia PDF Downloads 204
5330 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on crop mechanistic modeling. They describe crop growth in interaction with its environment as a dynamical system. However, the calibration of such a dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach for yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression or Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate crop prediction capacity.
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
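The evaluation protocol above (5-fold cross-validation scored with RMSEP and MAEP) can be sketched in plain Python. This is a minimal sketch: the "model" is a training-fold mean-yield predictor standing in for the paper's regression methods, and the yield records are synthetic, not the USDA data.

```python
import math
import random

def rmsep(actual, predicted):
    """Root mean square error of prediction (RMSEP)."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def maep(actual, predicted):
    """Mean absolute error of prediction (MAEP)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def k_fold_scores(records, k=5, seed=0):
    """Shuffle, split into k folds, fit on k-1 folds, score the held-out fold.
    The 'fit' here is just the training-fold mean yield, a placeholder
    for the regression models compared in the paper."""
    data = records[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [y for j, f in enumerate(folds) if j != i for y in f]
        mean_yield = sum(train) / len(train)
        preds = [mean_yield] * len(test)
        scores.append((rmsep(test, preds), maep(test, preds)))
    return scores

rng = random.Random(1)
yields = [rng.gauss(100, 10) for _ in range(720)]  # 720 synthetic county records
scores = k_fold_scores(yields)
```

Note that RMSEP is always at least as large as MAEP for the same fold, since the root mean square dominates the arithmetic mean of absolute errors.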

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 231
5329 Islam and Democracy: A Paradoxical Study of Syed Maududi and Javed Ghamidi

Authors: Waseem Makai

Abstract:

The term ‘political Islam’ now seems to have gained centre stage in every discourse pertaining to Islamic legitimacy and compatibility with modern civilisations. A never-ceasing tradition of the philosophy of the caliphate, which keeps overriding the options of any alternative political institution in the Muslim world, still permeates a huge faction of believers. Fully accustomed to the proliferation of changes and developments in the individual, social and natural dispositions of the world, Islamic theologians responded to this flux through both conventional and modernist approaches. The so-called conventional approach was quintessentially that of the interpretations put forth by Syed Maududi, with a new comprehensive, academic and powerful vigour never seen before. He generated avant-garde scholarship that would bear testimony to his statements, made to uphold the political institution of Islam as supreme and noble. However, it was not his trait to challenge the established views but to codify them in a frame that a man of the 20th century would find captivating to his heart and satisfactory to his rationale. Delicate questions such as the selection of a caliph, the implementation of Islamic commandments (Sharia), interest-free banking, imposing a tax (jizya) on non-believers, waging holy war (jihad) for the expansion of Islamic boundaries, stoning for adultery and capital punishment for apostates were all there in the scholarship that he spent his whole life defending in the best possible manner. What and where he went wrong in all this was to be pointed out later by his one-time disciple, Javed Ahmad Ghamidi. Ghamidi has been accused of struggling between Scylla and Charybdis as he tries to remain steadfast to his basic Islamic tenets while modernising their interpretations to bring them into harmony with the Western ideals of democracy and liberty.
His blatant acknowledgement of putting democracy on a high pedestal, his calling the implementation of Sharia a non-mandatory task and his refusal to bracket people into the categories of Zimmi and Kaafir fully vindicate his stance against conventional narratives like that of Syed Maududi. Ghamidi goes to the extent of attributing current forms of radicalism and extremism, as exemplified in the operations of organisations like ISIS in Iraq and Syria and Tehreek-e-Taliban in Pakistan, to the version of political Islam upheld not only by Syed Maududi but also by other prominent theologians such as Ibn Taymiyyah, Syed Qutub and Dr. Israr Ahmad. Ghamidi's stance has cost him dearly: his allegedly insubstantial claims earned him enough hostility that he had to leave his homeland after two of his close allies were brutally murdered. Syed Maududi and Javed Ghamidi stand poles apart in their understanding of Islam and its political domain. Determining who has the more appropriate methodology, scholarship and execution in his mode of comprehension is an intriguing task, worth carrying out in detail.

Keywords: caliphate, democracy, ghamidi, maududi

Procedia PDF Downloads 200
5328 Microwave Sanitization of Polyester Fabrics

Authors: K. Haggag, M. Salama, H. El-Sayed

Abstract:

Polyester fabrics were sanitized by exposing them to vaporized water under conventional heating or microwave irradiation. Hydrogen peroxide was added to the humid sanitizing environment as a disinfectant. The sanitization process was found to be effective against two types of bacteria, namely Escherichia coli ATCC 2666 (G −ve) and Staphylococcus aureus ATCC 6538 (G +ve). The effect of the sanitization process on some of the inherent properties of the polyester fabrics was monitored.

Keywords: polyester, fabric, sanitization, microwave, bacteria

Procedia PDF Downloads 376
5327 Review of Malaria Diagnosis Techniques

Authors: Lubabatu Sada Sodangu

Abstract:

Malaria is a major cause of death in tropical and subtropical nations. Malaria cases are continually rising as a result of a number of factors, despite the fact that the condition is now treatable using effective methods. In this situation, quick and effective diagnostic methods are essential for the management and control of malaria. Malaria diagnosis using conventional methods is still troublesome; hence, new technologies have been created and implemented to overcome their drawbacks. This review describes the currently known malaria diagnostic techniques, along with their strengths and shortcomings.

Keywords: malaria, technique, diagnosis, Africa

Procedia PDF Downloads 55
5325 Elucidation of the Sequential Transcriptional Activity in Escherichia coli Using Time-Series RNA-Seq Data

Authors: Pui Shan Wong, Kosuke Tashiro, Satoru Kuhara, Sachiyo Aburatani

Abstract:

Functional genomics and gene regulation inference have readily expanded our knowledge and understanding of gene interactions with regard to expression regulation. With the advancement of time-series transcriptome sequencing comes the ability to study the sequential changes of the transcriptome. The method presented here augments existing regulation networks accumulated in the literature with transcriptome data gathered from time-series experiments to construct a sequential representation of transcription factor activity. The method is applied to a time-series RNA-Seq dataset from Escherichia coli as it transitions from growth to stationary phase over five hours. We investigate the various metabolic activities in gene regulation processes by taking advantage of the correlation between regulatory gene pairs to examine their activity on a dynamic network. In particular, the changes in metabolic activity during the phase transition are analyzed, with a focus on the pagP gene as well as other associated transcription factors. The visualization of the sequential transcriptional activity is used to describe the change in metabolic pathway activity originating from the pagP transcription factor, phoP. The results show a shift from amino acid and nucleic acid metabolism to energy metabolism during the transition to stationary phase in E. coli.
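The correlation between regulatory gene pairs across time points, which the method uses to score activity on the dynamic network, amounts to a correlation over expression time series. A minimal sketch, assuming Pearson correlation and made-up five-hour expression profiles (not the paper's RNA-Seq measurements):

```python
import math

def pearson(x, y):
    """Pearson correlation between two expression time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical expression levels over five hourly time points.
phoP = [1.0, 1.4, 2.1, 3.0, 3.2]   # regulator rising toward stationary phase
pagP = [0.8, 1.1, 1.9, 2.7, 3.1]   # target tracking its regulator
ribo = [3.0, 2.5, 1.7, 1.0, 0.6]   # growth-associated gene shutting down

r_pair = pearson(phoP, pagP)   # strongly positive: candidate regulatory pair
r_anti = pearson(phoP, ribo)   # strongly negative: opposite phase behaviour
```

Regulator-target pairs with high absolute correlation across the time course are the edges one would overlay on the literature-derived network.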

Keywords: Escherichia coli, gene regulation, network, time-series

Procedia PDF Downloads 372
5324 Assessment of Multi-Domain Energy Systems Modelling Methods

Authors: M. Stewart, Ameer Al-Khaykan, J. M. Counsell

Abstract:

Emissions are a consequence of electricity generation. A major option for low-carbon generation, local energy systems featuring combined heat and power with solar PV (CHPV) have significant potential to increase energy performance, increase resilience, and offer greater control of local energy prices while complementing the UK’s emissions standards and targets. Recent advances in dynamic modelling and simulation of buildings and clusters of buildings using the IDEAS framework have successfully validated a novel multi-vector (simultaneous control of both heat and electricity) approach to integrating the wide range of primary and secondary plant typical of local energy system designs, including CHP, solar PV, gas boilers, absorption chillers and thermal energy storage, together with the associated electrical and hot water networks, all operating under a single unified control strategy. Results from this work indicate through simulation that integrated control of thermal storage can play a pivotal role in optimizing system performance well beyond present expectations. Environmental impact analysis and reporting for all energy systems, including CHPV local energy systems (LES), presently employ a static annual average carbon emissions intensity for grid-supplied electricity. This paper focuses on establishing and validating CHPV environmental performance against conventional emissions values and assessment benchmarks, analyzing emissions performance without and with an active thermal store in a notional group of non-domestic buildings. The results of this analysis are presented and discussed in the context of performance validation and of quantifying the reduced environmental impact of CHPV systems with active energy storage in comparison with conventional LES designs.

Keywords: CHPV, thermal storage, control, dynamic simulation

Procedia PDF Downloads 240
5323 Wireless Sensor Network for Forest Fire Detection and Localization

Authors: Tarek Dandashi

Abstract:

WSNs may provide a fast and reliable solution for the early detection of environmental events like forest fires. This is crucial for alerting and calling for fire brigade intervention. Sensor nodes communicate sensor data to a host station, which enables a global analysis and the generation of a reliable decision on a potential fire and its location. We present a WSN built with TinyOS and nesC for capturing and transmitting a variety of sensor information with controlled sources, data rates and durations, and for recording and displaying activity traces. We propose a similarity distance (SD) between the distribution of currently sensed data and that of a reference. At any given time, a fire causes diverging readings in the reported data, which alters the usual data distribution. Basically, SD consists of a metric on the cumulative distribution function (CDF). SD is designed to be invariant to day-to-day changes of temperature, changes due to the surrounding environment, and normal changes in weather, which preserve the data locality. Evaluation shows that SD sensitivity is quadratic in the increase of sensor node temperature for groups of sensors of different sizes and neighborhoods. Simulation of fire spreading, with ignition placed at random locations under some wind speed, shows that SD takes a few minutes to reliably detect fires and locate them. We also discuss the cases of false negatives and false positives and their impact on decision reliability.
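The abstract defines SD only as a metric on the CDF; a minimal sketch, assuming the sup-norm (Kolmogorov-Smirnov-style) distance between the empirical CDFs of a reference window and the current readings (the temperature values below are illustrative, not the paper's data):

```python
def empirical_cdf(sample):
    """Return a function evaluating the empirical CDF of the sample."""
    xs = sorted(sample)
    n = len(xs)
    def cdf(v):
        # fraction of observations <= v
        count = 0
        for x in xs:
            if x <= v:
                count += 1
            else:
                break
        return count / n
    return cdf

def similarity_distance(reference, current):
    """Sup-norm distance between the two empirical CDFs, evaluated
    at every observed point (a KS-style statistic)."""
    f, g = empirical_cdf(reference), empirical_cdf(current)
    points = sorted(set(reference) | set(current))
    return max(abs(f(v) - g(v)) for v in points)

reference = [21.0, 21.5, 22.0, 22.5, 23.0]   # usual temperatures (°C)
normal = [21.2, 21.6, 22.1, 22.4, 22.9]      # ordinary daily variation
fire = [35.0, 40.0, 45.0, 50.0, 55.0]        # diverging readings

sd_normal = similarity_distance(reference, normal)
sd_fire = similarity_distance(reference, fire)
```

Ordinary variation keeps the distance small, while fire-driven readings push the two CDFs completely apart (distance 1.0), which is the divergence a host station would threshold on.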

Keywords: forest fire, WSN, wireless sensor network, algorithm

Procedia PDF Downloads 262
5322 High and Low Salinity Polymer in Omani Oil Field

Authors: Intisar Al Busaidi, Rashid Al Maamari, Daowoud Al Mahroqi, Mahvash Karimi

Abstract:

In recent years, research studies have been performed on the hybrid application of polymer and low salinity water flooding (LSWF). Numerous technical and economic benefits of low salinity polymer flooding (LSPF) have been reported. However, as with any EOR technology, there are various risks involved in using LSPF. Ion exchange between the porous media and brine is one of the crude oil/brine/rock (COBR) reactions identified as a potential risk in LSPF. To the best of our knowledge, this conclusion was drawn based on bulk rheology measurements, and no explanation was provided on how the water chemistry changed in the presence of polymer. Therefore, this study aimed to understand rock/brine interactions with high and low salinity brine in the absence and presence of polymer using Omani reservoir core plugs. Several single-core flooding experiments were performed with low and high salinity polymer solutions to investigate the influence of partially hydrolyzed polyacrylamide with different brine salinities on cation exchange reactions. Ion chromatography (IC), total organic carbon (TOC), rheological, and pH measurements were conducted on the produced aqueous phase. A higher increase in pH and lower polymer adsorption were observed in LSPF compared with conventional polymer flooding. In addition, IC measurements showed that all produced fluids in the absence and presence of polymer had elevated Ca²⁺, Mg²⁺, K⁺, Cl⁻ and SO₄²⁻ concentrations compared to the injected fluids. However, the divalent cation levels, mainly Ca²⁺, were the highest and remained elevated for several pore volumes in the presence of LSP. The results are in line with rheological measurements, where the highest viscosity reduction was recorded with the highest level of Ca²⁺ production. Despite the viscosity loss due to cation exchange reactions, LSP can be an attractive alternative to conventional polymer flooding in the Marmul field.

Keywords: polymer, ions, exchange, recovery, low salinity

Procedia PDF Downloads 114
5321 Filtering Intrusion Detection Alarms Using Ant Clustering Approach

Authors: Ghodhbani Salah, Jemili Farah

Abstract:

With the growth of cyber attacks, information security has become an important issue all over the world. Many firms rely on security technologies such as intrusion detection systems (IDSs) to manage information technology security risks. IDSs are considered the last line of defense in securing a network and play a very important role in detecting a large number of attacks. However, the main problem with today’s most popular commercial IDSs is that they generate a high volume of alerts and a huge number of false positives. This drawback has become the main motivation for many research papers in the IDS area. Hence, in this paper we present a data mining technique to assist network administrators in analyzing and reducing the false positive alarms produced by an IDS and increasing detection accuracy. Our data mining technique is an unsupervised clustering method based on a hybrid ant algorithm. This algorithm discovers clusters of intruders’ behavior without prior knowledge of the possible number of classes; we then apply the K-means algorithm to improve the convergence of the ant clustering. Experimental results on a real dataset show that our proposed approach is efficient, with a high detection rate and a low false alarm rate.
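The K-means refinement stage applied after the ant clustering can be sketched in plain Python. A minimal sketch: the ant stage itself is omitted, and its output is approximated here by rough initial centroids (an assumption for illustration); the 2-D alert feature vectors are toy data, not IDS alerts.

```python
def kmeans(points, centroids, iterations=10):
    """Refine centroids by alternating assignment and mean-update steps."""
    for _ in range(iterations):
        # Assignment: attach each alert feature vector to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            distances = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Update: move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
    return centroids, clusters

# Toy 2-D alert features: a dense "normal traffic" group and an "intrusion" group.
alerts = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
rough = [(1.0, 1.0), (4.0, 4.0)]  # stand-in for the ant-clustering output
centroids, clusters = kmeans(alerts, rough)
```

Starting K-means from the ant stage's clusters rather than random positions is exactly what lets it converge quickly and removes the need to guess the number of classes up front.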

Keywords: intrusion detection system, alarm filtering, ANT class, ant clustering, intruders’ behaviors, false alarms

Procedia PDF Downloads 404
5320 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes

Authors: Stefan Papastefanou

Abstract:

Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was previously determined by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law, and for the purpose of this paper by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach can be more easily described by reference to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world’s most significant patent law regimes, such as those of China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes.
Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. The inventor of a patent application must be a natural person or a group of persons under the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to a part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only the European patent law regimes but also the Chinese and Singaporean patent law approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
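The genetic breeding models the abstract discusses belong to the evolutionary algorithms family: a population of candidate solutions is bred over generations via selection, crossover, and mutation. A minimal sketch of such a model, maximizing a toy fitness function (all parameters here are illustrative, not tied to any patented invention):

```python
import random

def evolve(fitness, generations=60, pop_size=30, seed=0):
    """Genetic breeding sketch: selection, crossover, mutation over real-valued genes."""
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as breeding stock.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover + mutation: each child blends two parents, plus Gaussian noise.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            children.append((a + b) / 2 + rng.gauss(0, 0.1))
        population = parents + children
    return max(population, key=fitness)

# Toy fitness landscape with its peak at x = 3.
best = evolve(lambda x: -(x - 3.0) ** 2)
```

The evolved solution emerges from the breeding loop rather than from any rule an author wrote down, which is precisely what makes the inventorship question in the paper difficult.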

Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability

Procedia PDF Downloads 108
5319 Optimal Placement of the Unified Power Flow Controller to Improve the Power System Restoration

Authors: Mohammad Reza Esmaili

Abstract:

One of the most important parts of the restoration process of a power network is the synchronization of its subsystems. In this situation, the biggest concern of the system operators is the reduction of the standing phase angle (SPA) between the endpoints of two islands. To this end, system operators perform various actions and maneuvers so that the synchronization of the subsystems is carried out successfully and the system finally reaches acceptable stability. The most common of these actions include load control, generation control and, in some cases, changing the network topology. Although these maneuvers are simple and common, in a weak network under extreme load changes the restoration proceeds slowly. One of the best ways to control the SPA is to use FACTS devices. By applying a soft control signal, these devices can reduce the SPA between two subsystems with greater speed and accuracy, and the synchronization process can be completed in less time. The unified power flow controller (UPFC), a series-parallel compensation device that changes the transmission line power flow and properly adjusts the phase angle, is the option proposed in this research. With the optimal placement of a UPFC in a power system, in addition to improving the normal conditions of the system, it is expected to be effective in reducing the SPA during power system restoration. The present paper therefore provides an optimal structure to coordinate three problems, namely improving the division into subsystems, reducing the SPA, and optimal power flow, with the aim of determining the optimal location of the UPFC and the optimal subsystems. The objective functions proposed in this paper include maximizing the quality of the subsystems, reducing the SPA at the endpoints of the subsystems, and reducing the losses of the power system.
Since the simultaneous optimization of the proposed objective functions may produce conflicts, the optimization problem is structured as a non-linear multi-objective problem, and the Pareto optimization method is used to solve it. The technique proposed to carry out the optimization process is the water cycle algorithm (WCA). To evaluate the proposed method, the IEEE 39-bus power system is used.
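The Pareto step amounts to keeping only the non-dominated candidate placements. A minimal sketch, assuming all three objectives (subsystem quality expressed as a cost, SPA, and losses) are written as minimization values; the candidate triples are hypothetical, not IEEE 39-bus results:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions (all objectives minimized)."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (quality cost, SPA in degrees, losses in MW) triples
# for candidate UPFC placements.
candidates = [
    (0.2, 12.0, 3.1),
    (0.3, 9.0, 3.5),    # trades quality and losses for a smaller SPA
    (0.2, 12.0, 3.0),   # dominates the first candidate
    (0.5, 15.0, 4.0),   # dominated on every objective
]
front = pareto_front(candidates)
```

A metaheuristic such as the WCA would generate the candidate set; the front it leaves behind is the trade-off curve the operator chooses from.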

Keywords: UPFC, SPA, water cycle algorithm, multi-objective problem, pareto

Procedia PDF Downloads 66
5318 Bitcoin, Blockchain and Smart Contract: Attacks and Mitigations

Authors: Mohamed Rasslan, Doaa Abdelrahman, Mahmoud M. Nasreldin, Ghada Farouk, Heba K. Aslan

Abstract:

Blockchain is a distributed database that provides transparency, while Bitcoin is a decentralized cryptocurrency (electronic cash) that provides anonymity and is powered by blockchain technology. Smart contracts are programs stored on a blockchain and executed when predetermined conditions are fulfilled. Smart contracts automate agreement execution so that all participants can be immediately certain of the outcome, without any intermediary's involvement or loss of time. Currently, the Bitcoin market is worth billions of dollars. Bitcoin can be transferred from one purchaser to another without the need for an intermediary bank. Network nodes verify Bitcoin transactions through cryptography, and the transactions are registered in a public ledger called the blockchain. Bitcoin can be exchanged for other coins, merchandise, and services. The rapid growth of the Bitcoin market value encourages adversaries to make use of its weaknesses and exploit vulnerabilities for profit; moreover, it motivates scientists to catalogue known vulnerabilities, offer countermeasures, and predict future threats. In this paper, we study blockchain technology and Bitcoin from the attacker's point of view. Furthermore, mitigations for the attacks are suggested, and contemporary security solutions are discussed. Finally, research methods that achieve strict security and privacy protocols are elaborated.

Keywords: cryptocurrencies, blockchain, bitcoin, smart contracts, peer-to-peer network, security issues, privacy techniques

Procedia PDF Downloads 82
5317 Flexible Feedstock Concept in Gasification Process for Carbon-Negative Energy Technology: A Case Study in Malaysia

Authors: Zahrul Faizi M. S., Ali A., Norhuda A. M.

Abstract:

Emissions of greenhouse gases (GHG) from solid waste treatment and dependency on fossil fuels to produce electricity are major concerns in Malaysia as well as globally. Innovation in downdraft gasification with combined heat and power (CHP) systems has the potential to minimize solid waste and reduce the emission of anthropogenic GHG from conventional fossil fuel power plants. However, the efficiency and capability of downdraft gasification to generate electricity from various alternative fuels, for instance, agricultural residues (i.e., woodchips, coconut shells) and municipal solid waste (MSW), are still debated, on top of concerns about the toxicity level of the produced bottom ash. Thus, this study evaluates the adaptability and reliability of a 20 kW downdraft gasification system for generating electricity (while considering the environmental sustainability of the bottom ash) using flexible local feedstock at 20, 40, and 60% mixed ratios of MSW to agricultural residues. Feedstock properties such as feed particle size, moisture, and ash contents are also analyzed to identify the optimal characteristics for the combination of feedstock (feedstock flexibility) that maximizes energy generation. Results show that the gasification system is capable of flexibly accommodating different feedstock compositions, subject to a specific particle size (less than 2 inches) and a moisture content between 15 and 20%. These values enhance gasifier performance and significantly affect the composition of the syngas utilized by the internal combustion engine, which in turn determines energy production. The results obtained in this study provide a new perspective on the transition of the conventional gasification system to a reliable future carbon-negative energy technology, and subsequently promote the commercial scale-up of downdraft gasification systems.

Keywords: carbon-negative energy, feedstock flexibility, gasification, renewable energy

Procedia PDF Downloads 135
5316 Value Index, a Novel Decision Making Approach for Waste Load Allocation

Authors: E. Feizi Ashtiani, S. Jamshidi, M. H. Niksokhan, A. Feizi Ashtiani

Abstract:

Waste load allocation (WLA) policies may use multi-objective optimization methods to find the most appropriate and sustainable solutions. These usually aim to simultaneously minimize two criteria, total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, need to be minimized as well, more binary optimizations must be introduced through different scenarios. In order to reduce the calculation steps, this study presents the value index as an innovative decision-making approach. Since the value index combines both the environmental violation and the treatment costs, it can be maximized simultaneously with the equity index. This implies that defining different scenarios for environmental violations is no longer required. Furthermore, the solution is not necessarily the point with minimal total costs or environmental violations. This idea is tested on the Haraz River in northern Iran. Here, the dissolved oxygen (DO) level of the river is simulated with the Streeter-Phelps equation in MATLAB. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. In the first, the trade-off curves of TC-EV and TC-Inequity are plotted separately, as in the conventional approach. In the second, the Value-Equity curve is derived. The comparative results show that the solutions are in a similar range of inequity with lower total costs. This is due to the freedom of environmental violation attained in the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This shortens the process of reaching the best solutions and may yield a better classification for scenario definition. It is also concluded that decision makers would do better to focus on the value index and on weighting its components to find the most sustainable alternatives based on their requirements.
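The DO simulation step can be sketched with the classical Streeter-Phelps deficit equation; the rate constants and initial conditions below are illustrative placeholders, not the Haraz River calibration:

```python
import math

# Streeter-Phelps dissolved-oxygen sag: deficit D(t) from BOD decay (k_d)
# and reaeration (k_a). All rate constants and initial values here are
# illustrative assumptions, not the study's calibrated parameters.

def do_deficit(t, k_d=0.3, k_a=0.6, L0=10.0, D0=1.0):
    """DO deficit D(t) [mg/L] at travel time t [days] (requires k_d != k_a)."""
    decay = math.exp(-k_d * t) - math.exp(-k_a * t)
    return (k_d * L0 / (k_a - k_d)) * decay + D0 * math.exp(-k_a * t)

def dissolved_oxygen(t, do_sat=9.0, **kw):
    """DO concentration [mg/L]: saturation minus the deficit."""
    return do_sat - do_deficit(t, **kw)

# DO profile over 20 days of travel time, sampled every 0.1 day
profile = [dissolved_oxygen(t / 10.0) for t in range(0, 201)]
```

The characteristic sag-and-recovery shape of `profile` is what the EV criterion would be evaluated against at each allocation candidate.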

Keywords: waste load allocation (WLA), value index, multi objective particle swarm optimization (MOPSO), Haraz River, equity

Procedia PDF Downloads 422
5315 Geosynthetic Tubes in Coastal Structures as a Better Substitute for Shorter Planning Horizons: A Case Study

Authors: A. Pietro Rimoldi, B. Anilkumar Gopinath, C. Minimol Korulla

Abstract:

Coastal engineering structures are conventionally designed for a short planning horizon, usually 20 years. These structures are subjected to different offshore climatic externalities such as waves, tides, and tsunamis during the design life period. The probability of occurrence of these different offshore climatic externalities varies. The impact frequently caused by these externalities on the structures is of concern because it has a significant bearing on the capital and operating cost of the project. There can also be repeated short-interval occurrences of these externalities within the assumed planning horizon, which can cause heavy damage to conventional coastal structures, which are mainly made of rock. Replacing the damaged portion to prevent complete collapse is time consuming and expensive when dealing with hard rock structures. But if coastal structures are made of geosynthetic containment systems, such replacement is quickly possible in the period between two successive occurrences. In order to improve knowledge of these occurrences and enhance predictive capacity, this study estimates the risk of encountering various externalities within the design life period based on the concept of the exponential distribution. This gives an idea of the frequency of occurrences, which in turn indicates whether replacement is necessary and, if so, at what time interval such replacements have to be effected. To validate this theoretical finding, a pilot project has been taken up in the field so that the impact of the externalities can be studied both for a hard rock and a geosynthetic tube structure. The paper brings out the salient features of a case study pertaining to a project in which geosynthetic tubes have been used for reformation of a seawall adjacent to a conventional rock structure on the Alappuzha coast, Kerala, India. The effectiveness of the geosystem in combatting the impact of short-term externalities is brought out.
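The exponential-distribution risk estimate described above can be sketched as follows; the return periods used are illustrative assumptions, not the study's site values:

```python
import math

# Risk of encounter based on the exponential distribution: the probability
# that an event with mean return period T years occurs at least once within
# a design life (planning horizon) of L years is P = 1 - exp(-L/T).

def encounter_probability(design_life_years, return_period_years):
    return 1.0 - math.exp(-design_life_years / return_period_years)

# Illustrative return periods for different offshore externalities
for T in (10, 50, 100):
    p = encounter_probability(20, T)   # 20-year planning horizon
    print(f"T = {T:3d} yr -> encounter risk over 20 yr: {p:.1%}")
```

Comparing these probabilities across externalities indicates how often replacement of a damaged section is likely to be needed within the horizon.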

Keywords: climatic externalities, exponential distribution, geosystems, planning horizon

Procedia PDF Downloads 228
5314 Green Extraction Technologies for Flavonoid-Containing Pharmaceuticals

Authors: Lamzira Ebralidze, Aleksandre Tsertsvadze, Dali Berashvili, Aliosha Bakuridze

Abstract:

Nowadays, there is an increasing demand for biologically active substances from vegetable, animal, and mineral resources. The pharmaceutical, cosmetic, and nutrition industries have great interest in natural compounds. The biggest drawback of conventional extraction methods is the need to use large volumes of organic extractants. The removal of the organic solvent is a multi-stage process, its complete removal cannot be achieved, and residues appear in the final product as impurities. A large amount of waste containing organic solvent damages not only human health but also the environment. Accordingly, researchers are focused on improving extraction methods, aiming to minimize the use of organic solvents and energy and to use alternative solvents and renewable raw materials. In this context, the principles of green extraction were formed. Green extraction is a need of today's environment and a concept that fully corresponds to the challenges of the 21st century. The extraction of biologically active compounds based on green extraction principles is vital for preserving and maintaining biodiversity. Novel green extraction technologies are known as "cold methods" because the extraction temperature is relatively low and does not negatively affect the stability of plant compounds. Novel technologies provide great opportunities to reduce or replace the use of toxic organic solvents, improve the efficiency of the process, enhance extraction yield, and improve the quality of the final product. The objective of the research is the development of green technologies for flavonoid-containing preparations. Methodology: At the first stage of the research, flavonoid-containing preparations (Tincture Herba Leonuri, flamine, rutine) were prepared based on conventional extraction methods: maceration, bismaceration, percolation, and repercolation.
At the same time, the same preparations were prepared based on green technologies: microwave-assisted and UV extraction methods. Product quality characteristics were evaluated by pharmacopoeial methods. At the next stage of the research, the technological-economic characteristics and cost efficiency of products prepared with conventional and novel technologies were determined. For the extraction of flavonoids, water is used as the extractant. Surface-active substances are used as co-solvents to reduce surface tension, which significantly increases the solubility of polyphenols in water. Different concentrations of water-glycerol mixtures, cyclodextrin, and ionic solvents were used for the extraction process. In vitro antioxidant activity will be studied by a spectrophotometric method, using DPPH (2,2-diphenyl-1-picrylhydrazyl) as the antioxidant assay. A further advantage of green extraction methods is the possibility of obtaining a higher yield at low temperature while limiting the extraction of undesirable compounds. That is especially important for the extraction of thermosensitive compounds and maintaining their stability.
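The planned DPPH assay reduces to a simple percent-inhibition calculation from absorbance readings; the values below are placeholders, not measured data from this study:

```python
# DPPH radical-scavenging activity from spectrophotometric absorbance
# readings, expressed as percent inhibition. Absorbance values used in
# the example are illustrative placeholders.

def dpph_inhibition(abs_control, abs_sample):
    """Percent inhibition: drop in DPPH absorbance relative to the control."""
    return (abs_control - abs_sample) / abs_control * 100.0

# e.g. control A = 0.80, extract-treated A = 0.20 -> strong scavenging
activity = dpph_inhibition(0.80, 0.20)
```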

Keywords: extraction, green technologies, natural resources, flavonoids

Procedia PDF Downloads 130
5313 Changes in Textural Properties of Zucchini Slices Under Effects of Partial Predrying and Deep-Fat-Frying

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

Changes in the textural properties of a food material during processing are significant for consumers' evaluation and directly affect their decisions. Thus, any food material should be assessed for its textural properties after any process. In the present study, zucchini slices were partially predried to control and reduce the product's final oil content. A conventional oven was used for partial dehydration of the zucchini slices. Subsequent frying was carried out in an industrial fryer with a temperature controller. This study focused on the effect of this predrying process on the textural properties of fried zucchini slices. Texture profile analysis was performed. Hardness, elasticity, chewiness, and cohesiveness were the studied texture parameters of the fried zucchini slices. Temperature and weight loss were the monitored parameters of the predrying process, whereas, in frying, oil temperature and process time were controlled. Optimization of the two successive processes was done by response surface methodology, one of the commonly used statistical process optimization tools. The models developed for each texture parameter were highly successful in predicting its value as a function of the studied process conditions. Process optimization was performed according to the target values of each property, determined from the directly fried zucchini slices that received the highest score in sensory evaluation. Results indicated that the textural properties of predried and then fried zucchini slices could be controlled by well-established equations. This is thought to be significant for the fried-food industry, where controlling sensory properties, texture foremost among them, is crucial to guiding consumer perception. This project (113R015) has been supported by TUBITAK.
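A second-order response surface model of the kind used here can be sketched as an ordinary least-squares fit; the data below are synthetic, since the measured texture values are not reproduced in the abstract:

```python
import numpy as np

# Sketch of a full second-order response surface model relating two process
# factors (e.g. predrying weight loss and frying time) to one texture
# parameter (e.g. hardness). The data and coefficients are synthetic.

def design_matrix(x1, x2):
    """Full quadratic model terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Synthetic "experiments" on a 5x5 grid of coded factor levels
x1, x2 = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
x1, x2 = x1.ravel(), x2.ravel()
true_beta = np.array([2.0, 1.5, -0.5, 0.3, 0.8, -1.2])  # assumed coefficients
y = design_matrix(x1, x2) @ true_beta

# Least-squares fit recovers the response-surface coefficients
beta_hat, *_ = np.linalg.lstsq(design_matrix(x1, x2), y, rcond=None)
```

Once fitted, the model is evaluated over the factor space to locate conditions matching the sensory target values.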

Keywords: optimization, response surface methodology, texture profile analysis, conventional oven, modelling

Procedia PDF Downloads 433
5312 Beyond Empathy: From Justice to Reconciliation

Authors: Nissim Avissar

Abstract:

This paper aims to question the practice of bringing together people belonging to groups in conflict with the aim of bridging differences through universal empathy and interpersonal connections. It is argued that in cases where one group holds power and the other is struggling to change the balance, assuming universal equality between the groups and encouraging empathic understanding is itself a non-empathic practice. Accordingly, a new concept is posited: justice-sensitive empathy, which conditions empathy in such situations on the acknowledgement of an imbalance of power or injustice. With this reframing in mind, educational practices promoting social justice are discussed. In order to create conditions for justice-seeking or politically sensitive empathy, we need to go beyond the conventional definitions of empathy and offer other means and possibilities. Three possibilities are discussed. The first focuses on intra-group (as opposed to inter-group) processes within each group. It means temporary and tactical separation that may allow each group to focus on its own needs and values and perhaps to return to the dialogue more confidently. The second option emphasizes the notion of "constructive conflict," which means that each side still aspires to promote its own interests, but without demolishing the other side (which is a rival but also an unwanted and forced partner). Here, alongside the "obligation to resist" and to act to promote justice as we view and understand it, we have to take the other side into account. The third and last option relates to the practice of restorative justice. This practice originated in the Truth and Reconciliation committees in South Africa, but it is now widely used in other contexts. Those committees had the authority to punish (or pardon) people; however, their main purpose was to seek truth and, from there, nourish reconciliation. This is the main idea of restorative justice; it seeks justice for the sake of restoring relationships.
All the above options involve action and are aware of power relations (i.e., politics). They all seek justice. They may create conditions for the more conventional empathic practice to evolve, but no less than that, they are examples of justice-seeking and politically sensitive empathic practice.

Keywords: education, empathy, justice, reconciliation

Procedia PDF Downloads 97
5311 Exploring the Synergistic Effects of Aerobic Exercise and Cinnamon Extract on Metabolic Markers in Insulin-Resistant Rats through Advanced Machine Learning and Deep Learning Techniques

Authors: Masoomeh Alsadat Mirshafaei

Abstract:

The present study aims to explore the effect of an 8-week aerobic training regimen combined with cinnamon extract on serum irisin and leptin levels in insulin-resistant rats. Additionally, this research leverages various machine learning (ML) and deep learning (DL) algorithms to model the complex interdependencies between exercise, nutrition, and metabolic markers, offering a groundbreaking approach to obesity and diabetes research. Forty-eight Wistar rats were selected and randomly divided into four groups: control, training, cinnamon, and training plus cinnamon. The training protocol was conducted over 8 weeks, with sessions 5 days a week at 75-80% VO2 max. The cinnamon and training-cinnamon groups were injected with 200 ml/kg/day of cinnamon extract. Data analysis included serum data, dietary intake, exercise intensity, and metabolic response variables, with blood samples collected 72 hours after the final training session. The dataset was analyzed using one-way ANOVA (P<0.05) and fed into various ML and DL models, including Support Vector Machines (SVM), Random Forest (RF), and Convolutional Neural Networks (CNN). Traditional statistical methods indicated that aerobic training, with and without cinnamon extract, significantly increased serum irisin and decreased leptin levels. Among the algorithms, the CNN model performed best in identifying specific interactions between cinnamon extract concentration and exercise intensity, optimizing the increase in irisin and the decrease in leptin. The CNN model achieved an accuracy of 92%, outperforming the SVM (85%) and RF (88%) models in predicting the optimal conditions for metabolic marker improvements. The study demonstrated that advanced ML and DL techniques can uncover nuanced relationships and potential cellular responses to exercise and dietary supplements that are not evident through traditional methods.
These findings advocate for the integration of advanced analytical techniques in nutritional science and exercise physiology, paving the way for personalized health interventions in managing obesity and diabetes.
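The one-way ANOVA used as the traditional baseline can be sketched in a few lines; the group values in the usage example are illustrative, not the measured irisin/leptin data:

```python
# Minimal one-way ANOVA F statistic, the traditional test applied to the
# four treatment groups (P < 0.05) before the ML/DL modelling. Group
# values used for checking are illustrative placeholders.

def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2
                    for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

The resulting F value is compared against the F distribution's critical value at the chosen significance level (here P < 0.05).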

Keywords: aerobic training, cinnamon extract, insulin resistance, irisin, leptin, convolutional neural networks, exercise physiology, support vector machines, random forest

Procedia PDF Downloads 38
5310 Electric Arc Furnaces as a Source of Voltage Fluctuations in the Power System

Authors: Zbigniew Olczykowski

Abstract:

The paper presents the impact of electric arc furnace operation on the power grid. The arc furnace will be modeled under different power conditions of the steelworks. The paper will describe how to determine the increase in voltage fluctuations caused by arc furnaces working in parallel. An analysis will be carried out of the indicators characterizing power quality, recorded simultaneously over several measurement cycles at three points of the grid with different short-circuit power and different rated voltages. The measurements analyzed in this paper were conducted in the mains of a Polish steelworks. Measurements of power quality indices covered one-week measurement cycles in accordance with EN 50160. Data analysis will include the results obtained during the simultaneous measurement at the three grid points. This will determine the actual propagation of interference generated by the device. Based on the model studies and measurements of power quality indices, we will establish the effect of a specific arc furnace on the mains. The minimum short-circuit power of the network will also be estimated; this is necessary to limit the voltage fluctuations generated by arc furnaces.
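A first-order estimate of the voltage change an arc furnace imposes at the point of common coupling, and of the minimum short-circuit power needed to stay within a limit, can be sketched as follows; the MVA figures are illustrative assumptions, not the measured steelworks values:

```python
# First-order voltage-fluctuation screen for an arc furnace at the point
# of common coupling: relative voltage change d ~ S_furnace / S_sc, where
# S_sc is the network short-circuit power. A higher S_sc means a stiffer
# grid and smaller fluctuations. MVA figures below are illustrative.

def relative_voltage_change(s_furnace_mva, s_sc_mva):
    """Approximate relative voltage change caused by the furnace load."""
    return s_furnace_mva / s_sc_mva

def min_short_circuit_power(s_furnace_mva, d_limit):
    """Minimum S_sc keeping the furnace within a voltage-change limit."""
    return s_furnace_mva / d_limit

d = relative_voltage_change(50.0, 2500.0)       # 50 MVA furnace, 2.5 GVA grid
s_sc_min = min_short_circuit_power(50.0, 0.02)  # 2% limit on voltage change
```

Parallel furnaces add their fluctuation contributions, which is why the combined effect measured at several grid points matters.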

Keywords: arc furnaces, long-term flicker, measurement and modeling of power quality, voltage fluctuations

Procedia PDF Downloads 290
5309 Remote Vital Signs Monitoring in Neonatal Intensive Care Unit Using a Digital Camera

Authors: Fatema-Tuz-Zohra Khanam, Ali Al-Naji, Asanka G. Perera, Kim Gibson, Javaan Chahl

Abstract:

Conventional contact-based vital sign monitoring sensors such as pulse oximeters or electrocardiogram (ECG) electrodes may cause discomfort, skin damage, and infections, particularly in neonates with fragile, sensitive skin. Therefore, remote monitoring of vital signs is desired in both clinical and non-clinical settings to overcome these issues. Camera-based vital sign monitoring is a recent technology for these applications with many positive attributes. However, there are still few camera-based studies on neonates in a clinical setting. In this study, the heart rate (HR) and respiratory rate (RR) of eight infants at the Neonatal Intensive Care Unit (NICU) in Flinders Medical Centre were remotely monitored using a digital camera, applying color- and motion-based computational methods. The region of interest (ROI) was efficiently selected by incorporating an image decomposition method. Furthermore, spatial averaging, spectral analysis, band-pass filtering, and peak detection were used to extract both HR and RR. The experimental results were validated against ground truth data obtained from an ECG monitor and showed a strong correlation, with Pearson correlation coefficients (PCC) of 0.9794 and 0.9412 for HR and RR, respectively. The RMSE between camera-based data and ECG data was 2.84 beats/min for HR and 2.91 breaths/min for RR. A Bland-Altman analysis of the data also showed close agreement between both data sets, with a mean bias of 0.60 beats/min and 1 breath/min, and limits of agreement of -4.9 to +6.1 beats/min and -4.4 to +6.4 breaths/min for HR and RR, respectively. Therefore, video camera imaging may replace conventional contact-based monitoring in the NICU and has potential applications in other contexts such as home health monitoring.
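The spectral-analysis step for HR extraction can be sketched on a synthetic signal; the frame rate and pulse frequency below are assumptions for illustration, not the clinical recordings:

```python
import numpy as np

# Sketch of the spectral-analysis step: estimate heart rate as the dominant
# frequency, within a physiological band, of a photoplethysmographic signal
# obtained by spatially averaging the camera ROI. The signal is synthetic.

def heart_rate_bpm(signal, fs, band=(0.7, 3.0)):
    """Dominant frequency in `band` [Hz] of `signal`, scaled to beats/min."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[in_band][np.argmax(spectrum[in_band])]

fs = 30.0                          # assumed camera frame rate [fps]
t = np.arange(0, 20, 1.0 / fs)     # 20 s of frames
rng = np.random.default_rng(0)
ppg = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.normal(size=t.size)
hr = heart_rate_bpm(ppg, fs)       # ~120 beats/min for a 2 Hz pulse
```

RR extraction follows the same pattern with a lower band (roughly 0.2 to 1 Hz) applied to the motion-based signal.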

Keywords: neonates, NICU, digital camera, heart rate, respiratory rate, image decomposition

Procedia PDF Downloads 104
5308 Myosin-Driven Movement of Nanoparticles: An Approach to High-Speed Tracking

Authors: Sneha Kumari, Ravi Krishnan Elangovan

Abstract:

This abstract describes the development of a high-speed tracking method based on modification of motor components for nanoparticle attachment. Myosin motors are nano-sized protein machines powering the movement that defines life. These miniature molecular devices serve as engines, utilizing chemical energy stored in ATP to produce useful mechanical energy in the form of displacement events of a few nanometres, leading to the force generation required for cargo transport, cell division, and cell locomotion, and translating to macroscopic movements like running. With the advent of the in vitro motility assay (IVMA), detailed functional studies of the actomyosin system could be performed. The major challenge with the currently available IVMA for tracking actin filaments is a resolution limit of ±50 nm. To overcome this, we are trying to develop a single-molecule IVMA in which a nanoparticle (GNP/QD) will be attached along, or on the barbed end of, actin filaments using the CapZ protein, with visualization by a compact TIRF module called 'cTIRF'. The waveguide-based illumination by cTIRF offers a unique separation of excitation and collection optics, enabling imaging by scattering without emission filters. This technology is thus well equipped to perform tracking with high precision at a temporal resolution of 2 ms, with the SNR improved 100-fold compared to conventional TIRF. Also, the nanoparticles (QD/GNP) attached to an actin filament act as a point source of light, offering ease of filament tracking compared to conventional manual tracking. Moreover, the attachment of cargo (QD/GNP) to the thin filament paves the way for various nanotechnological applications through its transportation to different predetermined locations on the chip.
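Treating the nanoparticle as a point source suggests sub-pixel localization by an intensity-weighted centroid, sketched here on a synthetic spot (the Gaussian parameters are illustrative, not cTIRF data):

```python
import numpy as np

# Sub-pixel localization of a nanoparticle that acts as a point source of
# light: intensity-weighted centroid over the image. A synthetic Gaussian
# spot stands in for the GNP/QD scattering signal.

def centroid(image):
    """Intensity-weighted centroid (row, col) of a 2D image."""
    total = image.sum()
    rows, cols = np.indices(image.shape)
    return (rows * image).sum() / total, (cols * image).sum() / total

# Synthetic spot centered at (row, col) = (12.3, 7.8) on a 32x32 frame
rr, cc = np.indices((32, 32))
spot = np.exp(-((rr - 12.3) ** 2 + (cc - 7.8) ** 2) / (2 * 1.5 ** 2))
r_est, c_est = centroid(spot)
```

Repeating this frame by frame yields a trajectory whose precision, unlike manual filament tracking, is limited by photon statistics rather than pixel size.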

Keywords: actin, cargo, IVMA, myosin motors and single-molecule system

Procedia PDF Downloads 87
5307 Lessons Learned in Developing a Clinical Information System and Electronic Health Record (EHR) System That Meet the End User Needs and State of Qatar's Emerging Regulations

Authors: Darshani Premaratne, Afshin Kandampath Puthiyadath

Abstract:

The Government of Qatar is taking active steps to improve the quality of the healthcare industry in the State of Qatar. In this initiative, the development and market introduction of a Clinical Information System and Electronic Health Record (EHR) system proved to be a highly challenging process. Together with an organization specialized in EHR system development, and with the blessing of the Health Ministry of Qatar, the process of introducing an EHR system into the Qatar healthcare industry was undertaken. Initially, a market survey was carried out to understand the requirements. Secondly, the available government regulations, needs, and possible upcoming regulations were carefully studied before deploying resources for software development. Sufficient flexibility was allowed to cater for changes both in the market and in the regulations. As the first initiative, a system was developed that enables integration of a referral network covering referral clinic and laboratory systems for all single-doctor (and small-scale) clinics. Bringing isolated single-doctor clinics all over the state into an integrated referral network, together with a referral hospital, needs a coherent steering force and a solid top-down framework. This paper discusses the lessons learned in developing the single-doctor referral network with an EHR system, obtaining the approval of the Health Ministry, and introducing it to the industry. It was concluded that development of this nature requires a continuous balance between market requirements and upcoming regulations. Accelerating development based on emerging needs; implementing based on end-user needs while complying with the regulations; diffusion and uptake of demand-driven and evidence-based products, tools, and strategies; and proper utilization of findings were found equally paramount to the successful development of the end product.
Development of a full-scale Clinical Information System and EHR system is underway based on the lessons learned.

Keywords: clinical information system, electronic health record, state regulations, integrated referral network of clinics

Procedia PDF Downloads 362
5306 Performance Evaluation of Wideband Code Division Multiple Access Network

Authors: Osama Abdallah Mohammed Enan, Amin Babiker A/Nabi Mustafa

Abstract:

The aim of this study is to evaluate and analyze different parameters of WCDMA (Wideband Code Division Multiple Access). Moreover, this study also incorporates a brief yet thorough analysis of WCDMA's components as well as its internal architecture. This study also examines different power controls. These include open loop power control, closed (inner) loop power control, and outer loop power control. Different handover techniques of WCDMA are also illustrated in this study, including hard handover, inter-system handover, and soft and softer handover. Different duplexing techniques are also described in the paper. This study also presents an idea of the parameters of WCDMA that lead the system towards QoS issues. This may help the operator in designing and developing an adequate network configuration. In addition, the study investigated various parameters, including bit energy per noise spectral density (Eb/No), noise rise, and bit error rate (BER). After simulating these parameters in the MATLAB environment, it was found that, for a given Eb/No value, the system capacity increases with the reuse factor. Besides that, it was also observed that noise rise decreases for lower data rates and lower interference levels. Finally, it was examined how the BER changes from one modulation technique to another.
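The BER behaviour discussed above can be illustrated with the textbook AWGN expression for coherent BPSK/QPSK; this is a standard formula used as a sketch, not the study's simulation code:

```python
import math

# Theoretical per-bit error rate for coherent BPSK (and Gray-coded QPSK)
# over an AWGN channel: BER = 0.5 * erfc(sqrt(Eb/No)). A sketch of how
# BER falls as Eb/No rises; other modulations follow different curves.

def ber_bpsk(ebno_db):
    ebno = 10.0 ** (ebno_db / 10.0)          # dB -> linear ratio
    return 0.5 * math.erfc(math.sqrt(ebno))

# BER over a 0-10 dB sweep of Eb/No
curve = {db: ber_bpsk(db) for db in range(0, 11, 2)}
```

Comparing such curves across modulation schemes is what drives the observation that BER depends on the modulation technique chosen.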

Keywords: duplexing, handover, loop power control, WCDMA

Procedia PDF Downloads 215
5305 Reliability Analysis of Variable Stiffness Composite Laminate Structures

Authors: A. Sohouli, A. Suleman

Abstract:

This study focuses on reliability analysis of variable stiffness composite laminate structures to investigate the potential structural improvement compared to conventional (straight-fiber) composite laminate structures. A computational framework was developed which consists of a deterministic design step and a reliability analysis. The optimization part uses Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) after applying the Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization method incorporates certain manufacturing constraints to attain industrial relevance. These manufacturing constraints are that the change of orientation between adjacent patches cannot be too large and that the maximum number of successive plies of a particular fiber orientation should not be too high. Variable stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality at good production rates. However, laps and gaps are the most important challenges in steering fibers and affect the performance of the structures. In this study, the optimal curved fiber paths at each layer of the composite are designed in the first step by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure for different standard deviations compared to the straight-fiber-angle composites. The random variables are the material properties and the loads on the structures. The results show that variable stiffness composite laminate structures are much more reliable, even for high standard deviations of the material properties, than conventional composite laminate structures. The reason is that variable stiffness composite laminates allow tailoring the stiffness and provide the possibility of adjusting the stress and strain distribution favorably in the structures.
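The Monte Carlo step can be sketched with a scalar limit state; the normal distributions below are illustrative stand-ins for the laminate's material-property and load variables, not the SRSM surrogate itself:

```python
import numpy as np

# Monte Carlo reliability sketch: sample a random capacity R (material
# property) and a random demand S (load), and estimate the probability of
# failure as the fraction of samples where the limit state g = R - S < 0.
# The distributions are illustrative assumptions.

rng = np.random.default_rng(42)
n = 200_000
strength = rng.normal(10.0, 1.0, n)   # capacity R, e.g. a stiffness margin
load = rng.normal(7.0, 1.0, n)        # demand S
p_fail = np.mean(strength - load < 0.0)

# Analytic check for this linear case: g ~ N(3, sqrt(2)),
# so P(g < 0) = Phi(-3/sqrt(2)), roughly 1.7%
```

In the framework above, the expensive structural model is replaced by the SRSM response surface so that such sample counts stay affordable.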

Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures

Procedia PDF Downloads 520
5304 Effect of Hybrid Fibers on Mechanical Properties in Autoclaved Aerated Concrete

Authors: B. Vijay Antony Raj, Umarani Gunasekaran, R. Thiru Kumara Raja Vallaban

Abstract:

Fibrous autoclaved aerated concrete (FAAC) is concrete containing fibrous material, which helps to increase its structural integrity compared to that of conventional autoclaved aerated concrete (CAAC). These short, discrete fibers are uniformly distributed and randomly oriented, which enhances the bond strength within the aerated concrete matrix. Conventional red-clay bricks create a larger environmental impact due to red soil depletion and also consume a large amount of construction time, whereas AAC blocks are larger, lighter, and environmentally friendly, and hence a viable replacement for red-clay bricks. Internal micro-cracks and corner cracks are the only disadvantages of conventional autoclaved aerated concrete; to resolve this particular issue, it is preferable to make use of fibers. These fibers are bonded together within the matrix, and they enable the aerated concrete to withstand considerable stresses, especially during the post-cracking stage. Hence, FAAC has the capability of enhancing the mechanical properties and energy absorption capacity of CAAC. In this research work, individual fibers like glass, nylon, polyester, and polypropylene are used; they generally reduce the brittle fracture of AAC. To study the fibers' surface topography and composition, SEM analysis is performed; then, to determine the composition of a specimen as a whole as well as of its individual components, EDAX mapping is carried out. An experimental approach was then performed to determine the effect of hybrid (multiple) fibers at various dosages (0.5%, 1%, 1.5%), with a curing temperature of 180-200°C maintained, on the mechanical properties of autoclaved aerated concrete. As an analytical part, the experimental results are compared with fuzzy logic predictions using MATLAB.

Keywords: fibrous AAC, crack control, energy absorption, mechanical properties, SEM, EDAX, MATLAB

Procedia PDF Downloads 269
5303 Understanding the Basics of Information Security: An Act of Defense

Authors: Sharon Q. Yang, Robert J. Congleton

Abstract:

Information security is a broad concept that covers any issues and concerns about the proper access and use of information on the Internet, including measures and procedures to protect intellectual property and private data from illegal access and online theft; the act of hacking; and any defensive technologies that contest such cybercrimes. As more research and commercial activities are conducted online, cybercrimes have increased significantly, putting sensitive information at risk. Information security has become critically important for organizations and private citizens alike. Hackers scan for network vulnerabilities on the Internet and steal data whenever they can. Cybercrimes disrupt our daily life, cause financial losses, and instigate fear in the public. Since the start of the pandemic, most data-related cybercrimes have targeted either financial or health information held by companies and organizations. Libraries should also have a high interest in understanding and adopting information security methods to protect their patron data and copyrighted materials. But according to information security professionals, higher education and cultural organizations, including their libraries, are the least prepared entities for cyberattacks. One recent example is that of Stevens Institute of Technology in New Jersey in the US, which had its network hacked in 2020, with the hackers demanding a ransom. As a result, the network of the college was down for two months, causing serious financial loss. There are other cases where libraries, colleges, and universities have been targeted for data breaches. In order to build an effective defense, we need to understand the most common types of cybercrimes, including phishing, whaling, social engineering, distributed denial of service (DDoS) attacks, malware, and ransomware, as well as hacker profiles.
Our research focuses on each hacking technique and related defense measures, and on the social background and motivations of hackers and hacking. Our research shows that hacking techniques will continue to evolve as new applications housing information and data on the Internet continue to be developed. Some cybercrimes can be stopped with effective measures, while others present challenges. It is vital that people understand what they face and the consequences of being unprepared.

Keywords: cybercrimes, hacking technologies, higher education, information security, libraries

Procedia PDF Downloads 134