Search results for: neuromorphic computing
426 Drug Delivery of Cyclophosphamide Functionalized Zigzag (8,0) CNT, Armchair (4,4) CNT, and Nanocone Complexes in Water
Authors: Morteza Keshavarz
Abstract:
In this work, using density functional theory (DFT), the thermodynamic stability and quantum molecular descriptors of cyclophosphamide (an anticancer drug)-functionalized zigzag (8,0) CNT, armchair (4,4) CNT, and nanocone complexes in water are considered for two attachment configurations, namely sidewall and tip. Calculation of the total electronic energy (Et) and binding energy (Eb) of all complexes indicates that the greatest thermodynamic stability belongs to the sidewall attachment of cyclophosphamide to the functionalized nanocone. On the other hand, results from chemical hardness show that the drug-functionalized zigzag (8,0) and armchair (4,4) complexes in the tip-attachment configuration possess the smallest and greatest chemical hardness, respectively. By computing the solvation energy, it is found that the dissolution of the drug and all complexes is spontaneous in water. Furthermore, chirality, the type of nanovector (nanotube or nanocone), and the attachment configuration have no effect on the solvation energy of the complexes.
Keywords: carbon nanotube, drug delivery, cyclophosphamide drug, density functional theory (DFT)
425 A Deletion-Cost Based Fast Compression Algorithm for Linear Vector Data
Authors: Qiuxiao Chen, Yan Hou, Ning Wu
Abstract:
The classic Douglas-Peucker Algorithm (DPA) has several deficiencies, such as a high risk of mistakenly deleting key nodes, high complexity, and relatively slow execution; to address these, a new Deletion-Cost Based Compression Algorithm (DCA) for linear vector data was proposed. For each curve, the basic element of linear vector data, the deletion costs of all its middle nodes were calculated, and the minimum deletion cost was compared with a pre-defined threshold. If the former was greater than or equal to the latter, all remaining nodes were reserved and the curve's compression was finished. Otherwise, the node with the minimal deletion cost was deleted, the deletion costs of its two neighbors were updated, and the same loop was repeated on the compressed curve until termination. Through several comparative experiments using different types of linear vector data, DPA and DCA were compared in terms of compression quality and computing efficiency. Experimental results showed that DCA outperformed DPA in both compression accuracy and execution efficiency.
Keywords: Douglas-Peucker algorithm, linear vector data, compression, deletion cost
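A minimal sketch of the loop described above, assuming the deletion cost of a middle node is its perpendicular distance to the chord joining its two neighbours (the abstract does not fix the cost metric, so this choice is illustrative; the paper also updates only the two neighbours' costs after each deletion, whereas this baseline simply recomputes them all):

```python
import math

def deletion_cost(prev, node, nxt):
    # Assumed cost: perpendicular distance of `node` to the chord
    # joining its two neighbours.
    (x1, y1), (x0, y0), (x2, y2) = prev, node, nxt
    dx, dy = x2 - x1, y2 - y1
    chord = math.hypot(dx, dy)
    if chord == 0.0:
        return math.hypot(x0 - x1, y0 - y1)
    return abs(dy * x0 - dx * y0 + x2 * y1 - y2 * x1) / chord

def dca_compress(curve, threshold):
    # Delete the cheapest middle node until the minimum deletion cost
    # reaches the threshold; the two endpoints are always reserved.
    pts = list(curve)
    while len(pts) > 2:
        costs = [deletion_cost(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        i_min = min(range(len(costs)), key=costs.__getitem__)
        if costs[i_min] >= threshold:
            break
        del pts[i_min + 1]
    return pts

print(dca_compress([(0, 0), (1, 0.05), (2, 0), (3, 1), (4, 0)], 0.2))
# -> [(0, 0), (2, 0), (3, 1), (4, 0)]
```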
424 Cybersecurity Protection Structures: The Case of Lesotho
Authors: N. N. Mosola, K. F. Moeketsi, R. Sehobai, N. Pule
Abstract:
The Internet brings increasing use of Information and Communications Technology (ICT) services and facilities. Consequently, new computing paradigms have emerged to provide services over the Internet. Although several benefits stem from these services, they also pose risks inherited from the Internet, for example, cybercrime, identity theft, and malware. To thwart these risks, this paper proposes a holistic approach involving multidisciplinary interactions. The paper proposes a top-down and bottom-up approach to deal with cybersecurity concerns in developing countries. These concerns span regulatory and legislative areas, cyber awareness, research and development, and technical dimensions. The main focus areas are highlighted, and a cybersecurity model solution is proposed. The paper concludes by combining all relevant solutions into a proposed cybersecurity model to assist developing countries in building a cyber-safe environment and promoting a culture of cybersecurity.
Keywords: cybercrime, cybersecurity, computer emergency response team, computer security incident response team
423 Experimental Study of Hyperparameter Tuning a Deep Learning Convolutional Recurrent Network for Text Classification
Authors: Bharatendra Rai
Abstract:
The sequence of words in text data has long-term dependencies and is known to suffer from vanishing gradient problems when deep learning models are developed. Although recurrent networks such as long short-term memory networks help to overcome this problem, achieving high text classification performance remains challenging. Convolutional recurrent networks, which combine the advantages of long short-term memory networks and convolutional neural networks, can be useful for improving text classification performance. However, arriving at suitable hyperparameter values for convolutional recurrent networks is still a challenging task, as fitting a model requires significant computing resources. This paper illustrates the advantages of using convolutional recurrent networks for text classification with the help of statistically planned computer experiments for hyperparameter tuning.
Keywords: long short-term memory networks, convolutional recurrent networks, text classification, hyperparameter tuning, Tukey honest significant differences
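A minimal Keras sketch of such a convolutional recurrent network: a Conv1D layer extracts local n-gram features, pooling shortens the sequence, and an LSTM captures the long-term dependencies. The sequence length, vocabulary size, and layer widths are placeholder hyperparameters, not values from the paper's experiments:

```python
import tensorflow as tf

MAXLEN, VOCAB, EMBED = 200, 20000, 128   # placeholder hyperparameters

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAXLEN,)),
    tf.keras.layers.Embedding(VOCAB, EMBED),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),  # local n-gram features
    tf.keras.layers.MaxPooling1D(4),                   # shorter sequence for the LSTM
    tf.keras.layers.LSTM(64),                          # long-term dependencies
    tf.keras.layers.Dense(1, activation="sigmoid"),    # binary text class
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```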
422 Performance Comparison of AODV and Soft AODV Routing Protocol
Authors: Abhishek, Seema Devi, Jyoti Ohri
Abstract:
A mobile ad hoc network (MANET) is a system of wireless mobile nodes that can self-organize freely and dynamically into an arbitrary and temporary network topology. Unlike in a wired network, a wireless network interface has a limited transmission range. Routing is the task of forwarding data packets from a source to a given destination. The Ad-hoc On-demand Distance Vector (AODV) routing protocol creates a path to a destination only when it is required. This paper describes the implementation of the AODV routing protocol using the MATLAB-based Truetime simulator. In MANETs, node movements are not fixed but random in nature. Hence, intelligent techniques, i.e., fuzzy logic and ANFIS, are used to optimize the transmission range. In this paper, we compare the transmission range of AODV, fuzzy AODV, and ANFIS AODV. For soft computing AODV, we take transmitted power and receiver threshold as inputs and transmission range as output. ANFIS gives better results than fuzzy AODV.
Keywords: ANFIS, AODV, fuzzy, MANET, reactive routing protocol, routing protocol, truetime
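For context, the crisp mapping that the fuzzy and ANFIS controllers adapt can be sketched by inverting a log-distance path-loss model: given the transmitted power and receiver threshold, solve for the distance at which received power falls to the threshold. The path-loss exponent and reference loss below are illustrative assumptions, not parameters from the paper:

```python
def transmission_range(pt_dbm, pr_thresh_dbm, n=2.0, d0=1.0, pl_d0=40.0):
    # Pr(d) = Pt - PL(d0) - 10 n log10(d / d0); solve Pr(d) = threshold.
    return d0 * 10 ** ((pt_dbm - pl_d0 - pr_thresh_dbm) / (10.0 * n))

# 20 dBm transmitter, -85 dBm receiver threshold -> about 1.8 km here
print(transmission_range(20.0, -85.0))
```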
421 SAP: A Smart Amusement Park System for Tourist Services
Authors: Pei-Chun Lee, Sheng-Shih Wang, Pei-Hsuan Ku
Abstract:
Many existing amusement parks operate with the assistance of a variety of information and communications technologies to design friendly and efficient service systems for tourists. However, these systems leave various levels of decisions for tourists to make by themselves. This puts pressure on tourists and thereby brings a negative experience to their tour. This paper proposes a smart amusement park system that offers each tourist a GPS-based customized plan without requiring the tourist to make decisions alone. The proposed system consists of the mobile app subsystem, the central subsystem, and the detecting/counting subsystem. The mobile app subsystem interacts with the central subsystem, which performs the necessary computing and database management for the proposed system. The detecting/counting subsystem detects and counts the visitors at an attraction. Experimental results show that the proposed system not only works well but also provides an innovative business operating model for owners of amusement parks.
Keywords: amusement park, location-based service, LBS, mobile app, tourist service
420 Analysis of Fault Tolerance on Grid Computing in Real Time Approach
Authors: Parampal Kaur, Deepak Aggarwal
Abstract:
In a computational grid, fault tolerance is an imperative issue to be considered during job scheduling. Due to the widespread use of resources, systems are highly prone to errors and failures. Hence, fault tolerance plays a key role in the grid to avoid the problem of unreliability. Scheduling a task to the appropriate resource is a vital requirement in a computational grid. The fittest-resource scheduling algorithm searches for the appropriate resource based on the job requirements, in contrast to general scheduling algorithms, where jobs are scheduled to the resources with the best performance factor. The proposed method improves the fault tolerance of the fittest-resource scheduling algorithm by scheduling the job in coordination with job replication when the resource has low reliability. Based on its reliability index, a resource is identified as critical, and tasks are scheduled according to the criticality of the resources. Results show that the execution time of the tasks is comparatively reduced with the proposed algorithm using a real-time approach rather than a simulator.
Keywords: computational grid, fault tolerance, task replication, job scheduling
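A small sketch of the scheduling idea, assuming "fittest" means the resource whose capacity most closely matches the job's demand and that a fixed reliability threshold marks a resource as critical (both assumptions; the abstract does not detail the reliability index or replication policy):

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    capacity: float      # performance the resource can offer
    reliability: float   # reliability index in [0, 1]

def schedule(job_demand, resources, rel_threshold=0.7):
    # Fittest resource: the smallest capacity that still satisfies the
    # job's demand (not simply the fastest machine). Replicate the job
    # when the chosen resource is critical, i.e. low reliability.
    fittest = min((r for r in resources if r.capacity >= job_demand),
                  key=lambda r: r.capacity - job_demand)
    replicas = 2 if fittest.reliability < rel_threshold else 1
    return fittest, replicas

resources = [Resource("r1", 4.0, 0.95), Resource("r2", 2.5, 0.60)]
print(schedule(2.0, resources))   # -> r2 chosen as fittest, 2 replicas
```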
419 Exploring Cybercrimes and Major Security Breaches: Assessing the Broader Fiscal Impact on Nigeria
Authors: Washima Tuleun
Abstract:
Cybercrime is a global concern, and Nigeria is not immune to its effects. This paper investigates cybercrimes and significant cyber-attacks that have targeted businesses and institutions in Nigeria, examining their various forms and the financial and economic impacts they have on individuals, businesses, and the nation as a whole. As technological advancements rapidly evolve and online services gain widespread adoption, there has been a corresponding rise in cyber-related attacks. These attacks often target personal data, exploit system vulnerabilities, and result in the theft of sensitive information, leading to financial losses, reputational damage, and broader impacts on organizations. The study conducts a thorough review of existing literature, case studies, and statistical data to provide a comprehensive understanding of Nigeria's cybercrime landscape. Additionally, it assesses the efforts by both the government and the private sector to address these challenges and offers recommendations for more effective strategies to mitigate and reduce their impact.
Keywords: cybersecurity, telecommunications engineering, information technology, threat intelligence, vulnerability management, computing
418 Assessing the Effectiveness of Machine Learning Algorithms for Cyber Threat Intelligence Discovery from the Darknet
Authors: Azene Zenebe
Abstract:
Deep learning is a subset of machine learning that incorporates techniques for the construction of artificial neural networks and has been found useful for modeling complex problems with large datasets. Deep learning requires very high computational power and long training times. By aggregating computing power, high-performance computing (HPC) has emerged as an approach to solving advanced problems and performing data-driven research activities. Cyber threat intelligence (CTI) is actionable information or insight an organization or individual uses to understand the threats that have targeted, will target, or are currently targeting the organization. Results of a literature review are presented, along with results of an experimental study that compares the performance of tree-based and function-based machine learning, including deep learning algorithms, using a secondary dataset collected from the darknet.
Keywords: deep learning, cyber security, cyber threat modeling, tree-based machine learning, function-based machine learning, data science
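A sketch of the kind of comparison described, using scikit-learn with a synthetic dataset standing in for the darknet CTI data (which is not reproduced here); the chosen models are illustrative representatives of the tree-based and function-based families, not the paper's exact algorithm set:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the (secondary) darknet dataset.
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

models = {
    "tree-based (random forest)": RandomForestClassifier(random_state=0),
    "function-based (logistic)": LogisticRegression(max_iter=1000),
    "function-based (shallow net)": MLPClassifier(max_iter=1000,
                                                  random_state=0),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```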
417 Ethical Perspectives on Implementation of Computer Aided Design Curriculum in Architecture in Nigeria: A Case Study of Chukwuemeka Odumegwu Ojukwu University, Uli
Authors: Kelechi Ezeji
Abstract:
The use of Computer Aided Design (CAD) technologies has become pervasive in the Architecture, Engineering and Construction (AEC) industry. This has led to its inclusion as an important part of the training module in the curriculum for architecture schools in Nigeria. This paper examines the ethical questions that arise in the implementation of the CAD content of the curriculum for architectural education. Using existing literature, it begins this scrutiny with the propriety of including CAD in the education of the architect and the obligations of the different stakeholders in the implementation process. It also examines the questions raised by the negative use of computing technologies, as well as the perceived negative influence of CAD on design creativity. Survey methodology was employed to gather data from the Department of Architecture, Chukwuemeka Odumegwu Ojukwu University, Uli, which serves as a case study of how the issues raised are being addressed. The paper draws conclusions on what will make for successful ethical implementation.
Keywords: computer aided design, curriculum, education, ethics
416 A Parallel Implementation of Artificial Bee Colony Algorithm within CUDA Architecture
Authors: Selcuk Aslan, Dervis Karaboga, Celal Ozturk
Abstract:
The Artificial Bee Colony (ABC) algorithm is one of the most successful swarm intelligence based metaheuristics. It has been applied to a number of constrained and unconstrained numerical and combinatorial optimization problems. In this paper, we present a parallelized version of the ABC algorithm, adapting the employed and onlooker bee phases to the Compute Unified Device Architecture (CUDA) platform, a graphics processing unit (GPU) programming environment by NVIDIA. The execution speed and results of the proposed approach and the sequential version of the ABC algorithm are compared on functions that are typically used as benchmarks for optimization algorithms. Tests on standard benchmark functions with different colony sizes and numbers of parameters showed that the proposed parallelization decreases the total execution time consumed by the employed and onlooker bee phases and achieves similar or better quality of results compared to the standard sequential implementation of the ABC algorithm.
Keywords: Artificial Bee Colony algorithm, GPU computing, swarm intelligence, parallelization
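A sequential sketch of the two phases the paper parallelizes; in the CUDA version, each food source's update becomes a GPU thread. The onlooker phase is simplified here to a second greedy pass (the standard algorithm uses fitness-proportional selection), so this is a baseline illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                          # standard benchmark function
    return np.sum(x * x, axis=-1)

def bee_phase(food, fitness, f):
    # Each food source perturbs one random dimension toward a random
    # neighbour and keeps the move greedily. This per-source loop is
    # what maps to one CUDA thread per food source in the paper.
    n, d = food.shape
    for i in range(n):
        k = rng.choice([j for j in range(n) if j != i])
        j = rng.integers(d)
        cand = food[i].copy()
        cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
        c = f(cand)
        if c < fitness[i]:
            food[i], fitness[i] = cand, c

food = rng.uniform(-5, 5, size=(20, 10))   # 20 sources, 10 parameters
fitness = sphere(food)
for _ in range(100):
    bee_phase(food, fitness, sphere)       # employed phase
    bee_phase(food, fitness, sphere)       # onlooker phase (simplified)
print(fitness.min())
```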
Procedia PDF Downloads 378415 Analyze of Nanoscale Materials and Devices for Future Communication and Telecom Networks in the Gas Refinery
Authors: Mohamad Bagher Heidari, Hefzollah Mohammadian
Abstract:
New discoveries in materials on the nanometer length scale are expected to play an important role in addressing ongoing and future challenges in the field of communication. Devices and systems for ultra-high-speed short- and long-range communication links, portable and power-efficient computing devices, high-density memory and logic, ultra-fast interconnects, and autonomous and robust energy-scavenging devices for accessing ambient intelligence and needed information will critically depend on the success of next-generation emerging nanomaterials and devices. This article presents some exciting recent developments in nanomaterials that have the potential to play a critical role in the development and transformation of future intelligent communication and telecom networks in the gas refinery. The industry is benefiting from nanotechnology advances, with numerous applications including smarter sensors, logic elements, computer chips, memory storage devices, and optoelectronics.
Keywords: nanomaterial, intelligent communication, nanoscale, nanophotonic, telecom
Procedia PDF Downloads 333414 Comparison Between Genetic Algorithms and Particle Swarm Optimization Optimized Proportional Integral Derirative and PSS for Single Machine Infinite System
Authors: Benalia Nadia, Zerzouri Nora, Ben Si Ali Nadia
Abstract:
Among the many modern heuristic optimization methods, genetic algorithms (GA) and the particle swarm optimization (PSO) technique have been attracting a lot of interest. The GA has gained popularity in academia and industry mostly due to its simplicity, intuitiveness, and ability to solve the highly nonlinear mixed-integer optimization problems that are typical of complex engineering systems. The mechanics of the PSO methodology, a relatively recent heuristic search tool, are modeled after the swarming or cooperative behavior of biological groups. It is instructive to compare the performance of the two techniques, since they both aim to optimize a given objective function but use distinct computing methods. In this article, the PSO and GA optimization approaches are used for the parameter tuning of the power system stabilizer and the proportional-integral-derivative (PID) regulator. Load angle and rotor speed variations in the single machine infinite bus system are used to measure the performance of the suggested solutions.
Keywords: SMIB, genetic algorithm, PSO, transient stability, power system stabilizer, PID
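A minimal PSO sketch tuning (Kp, Ki, Kd) gains; the objective below uses a crude first-order plant as a stand-in for the SMIB model's load-angle and rotor-speed response, so the plant, bounds, and PSO coefficients are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(gains):
    # Placeholder objective: integral of squared error of a first-order
    # plant under PID control; the paper instead evaluates the SMIB
    # model's load-angle and rotor-speed deviations.
    kp, ki, kd = gains
    y = integ = J = 0.0
    prev_e, dt = 1.0, 0.01
    for _ in range(500):
        e = 1.0 - y                      # unit step reference
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        y += dt * (-y + u)               # plant: dy/dt = -y + u
        prev_e = e
        J += e * e * dt
        if abs(y) > 1e6:                 # unstable gains: penalize
            return 1e9
    return J

n, dim, w, c1, c2 = 20, 3, 0.7, 1.5, 1.5
x = rng.uniform(0, 10, (n, dim))
v = np.zeros((n, dim))
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
g = pbest[pcost.argmin()].copy()
for _ in range(50):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, 0, 10)
    c = np.array([cost(p) for p in x])
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    g = pbest[pcost.argmin()].copy()
print("tuned (Kp, Ki, Kd):", g)
```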
Procedia PDF Downloads 82413 A Study on Application of Elastic Theory for Computing Flexural Stresses in Preflex Beam
Authors: Nasiri Ahmadullah, Shimozato Tetsuhiro, Masayuki Tai
Abstract:
This paper presents the step-by-step procedure for using elastic theory to calculate the internal stresses in composite bridge girders prestressed by the preflexing technology, called Prebeam in Japan and preflex beam worldwide. Elastic theory approaches preflex beams the same way as it does conventional composite girders. Since a preflex beam undergoes different stages of construction, calculations are made using different sectional and material properties. Stresses are calculated at every stage using the properties of the specific section, and stress accumulation gives the available stress in a section of interest. Concrete presence in the section implies prestress loss due to creep and shrinkage; however, more work is required in this field. In addition to the graphical presentation of this application, this paper discusses important notes from a graphical comparison between the results of an experimental-only study carried out on a preflex beam and the results of a simulation based on the elastic theory approach for an identical beam using Finite Element Modeling (FEM) by the author.
Keywords: composite girder, Elastic Theory, preflex beam, prestressing
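A tiny sketch of the stage-wise accumulation described above: each construction stage contributes a bending stress M*y/I computed with the section properties active at that stage (steel only during preflexion, composite after the concrete hardens), and the contributions are summed at the fibre of interest. All stage values below are invented for illustration:

```python
def flexural_stress(stages):
    # sigma_total = sum_i M_i * y_i / I_i over construction stages,
    # each stage using its own section properties (I_i) and fibre
    # distance from the stage's neutral axis (y_i).
    return sum(M * y / I for (M, y, I) in stages)

stages = [
    (1.2e6, 0.45, 8.0e-3),    # preflexion moment on the bare steel girder
    (-1.2e6, 0.62, 2.1e-2),   # release after the bottom concrete hardens
    (0.9e6, 0.62, 2.1e-2),    # service load on the composite section
]
print(flexural_stress(stages) / 1e6, "MPa")
```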
Procedia PDF Downloads 279412 Distributed Perceptually Important Point Identification for Time Series Data Mining
Authors: Tak-Chung Fu, Ying-Kit Hung, Fu-Lai Chung
Abstract:
In the field of time series data mining, the concept of the Perceptually Important Point (PIP) identification process was first introduced in 2001. The process was originally developed for financial time series pattern matching and was then found suitable for time series dimensionality reduction and representation. Its strength lies in preserving the overall shape of the time series by identifying its salient points. With the rise of Big Data, time series data contributes a major proportion, especially data generated by sensors in Internet of Things (IoT) environments. Given the nature of PIP identification and its successful applications, it is worth further exploring the opportunity to apply PIP to time series 'Big Data'. However, the performance of PIP identification is usually considered the limiting factor when dealing with 'big' time series data. In this paper, two distributed versions of PIP identification based on the Specialized Binary (SB) Tree are proposed. The proposed approaches remove the bottleneck of running the PIP identification process on a standalone computer, and improvement in terms of speed is obtained by the distributed versions.
Keywords: distributed computing, performance analysis, Perceptually Important Point identification, time series data mining
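For reference, a sketch of the sequential baseline that the paper distributes: starting from the two endpoints, repeatedly add the point farthest from the chord between its neighbouring PIPs. Vertical distance is used here as the distance measure (one of several common PIP measures); the SB-Tree bookkeeping and the distribution itself are beyond this sketch:

```python
import numpy as np

def pip_identify(series, k):
    # Start with the endpoints, then greedily add the most salient point
    # until k PIPs are found.
    x = np.arange(len(series), dtype=float)
    pips = [0, len(series) - 1]
    while len(pips) < k:
        best_d, best_i = -1.0, None
        for a, b in zip(pips, pips[1:]):
            for i in range(a + 1, b):
                # vertical distance to the chord between PIPs a and b
                yhat = series[a] + (series[b] - series[a]) \
                       * (x[i] - x[a]) / (x[b] - x[a])
                d = abs(series[i] - yhat)
                if d > best_d:
                    best_d, best_i = d, i
        pips = sorted(pips + [best_i])
    return pips

ts = np.sin(np.linspace(0, 4 * np.pi, 200))
print(pip_identify(ts, 7))   # indices of 7 salient points
```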
Procedia PDF Downloads 433411 Intrusion Detection Based on Graph Oriented Big Data Analytics
Authors: Ahlem Abid, Farah Jemili
Abstract:
Intrusion detection has been the subject of numerous studies in industry and academia, but cybersecurity analysts always want greater precision and global threat analysis to secure their systems in cyberspace. To improve intrusion detection systems, visualising security events in the form of graphs and diagrams is important for improving the accuracy of alerts. In this paper, we propose an approach to an IDS based on cloud computing, big data techniques, and a machine learning graph algorithm that can detect different attacks in real time, as early as possible. We use the MAWILab intrusion detection dataset and choose Microsoft Azure as a unified cloud environment on which to load it. We implement the K2 algorithm, a graph-based machine learning algorithm, to classify attacks. Our system shows good performance due to the graph-based machine learning algorithm and the Spark structured streaming engine.
Keywords: Apache Spark Streaming, Graph, Intrusion detection, k2 algorithm, Machine Learning, MAWILab, Microsoft Azure Cloud
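A sketch of the streaming side of such a pipeline with PySpark Structured Streaming; the input path, record schema, and the placeholder scoring rule (standing in for the trained K2 graph model) are assumptions, not details from the paper:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ids-stream").getOrCreate()

# Read flow records as they arrive (schema and path are assumed).
flows = (spark.readStream
              .schema("src STRING, dst STRING, bytes LONG, label STRING")
              .json("/data/mawilab/stream/"))

# Placeholder rule where the trained K2 graph model would score flows.
alerts = flows.withColumn("suspicious", F.col("bytes") > 1_000_000)

query = (alerts.writeStream
               .outputMode("append")
               .format("console")
               .start())
query.awaitTermination()   # blocks; the stream runs until stopped
```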
Procedia PDF Downloads 146410 Increasing the System Availability of Data Centers by Using Virtualization Technologies
Authors: Chris Ewe, Naoum Jamous, Holger Schrödl
Abstract:
Like most entrepreneurs, data center operators pursue goals such as profit maximization, improvement of the company's reputation, or simply staying in the market. Part of those aims is to guarantee a given quality of service. Quality characteristics are specified in a contract called the service level agreement. A central part of this agreement is the non-functional properties of an IT service, and system availability is one of the most important of them, as this paper shows. To comply with availability requirements, data center operators can use virtualization technologies. However, a clear model for assessing the effect of virtualization functions on the parts of a data center in relation to system availability is still missing. This paper introduces a basic model that shows these connections and considers whether the identified effects are positive or negative. In doing so, the work also points out possible disadvantages of the technology. In consequence, the paper shows opportunities as well as risks of data center virtualization in relation to system availability.
Keywords: availability, cloud computing, IT service, quality of service, service level agreement, virtualization
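As a refresher on the quantity being modeled, a few lines computing steady-state availability and its composition under serial dependency and redundancy; the MTBF/MTTR figures are illustrative, not from the paper:

```python
def availability(mtbf_h, mttr_h):
    # Steady-state availability: A = MTBF / (MTBF + MTTR)
    return mtbf_h / (mtbf_h + mttr_h)

def series(*parts):      # all components must be up
    p = 1.0
    for a in parts:
        p *= a
    return p

def parallel(*parts):    # redundancy: up if any replica is up
    q = 1.0
    for a in parts:
        q *= (1.0 - a)
    return 1.0 - q

host = availability(2000, 8)          # illustrative figures
redundant = parallel(host, host)      # e.g. VM failover across two hosts
print(f"single host: {host:.5f}, redundant pair: {redundant:.7f}")
```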
Procedia PDF Downloads 536409 Renovation Planning Model for a Shopping Mall
Authors: Hsin-Yun Lee
Abstract:
In this study, the pedestrian simulation software VISWALK was integrated with a program implementing an ant algorithm to construct a renovation engineering schedule planning model. The simulation platform models the construction site; when the user runs the simulation, the walking delays that construction imposes on users are calculated, and the ant algorithm then searches for the schedule plan with the minimum delay time. The loss of business from deactivated floor area is also computed, and the best schedule plan is finally selected by weighing the two different positions of owners and users. To assess and validate its effectiveness, the model was applied to floor renovation engineering cases of a shopping mall. The cases show that the schedule plans produced by the proposed model can effectively reduce both the delay time and the mall's loss of business from users' walking, keeping the impact of renovation work on the building's operation to a minimum.
Keywords: pedestrian, renovation, schedule, simulation
408 NUX: A Lightweight Block Cipher for Security at Wireless Sensor Node Level
Authors: Gaurav Bansod, Swapnil Sutar, Abhijit Patil, Jagdish Patil
Abstract:
This paper proposes an ultra-lightweight cipher, NUX. NUX is a generalized Feistel network supporting 128/80-bit key lengths and a block length of 64 bits. For the 128-bit key length, NUX needs only 1022 GEs, which is less than existing cipher designs. The NUX design results in a small footprint area and minimal memory size. This paper presents a security analysis of the NUX design, showing the cipher's resistance against basic attacks like linear and differential cryptanalysis. An advanced attack, the biclique attack, is also mounted on the NUX design. Two different F functions in the NUX design result in a strong diffusion mechanism, which generates a large number of active S-boxes in a minimum number of rounds. NUX has 31 rounds in total. The NUX design is well suited for critical applications like smart grids, IoT, and wireless sensor networks, where memory size, footprint area, and power dissipation are the major constraints.
Keywords: lightweight cryptography, Feistel cipher, block cipher, IoT, encryption, embedded security, ubiquitous computing
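To make the structure concrete, here is a generic balanced Feistel network over a 64-bit block. NUX's actual two F functions, generalized Feistel layout, and key schedule are not given in the abstract, so the round function and keys below are toy placeholders, not the NUX specification:

```python
def feistel_encrypt(block64, round_keys, f):
    # Balanced Feistel: each round maps (L, R) -> (R, L XOR F(R, k));
    # the final swap is undone on output.
    left = (block64 >> 32) & 0xFFFFFFFF
    right = block64 & 0xFFFFFFFF
    for k in round_keys:             # NUX uses 31 rounds
        left, right = right, left ^ f(right, k)
    return (right << 32) | left

def toy_f(x, k):                     # placeholder round function
    x = (x + k) & 0xFFFFFFFF
    return ((x << 7) | (x >> 25)) & 0xFFFFFFFF

keys = [0x9E3779B9 * i & 0xFFFFFFFF for i in range(1, 32)]
print(hex(feistel_encrypt(0x0123456789ABCDEF, keys, toy_f)))
```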
407 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays
Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal
Abstract:
Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on the electronics used in space are much greater than on Earth. Thus, developing fault-tolerance techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault-tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance, and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is highly needed for trading off the overhead introduced by fault-tolerance techniques against system robustness. We study applications in which the exact final output value is not always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, our proposed method considers a threshold margin depending on the use-case application. Given this threshold margin, a failure occurs only when the difference between the erroneous output value and the expected output value exceeds the margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating ACMVF is implemented on the Zynq-7000 FPGA platform. This system uses the Single Event Mitigation (SEM) IP core to inject SEUs into the configuration memory bits of the target design implemented on the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is considered acceptable, the counted number of failures is reduced by 41% to 59% compared with the number counted by conventional vulnerability factor calculation. This means that the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% in the case that 10% deviation is acceptable in output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
Keywords: fault tolerance, FPGA, single event upset, approximate computing
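The metric itself reduces to a small computation: the failure ratio under a tolerance margin. A sketch with invented fault-injection results for a 32-bit adder (the real campaign uses the SEM IP core on the Zynq-7000):

```python
def acmvf(expected, observed_outputs, margin_frac):
    # A SEU injection counts as a failure only when the output deviates
    # from the expected value by more than the threshold margin.
    margin = abs(expected) * margin_frac
    failures = sum(1 for out in observed_outputs
                   if abs(out - expected) > margin)
    return failures / len(observed_outputs)

expected = 1000   # invented adder result and injection outcomes
observed = [1000, 1001, 996, 1500, 1023, 1000, 640, 1000, 1004, 1100]
print(acmvf(expected, observed, 0.00))  # conventional: any deviation fails
print(acmvf(expected, observed, 0.10))  # ACMVF with a 10% margin
```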
406 Advanced Digital Manufacturing: Case Study
Authors: Abdelrahman Abdelazim
Abstract:
Most industries are looking for technologies that are easy to use, efficient, and fast. To achieve this, factories tend to use advanced systems that can turn complexity into simplicity and the rudimentary into the advanced. Cloud manufacturing is a new movement that aims to mirror and integrate cloud computing into manufacturing. Among the various advantages of cloud manufacturing are decreased human involvement and increased dependency on automated machines, which in turn decreases human error and increases efficiency. Reliable, high-performance processes with minimal errors are highly desired by today's manufacturers. At first glance it seems to be the best alternative; however, the implementation of a cloud system can be very challenging. This work investigates cloud manufacturing in detail and outlines its advantages and disadvantages by converting a local factory in Kuwait to a cloud-ready system. Initially, the flow of the factory's manufacturing process was analyzed, identifying the bottlenecks and illustrating how cloud manufacturing can eliminate them. Following this, an automation process was analyzed and implemented. A comparison between the process before and after the adaptation was carried out, showing the effects on cost, output, and efficiency.
Keywords: cloud manufacturing, automation, Kuwait industrial sector, advanced digital manufacturing
405 Analyzing the Quality of Cloud-Based E-Learning Systems on the Perception of the Learners and the Teachers
Authors: R. W. C. Devindi, S. M. Buddika Harshanath
Abstract:
E-learning is a widely used technology for learning in the modern world, and its popularity has increased considerably with the pandemic. E-learning systems usually require software as well as hardware resources, which most educational institutions find hard to afford; with massive user loads, they must also scale up their server-side resources. Therefore, cloud computing has been adopted to make e-learning systems more efficient. The researcher has analyzed the quality of e-learning systems as perceived by learners and teachers with the aid of hypothesis testing, and presents the results and discussion in this report, so that future research can take steps to further increase the quality of online learning systems. In the case of e-learning, quality assurance and cost effectiveness are essential, and a complex quality assurance system is used in the stated project. There are no well-defined standard evaluation measures in this field; as a result, accurately assessing an e-learning system's overall quality is challenging. The researcher has performed the analysis with the aid of standard methods and software.
Keywords: LMS–learning management system, SPSS–statistical package for social sciences (software), eigenvalue, hypothesis
404 Beyond Classic Program Evaluation and Review Technique: A Generalized Model for Subjective Distributions with Flexible Variance
Authors: Byung Cheol Kim
Abstract:
The Program Evaluation and Review Technique (PERT) is widely used for project management, but it struggles with subjective distributions, particularly because of its assumptions of constant variance and light tails. To overcome these limitations, we propose the Generalized PERT (G-PERT) model, which enhances PERT by incorporating variability in three-point subjective estimates. Our methodology extends the original PERT model to cover the full range of unimodal beta distributions, enabling the model to handle thick-tailed distributions and offering formulas for computing the mean and variance. This maintains the simplicity of PERT while providing a more accurate depiction of uncertainty. Our empirical analysis demonstrates that the G-PERT model significantly improves performance, particularly when dealing with heavy-tailed subjective distributions. In comparative assessments with alternative models such as triangular and lognormal distributions, G-PERT shows superior accuracy and flexibility. These results suggest that G-PERT offers a more robust solution for project estimation while retaining the user-friendliness of the classic PERT approach.
Keywords: PERT, subjective distribution, project management, flexible variance
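For context, the classic fixed-weight PERT formulas that G-PERT generalizes; the generalized formulas for the full unimodal beta family are in the paper and not reproduced here:

```python
def pert_estimates(a, m, b):
    # Classic PERT from a three-point (optimistic, most-likely,
    # pessimistic) judgment:
    #   mean = (a + 4m + b) / 6,  variance = ((b - a) / 6) ** 2
    # Note the constant-variance assumption: the spread depends only
    # on the range (b - a), which is what G-PERT relaxes.
    mean = (a + 4 * m + b) / 6.0
    var = ((b - a) / 6.0) ** 2
    return mean, var

print(pert_estimates(2.0, 5.0, 14.0))   # -> (6.0, 4.0)
```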
403 Spatial-Temporal Awareness Approach for Extensive Re-Identification
Authors: Tyng-Rong Roan, Fuji Foo, Wenwey Hseush
Abstract:
Recent developments in AI and edge computing play a critical role in capturing meaningful events, such as the detection of an unattended bag. One of the core problems is re-identification across multiple CCTVs: immediately after a meaningful event is detected, the objects related to it must be tracked and traced. In an extensive environment, the challenge becomes severe as the number of CCTVs increases substantially, making it difficult to achieve high accuracy while maintaining real-time performance. The algorithm that re-identifies cross-boundary objects for extensive tracking is referred to as Extensive Re-Identification, which emphasizes the issues arising from the complexity of a great number of CCTVs. The Spatial-Temporal Awareness approach challenges the conventional thinking and concept of operations, which is labor-intensive and time-consuming. The ability to perform Extensive Re-Identification through a multi-sensory network provides next-level insights, creating value beyond traditional risk management.
Keywords: long-short-term memory, re-identification, security critical application, spatial-temporal awareness
402 A Resource Optimization Strategy for CPU (Central Processing Unit) Intensive Applications
Authors: Junjie Peng, Jinbao Chen, Shuai Kong, Danxu Liu
Abstract:
Under traditional resource allocation strategies, the usage of resources on physical servers in a cloud data center is highly uncertain: assigning too few tasks wastes resources, while assigning too many causes overload. This is especially pronounced when the applications are of the same type, because of their shared resource preferences. Considering that CPU-intensive applications are one of the most common application types in the cloud, we studied an optimization strategy for CPU-intensive applications on the same server. We used resource preferences to analyze the case in which multiple CPU-intensive applications run simultaneously, and put forward a model that predicts the execution time of such co-running applications. Based on the prediction model, we propose a method to select the appropriate number of applications for a machine. Experiments show that the model predicts the execution time of CPU-intensive applications accurately. To improve the execution efficiency of applications, we further propose a priority-based scheduling model for CPU-intensive applications. Extensive experiments verify the validity of the scheduling model.
Keywords: cloud computing, CPU intensive applications, resource optimization, strategy
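A sketch of how such a prediction model can drive placement, under an assumed linear interference model; the paper fits its own model, so the model form, interference coefficient, and deadline here are illustrative only:

```python
def predicted_time(t_solo, n_coresident, alpha=0.15):
    # Assumed model: co-running n CPU-intensive applications stretches
    # each one's solo execution time linearly with the number of peers.
    return t_solo * (1.0 + alpha * (n_coresident - 1))

def apps_per_machine(t_solo, deadline, max_slots=32, alpha=0.15):
    # Largest number of co-resident applications whose predicted
    # execution time still meets the deadline.
    best = 0
    for n in range(1, max_slots + 1):
        if predicted_time(t_solo, n, alpha) <= deadline:
            best = n
    return best

print(apps_per_machine(t_solo=100.0, deadline=160.0))   # -> 5
```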
401 A Genre Analysis of University Lectures
Authors: Lee Kok Yueh, Fatin Hamadah Rahman, David Hassell, Au Thien Wan
Abstract:
This work reports on a genre-based study of lectures at a university in Brunei, Universiti Teknologi Brunei, to explore their communicative functions and gain insight into the discourse. It explores these in three different domains: Social Science, Engineering, and Computing. Audio recordings of 20 lectures from four lecturers were transcribed and analysed, with the duration of each lecture varying between 20 and 90 minutes. This qualitative study found patterns and functions of lectures similar to those reported in existing research, including greetings, housekeeping, and recaps of previous lectures in the lecture introductions. In the lecture content, comprehension checks and the use of examples or analogies are very prevalent. However, the use of examples depends largely on the lecture content: the more technical the content, the harder it was for lecturers to provide examples or analogies. Three functional moves are identified in the lecture conclusions (announcement, summary, and future plan), all of which are optional. Despite the relatively small sample size, the present study shows that lectures are interactive and that there are consistencies in lecture delivery in relation to communicative functions and the genre of lectures.
Keywords: communicative functions, genre analysis, higher education, lectures
400 Design and Development of Data Mining Application for Medical Centers in Remote Areas
Authors: Grace Omowunmi Soyebi
Abstract:
Data mining is the extraction of information from a large database that helps in predicting trends or behavior, thereby helping management make knowledge-driven decisions. A principal problem of most hospitals in rural areas is their reliance on a manual file management system for keeping records. A lot of time is wasted when a patient visits the hospital, possibly in an emergency, and the nurse or attendant has to search through voluminous files before the patient's file can be retrieved; meanwhile, something unexpected may happen to the patient. This data mining application is to be designed using a structured systems analysis and design method, which will support a well-articulated analysis of the existing file management system, a feasibility study, and proper documentation of the design and implementation of a computerized medical record system. This computerized system will replace the file management system and help to retrieve a patient's record easily, with increased data security, access to clinical records for decision-making, and a reduced waiting time before a patient is attended to.
Keywords: data mining, medical record system, systems programming, computing
399 Flexible Cities: A Multisided Spatial Application of Tracking Livability of Urban Environment
Authors: Maria Christofi, George Plastiras, Rafaella Elia, Vaggelis Tsiourtis, Theocharis Theocharides, Miltiadis Katsaros
Abstract:
The rapidly expanding urban areas of the world constitute a challenge in making the transition to 'the next urbanization', which will be defined by new analytical tools and new sources of data. This paper is about the production of a spatial application, 'FUMapp', in which space and its stimuli are available literally, in meters, but also abstractly, at a sensed level. While existing spatial applications typically focus on illustrations of the urban infrastructure, the suggested application goes beyond them: it investigates how our perception of the environment adapts to alterations of the built environment through the construction of a dataset of biophysical measurements (eye tracking, heart rate) and physical metrics (spatial characteristics, size of stimuli, rhythm of mobility). It explores the intersections between architecture, cognition, and computing where future design can be improved, and identifies the flexibility and livability of the 'available space' of specific examined urban paths.
Keywords: biophysical data, flexibility of urban, livability, next urbanization, spatial application
398 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network
Authors: Jia Xin Low, Keng Wah Choo
Abstract:
This paper presents an automatic normal/abnormal heart sound classification model developed with a deep learning algorithm. MITHSDB heart sound datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heartbeat, and each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to hand-craft classification features for a supervised machine learning algorithm; instead, the features are determined automatically through training on the provided time series. The results show that the prediction model provides reasonable and comparable classification accuracy despite its simple implementation. The approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g., remote PCG monitoring applications.
Keywords: convolutional neural network, discrete wavelet transform, deep learning, heart sound classification
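A minimal sketch of the per-beat pipeline: reshape each PCG segment into a square intensity matrix and feed it to a small CNN. The matrix size (64x64), network depth, and layer widths are assumptions, not the paper's configuration:

```python
import numpy as np
import tensorflow as tf

def to_square(segment, side=64):
    # Repeat/truncate the per-beat segment to a fixed length, then
    # reshape it into a square intensity matrix with one channel.
    seg = np.resize(np.asarray(segment, dtype="float32"), side * side)
    return seg.reshape(side, side, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs abnormal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```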
397 Research on the Aero-Heating Prediction Based on Hybrid Meshes and Hybrid Schemes
Authors: Qiming Zhang, Youda Ye, Qinxue Jiang
Abstract:
Accurate prediction of the external flowfield and the aero-heating at the wall of a hypersonic vehicle is crucial for aircraft design. Unstructured/hybrid meshes have more powerful advantages than structured meshes in terms of pre-processing, parallel computing, and mesh adaptation, so it is imperative to develop high-resolution numerical methods for calculating the aerothermal environment on unstructured/hybrid meshes. The inviscid flux scheme is one of the most important factors affecting the accuracy of heat flux calculation on unstructured/hybrid meshes. Here, a new hybrid flux scheme is developed, together with an approach for selecting the scheme by interface type: 1) the exact Riemann solution is used to calculate the flux on faces parallel to the wall; 2) the Steger-Warming (S-W) scheme is employed on the other interfaces to improve the stability of the numerical method. The computed heat flux fits experimental observations and shows little dependence on the grid, indicating great application prospects for unstructured/hybrid meshes.
Keywords: aero-heating prediction, computational fluid dynamics, hybrid meshes, hybrid schemes
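The selection rule itself is simple to express; a sketch, assuming "parallel to the wall" is decided by the angle between the face normal and the local wall normal, with an illustrative tolerance (the paper's actual selection criterion may differ):

```python
import numpy as np

def choose_flux(face_normal, wall_normal, angle_tol_deg=15.0):
    # A face parallel to the wall has its normal aligned with the wall
    # normal: exact Riemann there, Steger-Warming everywhere else.
    n = face_normal / np.linalg.norm(face_normal)
    w = wall_normal / np.linalg.norm(wall_normal)
    cos_tol = np.cos(np.radians(angle_tol_deg))
    return "exact_riemann" if abs(n @ w) >= cos_tol else "steger_warming"

print(choose_flux(np.array([0.0, 1.0]), np.array([0.0, 1.0])))  # exact_riemann
print(choose_flux(np.array([1.0, 0.1]), np.array([0.0, 1.0])))  # steger_warming
```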