Search results for: computer modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4080

570 Application of Computational Chemistry for Searching Anticancer Derivatives of 2-Phenazinamines as Bcr-Abl Tyrosine Kinase Inhibitors

Authors: Gajanan M. Sonwane

Abstract:

The computational studies on 2-phenazinamines and their protein targets have been carried out to design compounds with potential anticancer activity. The strategy of designing compounds possessing selectivity over a specific tyrosine kinase has been pursued through G-QSAR and molecular docking studies. The objective of this research has been to design newer 2-phenazinamine derivatives as Bcr-Abl tyrosine kinase inhibitors through G-QSAR and molecular docking studies, followed by wet-lab work and evaluation of their anticancer potential. Computational chemistry was performed using VLife MDS 4.3 and AutoDock 4.2, followed by wet-lab experiments to synthesize the 2-phenazinamine derivatives. The 2D chemical structures of the ligands were drawn in ChemDraw Ultra 8.0 and converted into 3D, and these were optimized using the semi-empirical method implemented in MOPAC. The protein structure was retrieved from the RCSB Protein Data Bank as a PDB file. Protein-ligand binding interactions were examined using PyMOL. The molecular properties of the designed compounds were predicted in silico using the OSIRIS Property Explorer. The parent compound 2-phenazinamine was synthesized by reduction of 2,4-dinitro-N-phenylbenzenamine in the presence of tin chloride, followed by cyclization in the presence of nitrobenzene and magnesium sulfate. Derivatization at the amino function of 2-phenazinamine was performed by treating the parent compound with various aldehydes in the presence of dicyclohexylcarbodiimide (DCC) and urea to afford 2-(2-chlorophenyl)-3-(phenazin-2-yl)thiazolidin-4-one. Thirty-nine novel 2-phenazinamine derivatives were synthesized and evaluated for antioxidant activity, antiproliferative activity on onion bulbs, and anticancer activity on cell lines, showing significant competition with the marketed blockbuster drug imatinib.
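
The ligand-preparation step described above (2D structures converted to 3D and energy-minimized before docking) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical analogue using the open-source RDKit toolkit rather than the ChemDraw/MOPAC/VLife workflow reported in the abstract; the SMILES string and output file name are placeholders.

```python
# Minimal sketch of ligand preparation (2D -> 3D -> minimized structure),
# using RDKit as an open-source stand-in for the ChemDraw/MOPAC workflow.
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "Nc1ccc2nc3ccccc3nc2c1"              # placeholder SMILES for 2-phenazinamine
mol = Chem.MolFromSmiles(smiles)
mol = Chem.AddHs(mol)                          # add explicit hydrogens
AllChem.EmbedMolecule(mol, randomSeed=42)      # generate a 3D conformer
AllChem.MMFFOptimizeMolecule(mol)              # force-field minimization (MMFF94)
Chem.MolToMolFile(mol, "ligand_3d.mol")        # save for a docking program such as AutoDock
```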

Keywords: computer-aided drug design, tyrosin kinases, anticancer, docking

Procedia PDF Downloads 140
569 An Analysis of the Causes of SMEs Failure in Developing Countries: The Case of South Africa

Authors: Paul Saah, Charles Mbohwa, Nelson Sizwe Madonsela

Abstract:

In the context of developing countries, this study examines a crucial component of economic development: the reasons behind the failure of small and medium-sized enterprises (SMEs). SMEs are acknowledged as essential drivers of economic expansion, job creation, and poverty alleviation in emerging countries. This research uses South Africa as a case study to evaluate the reasons why SMEs fail in developing nations. The study adopts a quantitative research methodology to investigate the complex causes of SME failure using statistical tools and reliability tests. To ensure the viability of data collection, a sample of 400 small business owners was chosen using a non-probability selection technique. A closed-ended questionnaire was the primary instrument used to obtain detailed information from the participants. Data were analysed and interpreted using software packages such as the Statistical Package for the Social Sciences (SPSS). According to the findings, the main reasons why SMEs fail in developing nations are a lack of strategic business planning, a lack of funding, poor management, a lack of innovation, a lack of business research, and a low level of education and training. The results show that SMEs can be sustainable and successful as long as they understand and incorporate the identified determinants of small business success into their daily operations. This implies that the more SMEs in developing countries implement these determinant factors in their business operations, the more likely the businesses are to succeed, and vice versa.
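
As one illustration of the reliability testing mentioned above, the internal consistency of a multi-item questionnaire scale is commonly summarized with Cronbach's alpha. The sketch below computes it directly with NumPy; the scores are hypothetical and merely stand in for the SPSS procedure used in the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                           # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert-scale responses from five respondents to four items
scores = np.array([[4, 5, 4, 5],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 4]])
print(round(cronbach_alpha(scores), 2))  # values above ~0.7 are usually taken as acceptable
```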

Keywords: failure, developing countries, SMEs, economic development, South Africa

Procedia PDF Downloads 77
568 3D Human Face Reconstruction in Unstable Conditions

Authors: Xiaoyuan Suo

Abstract:

3D object reconstruction is a broad research area within the computer vision field involving many stages and still-open problems. One of the existing challenges in this field lies with micromotion, such as facial expressions on the human or animal face. Similar literature in this field focuses on 3D reconstruction in stable conditions, such as from an existing image or photos taken in a rather static environment, while the purpose of this work is to discuss a flexible scan system using multiple cameras that can correctly reconstruct both stable and moving 3D objects, the human face with expressions in particular. Further, a mathematical model is proposed at the end of this paper to automate the 3D object reconstruction process. The reconstruction process takes several stages. Firstly, a set of simple 2D lines is projected onto the object, yielding a set of uneven curvy lines that represent the 3D numerical data of the surface. The lines and their shapes help to identify the object's 3D construction in pixels. With the two recorded angles and their distance from the camera, a simple mathematical calculation gives the resulting coordinate of each projected line in absolute 3D space. This proposed research will benefit many practical areas, including but not limited to biometric identification, authentication, cybersecurity, preservation of cultural heritage, drama acting (especially with rapid and complex facial gestures), and many others. Specifically, this work will (I) provide a brief survey of comparable techniques existing in this field; (II) discuss a set of specialized methodologies or algorithms for effective reconstruction of 3D objects; (III) implement and test the developed methodologies; (IV) verify the findings with data collected from experiments; and (V) conclude with lessons learned and final thoughts.
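
The "simple mathematical calculation" referred to above is essentially triangulation: given the baseline between two viewpoints and the two angles at which a projected point is observed, its position follows from plane geometry. The sketch below is an illustrative, simplified version that assumes coplanar cameras and angles measured from the baseline; it is not the authors' implementation.

```python
import math

def triangulate(baseline: float, alpha: float, beta: float):
    """Locate a point observed from two cameras separated by `baseline` (metres).

    alpha, beta: angles (radians) between the baseline and the rays from
    camera A (at the origin) and camera B (at x = baseline) to the point.
    Returns (x, z) in the plane containing both cameras and the point.
    """
    ta, tb = math.tan(alpha), math.tan(beta)
    x = baseline * tb / (ta + tb)   # from z = x*tan(alpha) = (baseline - x)*tan(beta)
    z = x * ta                      # depth of the point
    return x, z

print(triangulate(0.5, math.radians(60), math.radians(70)))
```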

Keywords: 3D photogrammetry, 3D object reconstruction, facial expression recognition, facial recognition

Procedia PDF Downloads 150
567 Computer-Aided Detection of Liver and Spleen from CT Scans Using the Watershed Algorithm

Authors: Belgherbi Aicha, Bessaid Abdelhafid

Abstract:

In recent years, a great deal of research work has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm proceeds in two parts. In the first, we determine the region of interest by applying morphological operations to extract the liver and spleen. The second step consists of improving the quality of the image gradient; here, we propose a method for improving the image gradient to reduce the over-segmentation problem by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a method for semi-automatic segmentation of the liver and spleen based on the watershed algorithm, to improve the accuracy and robustness of the segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented (liver and spleen) contours and the contours traced manually by radiological experts. Liver segmentation achieved a sensitivity of 96% and a specificity of 99%; spleen segmentation achieved similarly promising results, with a sensitivity of 95% and a specificity of 99%.
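
The gradient-improvement and watershed steps described above can be sketched with scikit-image. The snippet below is a generic 2D illustration (Gaussian smoothing, morphological opening of the gradient, marker-controlled watershed) rather than the authors' exact pipeline, and "ct_slice.png" and the threshold values are placeholders.

```python
import numpy as np
from skimage import io, filters, morphology, segmentation

# Placeholder input: one abdominal CT slice loaded as a grayscale image in [0, 1].
ct = io.imread("ct_slice.png", as_gray=True)

smoothed = filters.gaussian(ct, sigma=2)                     # spatial filter to reduce noise
gradient = filters.sobel(smoothed)                           # image gradient
gradient = morphology.opening(gradient, morphology.disk(3))  # morphological filter to
                                                             # limit over-segmentation

# Markers: dark background vs. bright organ candidates (illustrative thresholds).
markers = np.zeros_like(ct, dtype=np.int32)
markers[smoothed < 0.1] = 1
markers[smoothed > 0.4] = 2

labels = segmentation.watershed(gradient, markers)           # marker-controlled watershed
organ_mask = labels == 2
print("segmented pixels:", int(organ_mask.sum()))
```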

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 325
566 The Effect of Photovoltaic Integrated Shading Devices on the Energy Performance of Apartment Buildings in a Mediterranean Climate

Authors: Jenan Abu Qadourah

Abstract:

With the depletion of traditional fossil resources and the growing human population, it is now more important than ever to reduce our energy usage and harmful emissions. In the Mediterranean region, the intense solar radiation contributes to summertime overheating, which raises energy costs and building carbon footprints, while at the same time making the region suitable for the installation of solar energy systems. In urban settings, where multi-story structures predominate and roof space is limited, photovoltaic integrated shading devices (PVSDs) are a clean solution for building designers. However, incorporating photovoltaic (PV) systems into a building's envelope is a complex procedure that, if not executed correctly, might result in the PV system failing. As a result, potential PVSD design solutions must be assessed on their overall energy performance from the project's early design stage. Therefore, this paper aims to investigate and compare the possible impact of various PVSDs on the energy performance of new apartments in the Mediterranean region, with a focus on Amman, Jordan. To achieve the research aim, computer simulations were performed to assess and compare the energy performance of different PVSD configurations. Furthermore, an energy index was developed by taking into account all energy aspects, including the building's primary energy demand and the PVSD systems' net energy production. According to the findings, the PVSD system can meet 12% to 43% of the apartment building's electricity needs. By highlighting the potential interest in PVSD systems, this study aids building designers in producing more energy-efficient buildings and encourages building owners to install PV systems on the façades of their buildings.
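
The overall energy index described above combines the building's primary energy demand with the PVSD system's net electricity production. A minimal sketch of such an index, under assumed definitions (it is not the authors' exact formulation, and the figures are hypothetical), is shown below.

```python
def overall_energy_index(primary_demand_kwh: float,
                         pvsd_production_kwh: float) -> dict:
    """Toy overall-energy index: net demand after PVSD production,
    plus the share of electricity needs covered by the PVSD system."""
    coverage = pvsd_production_kwh / primary_demand_kwh
    net_primary_energy = primary_demand_kwh - pvsd_production_kwh
    return {"coverage_%": round(100 * coverage, 1),
            "net_primary_energy_kWh": round(net_primary_energy, 1)}

# Hypothetical annual figures for one apartment
print(overall_energy_index(primary_demand_kwh=9500, pvsd_production_kwh=2800))
```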

Keywords: photovoltaic integrated shading device, solar energy, architecture, energy performance, simulation, overall energy index, Jordan

Procedia PDF Downloads 84
565 Designing an Editorialization Environment for Repeatable Self-Correcting Exercises

Authors: M. Kobylanski, D. Buskulic, P.-H. Duron, D. Revuz, F. Ruggieri, E. Sandier, C. Tijus

Abstract:

In order to design a cooperative e-learning platform, we observed teams of a Teacher [T], a Computer Scientist [CS] and an exercise programmer-designer [ED] cooperating on the conception of a self-correcting exercise, but without the use of such a device, in order to capture the kinds of interactions a useful platform might provide. To do so, we first ran a task analysis on how T, CS and ED should cooperate in order to achieve, at best, the task of creating and implementing self-directed, self-paced, repeatable self-correcting exercises (RSE) in the context of open educational resources. The formalization of the whole process was based on the “objectives, activities and evaluations” theory of educational task analysis. Second, using the resulting frame as a “how-to-do-it” guide, we ran a series of three contrasting hackathons of RSE production to collect data about the cooperative process that could later be used to design the collaborative e-learning platform. Third, we used two complementary methods to collect, code and analyze the survey data: the directional flow of interaction among T-CS-ED experts holding a functional role, and the Means-End Problem Solving analysis. Fourth, we listed the set of derived recommendations useful for the design of the exerciser as a cooperative e-learning platform. The final recommendations underline the necessity of building (i) an ecosystem that makes it possible to sustain teams of T-CS-ED experts, (ii) a data-safe platform that nevertheless offers accessibility and open discussion about the production of exercises with their resources, and (iii) an architecture allowing the inheritance of parts of the code of any exercise already in the database, as well as fast implementation of new kinds of exercises along with their associated learning activities.

Keywords: editorialization, open educational resources, pedagogical alignment, produsage, repeatable self-correcting exercises, team roles

Procedia PDF Downloads 122
564 The Role of Artificial Intelligence in Criminal Procedure

Authors: Herke Csongor

Abstract:

Artificial intelligence (AI) has been used in the United States of America in the decision-making processes of the criminal justice system for decades. In the field of law, including criminal law, AI can provide serious assistance in decision-making in many places. The paper reviews four main areas where AI already plays a role in the criminal justice system and where it is expected to play an increasingly important one. The first area is predictive policing: a number of algorithms are used to prevent the commission of crimes (by predicting potential crime locations or perpetrators). This may include so-called hot-spot analysis, crime linking and predictive coding. The second area is Big Data analysis: huge data sets are already opaque to human analysts and therefore unprocessable by hand. Law is one of the largest producers of digital documents (because not only decisions but nowadays the entire document material is available digitally), and this volume can only be handled with the help of computer programs, on which the development of AI systems can have an increasing impact. The third area is criminal statistical data analysis. The collection of statistical data using traditional methods required enormous human resources. AI is a huge step forward in that it can analyze the database itself: based on the requested aspects, a collection according to any criterion can be made available in a few seconds, and the AI can indicate if it finds an important connection from the point of view of either crime prevention or crime detection. Finally, the use of AI during decision-making in both investigative and judicial fields is analyzed in detail. While some are skeptical about the future role of AI in decision-making, many believe that the question is not whether AI will participate in decision-making, but only when and to what extent it will transform the current decision-making system.

Keywords: artificial intelligence, international criminal cooperation, planning and organizing of the investigation, risk assessment

Procedia PDF Downloads 38
563 Cyber Security and Risk Assessment of the e-Banking Services

Authors: Aisha F. Bushager

Abstract:

Today we are more exposed than ever to cyber threats and attacks at personal, community, organizational, national, and international levels. More aspects of our lives operate on computer networks simply because we are living in the fifth domain, called cyberspace. One of the most sensitive areas vulnerable to cyber threats and attacks is Electronic Banking (e-Banking), where the banking sector provides online banking services to its clients. To obtain clients' trust and encourage them to practice e-Banking, and also to maintain the services provided by the banks and ensure safety, cyber security and risk control should be given high priority in the e-banking area. The aim of the study is to carry out a risk assessment of e-banking services and to determine the cyber threats, cyber attacks, and vulnerabilities facing the e-banking area, specifically in the Kingdom of Bahrain. To collect relevant data, structured interviews were conducted with e-banking experts in different banks. The collected data were then used as input to the risk management framework provided by the National Institute of Standards and Technology (NIST), which was the model used in the study to assess the risks associated with e-banking services. The findings of the study showed that the most common cyber threats are human errors, technical software or hardware failures, and hackers, while the most common attacks facing the e-banking sector were phishing, malware attacks, and denial-of-service. The risks associated with the e-banking services were around the moderate level; however, more controls and countermeasures must be applied to maintain this level of risk. The results of the study will help banks discover their vulnerabilities and maintain their online services; in addition, they will enhance cyber security and contribute to the management and control of the risks facing the e-banking sector.
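
Risk-assessment frameworks such as NIST SP 800-30 typically combine a likelihood rating with an impact rating to place each threat on a qualitative scale. The sketch below shows one hypothetical way to score threats of the kind identified in the study; the ratings and thresholds are illustrative, not the study's data or the NIST tooling.

```python
# Illustrative qualitative risk scoring in the spirit of NIST SP 800-30:
# risk level derived from likelihood x impact, both rated on a 1-3 scale.
def risk_level(likelihood: int, impact: int) -> str:
    score = likelihood * impact
    if score <= 2:
        return "low"
    if score <= 6:
        return "moderate"
    return "high"

threats = {                      # hypothetical (likelihood, impact) ratings
    "phishing": (3, 2),
    "malware attack": (2, 3),
    "denial-of-service": (2, 2),
    "human error": (3, 2),
}
for name, (likelihood, impact) in threats.items():
    print(f"{name}: {risk_level(likelihood, impact)}")
```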

Keywords: cyber security, e-banking, risk assessment, threats identification

Procedia PDF Downloads 350
562 Design and Implementation of a Software Platform Based on Artificial Intelligence for Product Recommendation

Authors: Giuseppina Settanni, Antonio Panarese, Raffaele Vaira, Maurizio Galiano

Abstract:

Nowadays, artificial intelligence is used successfully in academia and industry for its ability to learn from large amounts of data. In particular, in recent years the use of machine learning algorithms in the field of e-commerce has spread worldwide. In this research study, a prototype software platform was designed and implemented in order to suggest to users the products most suitable for their needs. The platform includes a chatbot and a recommender system based on artificial intelligence algorithms that provide suggestions and decision support to the customer. Recommendation systems perform the important function of automatically filtering and personalizing information, thus helping to manage the information overload to which the user is exposed on a daily basis. Recently, international research has experimented with machine learning technologies with the aim of increasing the potential of traditional recommendation systems. Specifically, support vector machine algorithms have been implemented, combined with natural language processing techniques that allow users to interact with the system, express their requests and receive suggestions. An interested user can access the web platform using a computer, tablet or mobile phone, register, provide the necessary information and view the products that the system deems most appropriate for them. The platform also integrates a dashboard that gives intuitive and simple access to the various functions the platform is equipped with. The artificial intelligence algorithms have been implemented and trained on historical data collected from user browsing. Finally, the testing phase made it possible to validate the implemented model, which will be further tested by letting customers use it.
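
The combination of natural language processing and support vector machines mentioned above can be sketched with scikit-learn: free-text user requests are vectorized with TF-IDF and classified into a product category whose items can then be suggested. This is a minimal, hypothetical sketch with made-up training phrases, not the platform's actual code.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Hypothetical training data: user requests labelled with a product category.
requests = ["I need a light laptop for travel",
            "looking for wireless headphones",
            "a phone with a good camera",
            "noise cancelling earbuds for the gym",
            "gaming notebook with a fast GPU",
            "budget smartphone with long battery life"]
categories = ["laptop", "audio", "phone", "audio", "laptop", "phone"]

# TF-IDF text features feeding a linear support vector machine.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(requests, categories)

print(model.predict(["cheap phone with great battery"]))  # likely ['phone']
```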

Keywords: machine learning, recommender system, software platform, support vector machine

Procedia PDF Downloads 134
561 A Design for Customer Preferences Model by Cluster Analysis of Geometric Features and Customer Preferences

Authors: Yuan-Jye Tseng, Ching-Yen Chen

Abstract:

In the design cycle, a main design task is to determine the external shape of the product. The external shape of a product is one of the key factors that can affect customers' preferences, which are linked to the motivation to buy the product, especially in the case of a consumer electronic product such as a mobile phone. The relationship between the external shape and customer preferences needs to be studied to enhance the customer's purchase desire and action. In this research, a design for customer preferences model is developed for investigating the relationships between the external shape and the customer preferences of a product. In the first stage, the names of the geometric features are collected and evaluated from the data of specified internet web pages using the developed text miner. The key geometric features can be determined if their number of occurrences on the web pages is relatively high. For each key geometric feature, the numerical values are explored using the text miner to collect the internet data from the web pages. In the second stage, a cluster analysis model is developed to evaluate the numerical values of the key geometric features and divide the external shapes into several groups. Several design suggestion cases can then be proposed, for example, a large model, a mid-size model, and a mini model for designing a mobile phone. A customer preference index is developed by evaluating the numerical data of each of the key geometric features of the design suggestion cases. The design suggestion case with the top ranking of the customer preference index can be selected as the final design of the product. In this paper, an example product of a notebook computer is illustrated. It shows that the external shape of a product can be used to drive customer preferences. The presented design for customer preferences model is useful for determining a suitable external shape of the product to increase customer preferences.
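
The second-stage cluster analysis described above groups products by the numerical values of their key geometric features, and a customer preference index then ranks the resulting design suggestion cases. The sketch below uses k-means from scikit-learn on hypothetical feature values (screen size, thickness, weight) to illustrate the idea; it is not the authors' model, and the toy preference index is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical key geometric features mined from web pages:
# [screen size (inch), thickness (mm), weight (kg)] for several notebooks.
features = np.array([[13.3, 15.0, 1.2],
                     [14.0, 17.9, 1.4],
                     [15.6, 19.9, 1.8],
                     [17.3, 22.0, 2.5],
                     [13.0, 14.5, 1.1],
                     [15.6, 18.5, 1.9]])

scaled = StandardScaler().fit_transform(features)
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Toy customer preference index: thinner and lighter designs score higher.
preference_index = 1.0 / (features[:, 1] * features[:, 2])
for g in np.unique(groups):
    best = preference_index[groups == g].max()
    print(f"group {g}: best preference index = {best:.4f}")
```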

Keywords: cluster analysis, customer preferences, design evaluation, design for customer preferences, product design

Procedia PDF Downloads 191
560 Integration of an Augmented Reality System for the Visualization of the HRMAS NMR Analysis of Brain Biopsy Specimens Using the Brainlab Cranial Navigation System

Authors: Abdelkrim Belhaoua, Jean-Pierre Radoux, Mariana Kuras, Vincent Récamier, Martial Piotto, Karim Elbayed, François Proust, Izzie Namer

Abstract:

This paper proposes an augmented reality system dedicated to neurosurgery in order to assist the surgeon during an operation. This work is part of the ExtempoRMN project (funded by Bpifrance), which aims at analyzing, during a surgical operation, the metabolic content of tumoral brain biopsy specimens by HRMAS NMR. Patients affected by a brain tumor (glioma) frequently need to undergo an operation in order to remove the tumoral mass. During the operation, the neurosurgeon removes biopsy specimens using image-guided surgery. The biopsy specimens removed are then sent for HRMAS NMR analysis in order to obtain a better diagnosis and prognosis. Image-guided surgery refers to the use of MRI images and a computer to precisely locate and target a lesion (abnormal tissue) within the brain. This is performed using preoperative MRI images and the BrainLab neuro-navigation system. With the patient's MRI images loaded on the BrainLab Cranial neuro-navigation system in the operating theater, surgeons can better identify their approach before making an incision. The BrainLab neuro-navigation tool tracks the position of the instruments in real time and displays it on the patient's MRI data. The results of the biopsy analysis by 1H HRMAS NMR are then sent back to the operating theater and superimposed directly on the MRI images in the 3D localization system. The method we have developed to communicate between the HRMAS NMR analysis software and BrainLab makes use of a combination of C++, VTK and the Insight Toolkit using the OpenIGTLink protocol.

Keywords: neuro-navigation, augmented reality, biopsy, BrainLab, HR-MAS NMR

Procedia PDF Downloads 363
559 Preparation of Nano-Scaled LiNbO3 by the Polyol Method

Authors: Gabriella Dravecz, László Péter, Zsolt Kis

Abstract:

The growth of optical LiNbO3 single crystals and their physical and chemical properties are well known on the macroscopic scale. Nowadays, rare-earth-doped single crystals have become important for coherent quantum optical experiments: electromagnetically induced transparency, slowing down of light pulses, and coherent quantum memory. The expansion of applications increasingly requires the production of nanoscaled LiNbO3 particles. For example, rare-earth-doped nanoscaled particles of lithium niobate can act as single-photon sources, which can be the basis of a coding system for quantum computing that is completely inaccessible to outsiders. The polyol method is a chemical synthesis in which oxide formation occurs instead of hydroxide formation because of the high temperature. Moreover, the polyol medium limits the growth and agglomeration of the grains, producing particles with diameters of 30-200 nm. In this work, nanoscaled LiNbO3 was prepared by the polyol method. The starting materials (niobium oxalate and LiOH) were dissolved in H2O2. The mixture was then suspended in ethylene glycol and heated up to about the boiling point of the mixture with intensive stirring. After thermal equilibrium was reached, the mixture was kept at this temperature for 4 hours. The suspension was cooled overnight. The mixture was centrifuged and the particles were filtered. Dynamic Light Scattering (DLS) measurements were carried out, and the size of the particles was found to be 80-100 nm. This was confirmed by Scanning Electron Microscope (SEM) investigations. Elemental analysis by SEM showed a large amount of Nb in the sample. The production of LiNbO3 nanoparticles by the polyol method was successful. Agglomeration of the particles was avoided and a size of 80-100 nm could be reached.

Keywords: lithium-niobate, nanoparticles, polyol, SEM

Procedia PDF Downloads 134
558 Performance Comparison of Deep Convolutional Neural Networks for Binary Classification of Fine-Grained Leaf Images

Authors: Kamal KC, Zhendong Yin, Dasen Li, Zhilu Wu

Abstract:

Intra-plant disease classification based on leaf images is a challenging computer vision task due to similarities in the texture, color, and shape of leaves with only slight variations in leaf spots, and due to external environmental changes such as lighting and background noise. The deep convolutional neural network (DCNN) has proven to be an effective tool for binary classification. In this paper, two methods for binary classification of diseased plant leaves using DCNNs are presented: models created from scratch and transfer learning. Our main contribution is a thorough evaluation of 4 networks created from scratch and transfer learning of 5 pre-trained models. Training and testing of these models were performed on a plant leaf image dataset belonging to 16 distinct classes, containing a total of 22,265 images from 8 different plants, each consisting of a pair of healthy and diseased leaf classes. We introduce a deep CNN model, Optimized MobileNet. This model, with depthwise separable convolutions as a building block, attained an average test accuracy of 99.77%. We also present a fine-tuning method by introducing the concept of a convolutional block, which is a collection of different deep neural layers. Fine-tuned models proved to be efficient in terms of accuracy and computational cost. Fine-tuned MobileNet achieved an average test accuracy of 99.89% on 8 pairs of [healthy, diseased] leaf image sets.
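
The transfer-learning variant described above (a pre-trained MobileNet backbone, built from depthwise separable convolutions, with a new binary classification head) can be sketched in Keras as follows. This is a generic illustration with assumed image size and layer choices, not the authors' Optimized MobileNet or their fine-tuning recipe.

```python
import tensorflow as tf

# MobileNet backbone (depthwise separable convolutions), pre-trained on ImageNet.
base = tf.keras.applications.MobileNet(include_top=False,
                                        weights="imagenet",
                                        input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for plain transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # healthy vs. diseased leaf
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # image datasets assumed
```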

Keywords: deep convolutional neural network, depthwise separable convolution, fine-grained classification, MobileNet, plant disease, transfer learning

Procedia PDF Downloads 186
557 Ebola Virus Glycoprotein Inhibitors from Natural Compounds: Computer-Aided Drug Design

Authors: Driss Cherqaoui, Nouhaila Ait Lahcen, Ismail Hdoufane, Mehdi Oubahmane, Wissal Liman, Christelle Delaite, Mohammed M. Alanazi

Abstract:

The Ebola virus is a highly contagious and deadly pathogen that causes Ebola virus disease. The Ebola virus glycoprotein (EBOV-GP) is a key factor in viral entry into host cells, making it a critical target for therapeutic intervention. Using a combination of computational approaches, this study focuses on the identification of natural compounds that could serve as potent inhibitors of EBOV-GP. The 3D structure of EBOV-GP was selected, with missing residues modeled, and this structure was minimized and equilibrated. Two large natural compound databases, COCONUT and NPASS, were chosen and filtered based on toxicity risks and Lipinski’s Rule of Five to ensure drug-likeness. Following this, a pharmacophore model, built from 22 reported active inhibitors, was employed to refine the selection of compounds with a focus on structural relevance to known Ebola inhibitors. The filtered compounds were subjected to virtual screening via molecular docking, which identified ten promising candidates (five from each database) with strong binding affinities to EBOV-GP. These compounds were then validated through molecular dynamics simulations to evaluate their binding stability and interactions with the target. The top three compounds from each database were further analyzed using ADMET profiling, confirming their favorable pharmacokinetic properties, stability, and safety. These results suggest that the selected compounds have the potential to inhibit EBOV-GP, offering new avenues for antiviral drug development against the Ebola virus.
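
The drug-likeness filter based on Lipinski's Rule of Five described above can be sketched with RDKit: molecules violating more than one of the four rules are discarded before pharmacophore matching and docking. The example SMILES is a placeholder, and the snippet is an illustration rather than the authors' screening code.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles: str, max_violations: int = 1) -> bool:
    """Count Lipinski Rule-of-Five violations and keep near-compliant molecules."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    violations = sum([
        Descriptors.MolWt(mol) > 500,      # molecular weight
        Descriptors.MolLogP(mol) > 5,      # lipophilicity (logP)
        Lipinski.NumHDonors(mol) > 5,      # hydrogen-bond donors
        Lipinski.NumHAcceptors(mol) > 10,  # hydrogen-bond acceptors
    ])
    return violations <= max_violations

print(passes_rule_of_five("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin, used here as a placeholder
```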

Keywords: EBOV-GP, Ebola virus glycoprotein, high-throughput drug screening, molecular docking, molecular dynamics, natural compounds, pharmacophore modeling, virtual screening

Procedia PDF Downloads 22
556 Using Printouts as Social Media Evidence and Its Authentication in the Courtroom

Authors: Chih-Ping Chang

Abstract:

Different from traditional objective evidence, social media evidence has its own characteristics: it is easily tampered with, it is recoverable, and it cannot be read without other devices (such as a computer). The original identity of a simple screenshot taken from a social network site must therefore be questioned. When the police search and seize digital information, a common practice is to directly print out the digital data obtained and ask the parties present to sign the printouts, without taking the original digital data back. In addition to the issue of original identity, this way of obtaining evidence may have two further consequences. First, it invites the allegation that the evidence was tampered with, namely that the police wanted to frame the suspect and falsified the evidence. Second, it is not easy to discover hidden information. The core evidence associated with a crime may not appear in the contents of files; by examining the original file, data related to it, such as the original producer, creation time, modification date, and even GPS location, can be revealed from hidden information. Therefore, how to present this kind of evidence in the courtroom is arguably the most important task in ruling on social media evidence. This article will first introduce forensic software, such as EnCase, TCT and FTK, and analyze how it can prove the identity of digital data. Turning back to the court, the second part of this article will discuss the legal standard for the authentication of social media evidence and the application of such forensic software in the courtroom. As a conclusion, this article provides a rethinking of what kind of authenticity this rule of evidence is pursuing. Does the legal system automatically operate the transcription of scientific knowledge? Or does it, furthermore, seek to better render justice, not only on the basis of scientific fact but through multivariate debate?
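
One concrete way forensic tools establish that a printout or copy corresponds to a specific digital original is by comparing cryptographic hash values of the underlying files. The sketch below illustrates the idea with Python's standard hashlib; the file paths are placeholders, and this is not a description of EnCase or FTK internals.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder paths: the seized original and the copy tendered in court.
original = sha256_of("seized/post_export.html")
exhibit = sha256_of("exhibit/post_export.html")
print("identical" if original == exhibit else "files differ")
```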

Keywords: federal rule of evidence, internet forensic, printouts as evidence, social media evidence, United States v. Vayner

Procedia PDF Downloads 290
555 Meta Model for Optimum Design Objective Function of Steel Frames Subjected to Seismic Loads

Authors: Salah R. Al Zaidee, Ali S. Mahdi

Abstract:

Except for simple problems of statically determinate structures, optimum design problems in structural engineering have implicit objective functions, where structural analysis and design are essential within each searching loop. With these implicit functions, the structural engineer is usually forced to write his/her own computer code for analysis, design, and searching for the optimum design among many feasible candidates, and cannot take advantage of available software for structural analysis, design, and optimum searching. The meta-model is a regression model used to transform an implicit objective function into an explicit one, which in turn decouples the structural analysis and design processes from the optimum searching process. With the meta-model, well-known software for structural analysis and design can be used in sequence with optimum searching software. In this paper, the meta-model has been used to develop an explicit objective function for plane steel frames subjected to dead, live, and seismic forces. The frame topology is assumed to be predefined based on architectural and functional requirements. Column and beam sections and different connection details are the main design variables in this study. Columns and beams are grouped to reduce the number of design variables and to make the problem similar to that adopted in engineering practice. Data for the implicit objective function have been generated based on analysis and assessment of many design proposals with the CSI SAP software. These data have later been used in the SPSS software to develop a pure quadratic nonlinear regression model for the explicit objective function. Good correlations, with a coefficient R2 in the range from 0.88 to 0.99, have been noted between the original implicit functions and the corresponding explicit functions generated with the meta-model.
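
A quadratic regression meta-model of the kind described above can be sketched with scikit-learn: the design variables (e.g., grouped section sizes) are expanded into second-order terms and fitted to the structural-analysis results, with R2 measuring the fit. The data below are random placeholders standing in for the SAP-generated design proposals, and the full quadratic expansion (including interactions) is an assumption of the sketch.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Placeholder design variables (e.g., grouped column/beam section indices) and a
# placeholder implicit-objective value (e.g., frame weight) from analysis runs.
X = rng.uniform(1, 10, size=(60, 3))
y = 5 * X[:, 0]**2 + 3 * X[:, 1] * X[:, 2] + 2 * X[:, 0] + rng.normal(0, 2, 60)

# Quadratic meta-model: second-order terms followed by linear regression.
quad = PolynomialFeatures(degree=2, include_bias=False)
Xq = quad.fit_transform(X)
meta_model = LinearRegression().fit(Xq, y)

print("R^2 =", round(r2_score(y, meta_model.predict(Xq)), 3))
```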

Keywords: meta-model, objective function, steel frames, seismic analysis, design

Procedia PDF Downloads 243
554 Business Program Curriculum with Industry-Recognized Certifications: An Empirical Study of Exam Results and Program Curriculum

Authors: Thomas J. Bell III

Abstract:

Pursuing a business degree is fraught with perplexing questions regarding the rising cost of tuition and the immediate value of earning a degree. Any decision to pursue an undergraduate business degree is perceived to have value if it facilitates post-graduate job placement. Business programs lose value in the absence of innovation that closes the skills gap between recent graduates and employment opportunities. Industry-based certifications are seemingly becoming a key differentiator among job applicants. Texas Wesleyan University offers a Computer Information System (CIS) program with an innovative curriculum that integrates industry-recognized certification training into its traditional curriculum of core subjects and electives. This paper explores a culture of innovation in the CIS business program curriculum that creates sustainable stakeholder value for students, employers, the community, and the university. A quantitative research methodology surveying over one hundred students in the CIS program will be used to examine factors influencing the success or failure of students taking certification exams. Researchers will analyze control variables to identify specific correlations between practice exams, teaching pedagogy, study time, age, work experience, etc. The study compared various exam preparation techniques with the corresponding exam results across several industry certification exams. The findings will aid in understanding which control variables correlate positively and negatively with exam results. Such discovery may provide useful insight into pedagogical indicators that positively contribute to certification exam success and curriculum enhancement.

Keywords: taking certification exams, exam training, testing skills, exam study aids, certification exam curriculum

Procedia PDF Downloads 88
553 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models

Authors: Ethan James

Abstract:

Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model for analyzing these retinal images that provides a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a residual neural network architecture with cyclic pooling that was trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, therefore facilitating earlier treatment, which results in improved post-treatment outcomes.

Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina

Procedia PDF Downloads 181
552 Electronic Commerce in Georgia: Problems and Development Perspectives

Authors: Nika GorgoShadze, Anri Shainidze, Bachuki Katamadze

Abstract:

In parallel with the development of the digital economy worldwide, electronic commerce is also developing widely. The internet and ICT (information and communication technology) have created new business models and have promoted market consolidation, sustainability of the business environment, creation of the digital economy, facilitation of business and trade, business dynamism, higher competitiveness, etc. Electronic commerce relies on internet technology, with goods and services sold via the internet. Nowadays, electronic commerce is a field of business used very effectively by leading world brands. Research on the internet market in Georgia found that internet quality is high in Tbilisi and low in the regions. The Tbilisi internet market can be characterized as a high-speed, competitive and cost-effective internet market. The development of electronic commerce in Georgia faces organizational and methodological as well as legal problems. First of all, a legal framework should be developed that will regulate the responsibilities of organizations. The Ministry of Economy and Sustainable Development will play a crucial role in creating this legal framework. The Ministry of Justice will also be involved in this process, as well as the agency for data exchange. Measures should be taken in order to make electronic commerce in Georgia easier. Business companies may be offered a model for obtaining low-cost and comprehensive service. A service centre should be created that provides all kinds of online shopping. This will be a rather interesting innovation that will facilitate online shopping in Georgia. The development of electronic business in Georgia requires a modernized telecommunications infrastructure (especially in the regions) as well as solutions to institutional and socio-economic problems. Issues concerning internet availability and computer skills are also important.

Keywords: electronic commerce, internet market, electronic business, information technology, information society, electronic systems

Procedia PDF Downloads 384
551 Design and Creation of a BCI Videogame for Training and Measurement of Sustained Attention in Children with ADHD

Authors: John E. Muñoz, Jose F. Lopez, David S. Lopez

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is a disorder that affects 1 out of 5 Colombian children, making it a real public health problem in the country. Conventional treatments such as medication and neuropsychological therapy have proved insufficient to decrease the high incidence of ADHD in the main Colombian cities. This work presents the design and development of a videogame that uses a brain-computer interface not only as an input device but also as a tool to monitor neurophysiological signals. The videogame, named “The Harvest Challenge”, is set in the cultural context of a Colombian coffee grower, where the player can use his/her avatar in three mini-games created to reinforce four fundamental abilities: i) the ability to wait, ii) the ability to plan, iii) the ability to follow instructions, and iv) the ability to achieve objectives. The details of this collaborative process of designing the multimedia tool according to the exact clinical needs, and the description of the interaction proposals, are presented through the mental states of attention and relaxation. The final videogame is presented as a tool for sustained attention training in children with ADHD, using as its action mechanism the neuromodulation of beta and theta waves through an electrode located in the central part of the frontal lobe of the brain. The electroencephalographic signal is processed automatically inside the videogame, making it possible to generate a report of the evolution of the theta/beta ratio, a biological marker which has been demonstrated to be a sufficient measure to discriminate between children with and without the deficit.
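
The theta/beta ratio computed inside the game is the ratio of EEG band power in the theta band (about 4-8 Hz) to that in the beta band (about 13-30 Hz) at the frontal electrode. The sketch below estimates it from a raw signal with Welch's method; the synthetic signal and sampling rate are placeholders, not data from the study.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256                             # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic EEG-like signal: a theta and a beta component plus noise.
eeg = (40 * np.sin(2 * np.pi * 6 * t)
       + 15 * np.sin(2 * np.pi * 20 * t)
       + 10 * np.random.randn(t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # power spectral density

def band_power(low, high):
    mask = (freqs >= low) & (freqs <= high)
    return trapezoid(psd[mask], freqs[mask])     # integrate the PSD over the band

theta_beta_ratio = band_power(4, 8) / band_power(13, 30)
print(round(theta_beta_ratio, 2))
```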

Keywords: BCI, neuromodulation, ADHD, videogame, neurofeedback, theta/beta ratio

Procedia PDF Downloads 371
550 An Ensemble System of Classifiers for Computer-Aided Volcano Monitoring

Authors: Flavio Cannavo

Abstract:

Continuous evaluation of the status of potentially hazardous volcanoes plays a key role for civil protection purposes. The importance of monitoring volcanic activity, especially for energetic paroxysms that usually come with tephra emissions, is crucial not only because of the exposure of the local population but also for airline traffic. Presently, real-time surveillance of most volcanoes worldwide is essentially delegated to one or more human experts in volcanology, who interpret data coming from different kinds of monitoring networks. Unfortunately, the high nonlinearity of the complex and coupled volcanic dynamics leads to a large variety of different volcanic behaviors. Moreover, continuously measured parameters (e.g. seismic, deformation, infrasonic and geochemical signals) are often not able to fully explain the ongoing phenomenon, thus making fast volcano state assessment a very puzzling task for the personnel on duty in the control rooms. With the aim of aiding the personnel on duty in volcano surveillance, here we introduce a system based on an ensemble of data-driven classifiers to infer automatically the ongoing volcano status from all the available kinds of measurements. The system consists of a heterogeneous set of independent classifiers, each one built with its own data and algorithm. Each classifier gives an output about the volcanic status. The ensemble technique allows the single classifier outputs to be weighted and combined into a single status estimate that maximizes performance. We tested the model on the Mt. Etna (Italy) case study by considering a long record of multivariate data from 2011 to 2015 and cross-validated it. Results indicate that the proposed model is effective and of great value for decision-making purposes.
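
The weighted combination of heterogeneous classifiers described above can be sketched with scikit-learn's VotingClassifier, where each member contributes a vote that is weighted before the final status is chosen. The snippet below is a generic illustration on synthetic data, not the monitoring system itself; the classifier choices and weights are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for multivariate monitoring data labelled with volcano status.
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# Heterogeneous, independently built classifiers combined with weighted soft voting.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB())],
    voting="soft",
    weights=[2, 1, 1],          # weights would be tuned to maximize validation performance
)
print(cross_val_score(ensemble, X, y, cv=5).mean())
```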

Keywords: Bayesian networks, expert system, mount Etna, volcano monitoring

Procedia PDF Downloads 246
549 Virtual Reality and Avatars in Education

Authors: Michael Brazley

Abstract:

Virtual Reality (VR) and 3D videos are the most current generation of learning technology today. They are being used in professional offices and schools for marketing and education. Technology in the field of design has progressed from two-dimensional drawings to 3D models, using computers and sophisticated software. Virtual Reality is being used as a collaborative means to allow designers and others to meet and communicate inside models or VR platforms using avatars. This research proposes to teach students from different backgrounds how to take a digital model into a 3D video, then into VR, and finally into VR with multiple avatars communicating with each other in real time. The next step would be to develop the model so that people from three or more different locations can meet as avatars in real time, in the same model, and talk to each other. This research is longitudinal, studying the use of 3D videos in graduate design and Virtual Reality in XR (Extended Reality) courses. The research methodology is a combination of quantitative and qualitative methods. The qualitative methods begin with the literature review and case studies. The quantitative methods come by way of students' 3D videos, surveys, and Extended Reality (XR) coursework. The end product is to develop a VR platform with multiple avatars able to communicate in real time. This research is important because it will allow multiple users to remotely enter a model or VR platform from any location in the world and communicate effectively in real time. The research will lead to improved learning and training using Virtual Reality and avatars, and it is generalizable because most colleges and universities, and many citizens, own VR equipment and computer labs. This research did produce a VR platform with multiple avatars having the ability to move and speak to each other in real time. Major implications of the research include, but are not limited to, improved learning, teaching, communication, marketing, designing, and planning. Both hardware and software played a major role in the project's success.

Keywords: virtual reality, avatars, education, XR

Procedia PDF Downloads 98
548 Computer-Aided Depression Screening: A Literature Review on Optimal Methodologies for Mental Health Screening

Authors: Michelle Nighswander

Abstract:

Suicide can be a tragic response to mental illness. It is difficult for people to disclose or discuss suicidal impulses. The stigma surrounding mental health can create a reluctance to seek help for mental illness. Patients may feel pressure to exhibit a socially desirable demeanor rather than reveal these issues, especially if they sense their healthcare provider is pressed for time or if they do not have an extensive history with their provider. Overcoming these barriers can be challenging. Although there are several validated depression and suicide risk instruments, the varying processes used to administer these tools may impact the truthfulness of the responses. A literature review was conducted to find evidence of the impact of the environment on the accuracy of depression screening. Many investigations do not describe the environment, and fewer studies use a comparison design. However, three studies demonstrated that computerized self-reporting may be more likely to elicit truthful and accurate responses due to the increased privacy of responding compared to a face-to-face interview. These studies showed patients reported positive reactions to computerized screening for other stigmatizing health conditions, such as alcohol use during pregnancy. Computerized self-screening for depression offers the possibility of more privacy and patient reflection, which could then send a targeted message of risk to the healthcare provider. This could potentially increase accuracy while also increasing time efficiency for the clinic. Considering the persistent effects of mental health stigma, how these screening questions are posed can impact patients' responses. This literature review analyzes trends in depression screening methodologies, the impact of setting on the results, and how this may assist in overcoming one barrier caused by stigma.

Keywords: computerized self-report, depression, mental health stigma, suicide risk

Procedia PDF Downloads 131
547 Developing an Edutainment Game for Children with ADHD Based on SAwD and VCIA Model

Authors: Bruno Gontijo Batista

Abstract:

This paper analyzes how Socially Aware Design (SAwD) and the Value-oriented and Culturally Informed Approach (VCIA) design model can be used to develop an edutainment game for children with Attention Deficit Hyperactivity Disorder (ADHD). The SAwD approach seeks a design that considers new dimensions in human-computer interaction, such as culture, aesthetics, and the emotional and social aspects of the user's everyday experience. From this perspective, the game development was VCIA model-based, including the users in the design process through participatory methodologies and considering their behavioral patterns, culture, and values. This is because values, beliefs, and behavioral patterns influence how technology is understood and used and the way it impacts people's lives. The model can be applied at different stages of design, from clarifying the problem and organizing the requirements to evaluating the prototype and the final solution. Thus, this paper aims to understand how this model can be used in the development of an edutainment game for children with ADHD. In the area of education and learning, children with ADHD have difficulties both in behavior and in school performance, as they are easily distracted, which is reflected both in classes and on tests. Therefore, they should perform tasks that are exciting or interesting for them; once the pleasure center in the brain is activated, it reinforces the attention center, leaving the child more relaxed and focused. In this context, serious games have been used as part of the treatment of ADHD in children, aiming to improve focus and attention, stimulate concentration, and serve as a tool for improving learning in areas such as math and reading, combining education and entertainment (edutainment). As a result of the research, an edutainment game prototype for a mobile platform, aimed at children between 8 and 12 years old, was developed in a participatory way by applying the VCIA model.

Keywords: ADHD, edutainment, SAwD, VCIA

Procedia PDF Downloads 190
546 Strategies and Approaches for Curriculum Development and Training of Faculty in Cybersecurity Education

Authors: Lucy Tsado

Abstract:

As cybercrime and cyberattacks continue to increase, the need to respond will follow suit. When cybercrimes occur, the duty to respond sometimes falls on law enforcement. However, criminal justice students are not taught concepts in cybersecurity and digital forensics. There is, therefore, an urgent need for many more institutions to begin teaching cybersecurity and related courses to social science students, especially criminal justice students. However, many faculty in universities, colleges, and high schools are not equipped to teach these courses or do not have the knowledge and resources to teach important concepts in cybersecurity or digital forensics to criminal justice students. This research intends to develop curricula and training programs to equip faculty with the skills to meet this need. According to experts, there is a current call to involve non-technical fields in filling the cybersecurity skills gap. There is a general belief among non-technical fields that cybersecurity education is only attainable within computer science and technologically oriented fields. As seen from current calls, this is not entirely the case. Transitioning into the field is possible through curriculum development, training, certifications, internships and apprenticeships, and competitions. There is a need to identify how a cybersecurity ecosystem can be created at a university to encourage and start programs that will lead to an interest in cybersecurity education as well as attract potential students. A short-term strategy can address this problem through curricula development, while a long-term strategy will address training faculty to teach cybersecurity and digital forensics. Therefore, this research project addresses the overall problem in two parts: curricula development for the criminal justice discipline, and training of criminal justice faculty to teach the important concepts of cybersecurity and digital forensics.

Keywords: cybersecurity education, criminal justice, curricula development, nontechnical cybersecurity, cybersecurity, digital forensics

Procedia PDF Downloads 105
545 An Experimental Approach to the Influence of Tipping Points and Scientific Uncertainties in the Success of International Fisheries Management

Authors: Jules Selles

Abstract:

The Atlantic and Mediterranean bluefin tuna fishery has been considered the archetype of an overfished and mismanaged fishery. This crisis has demonstrated the role of public awareness and the importance of the interactions between science and management regarding scientific uncertainties. This work aims at investigating the policy-making process associated with a regional fisheries management organization. We propose a contextualized, computer-based experimental approach in order to explore the effects of key factors on the cooperation process in a complex straddling-stock management setting. Namely, we analyze the effects of the introduction of a socio-economic tipping point and of the uncertainty surrounding the estimation of the resource level. Our approach is based on a Gordon-Schaefer bio-economic model which explicitly represents the decision-making process. Each participant plays the role of a stakeholder of ICCAT, represents a coalition of fishing nations involved in the fishery, and unilaterally decides on a harvest policy for the coming year. The context of the experiment induces the incentives for exploitation and collaboration needed to achieve common sustainable harvest plans at the scale of the Atlantic bluefin tuna stock. Our rigorous framework allows us to test how stakeholders who plan the exploitation of a fish stock (a common-pool resource) respond to two kinds of effects: i) the inclusion of a drastic shift in the management constraints (beyond a socio-economic tipping point) and ii) increasing uncertainty in the scientific estimation of the resource level.
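
The Gordon-Schaefer bio-economic model underlying the experiment combines logistic stock growth with a harvest proportional to fishing effort, and profit as revenue minus effort cost. The sketch below simulates one such stock trajectory with illustrative parameter values; these numbers are placeholders, not the calibration used for Atlantic bluefin tuna.

```python
# Minimal Gordon-Schaefer simulation with illustrative (placeholder) parameters.
r, K = 0.3, 1.0          # intrinsic growth rate, carrying capacity (normalized)
q = 0.5                  # catchability coefficient
price, cost = 10.0, 2.0  # price per unit catch, cost per unit effort

def step(biomass: float, effort: float, dt: float = 1.0):
    """One time step: logistic growth minus harvest, and the resulting profit."""
    harvest = q * effort * biomass
    growth = r * biomass * (1 - biomass / K)
    new_biomass = max(biomass + dt * (growth - harvest), 0.0)
    profit = price * harvest - cost * effort
    return new_biomass, harvest, profit

biomass = 0.4            # depleted initial stock (fraction of carrying capacity)
for year in range(10):
    biomass, harvest, profit = step(biomass, effort=0.2)
    print(f"year {year + 1}: biomass={biomass:.3f}, "
          f"harvest={harvest:.3f}, profit={profit:.3f}")
```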

Keywords: economic experiment, fisheries management, game theory, policy making, Atlantic Bluefin tuna

Procedia PDF Downloads 253
544 Computing Machinery and Legal Intelligence: Towards a Reflexive Model for Computer Automated Decision Support in Public Administration

Authors: Jacob Livingston Slosser, Naja Holten Moller, Thomas Troels Hildebrandt, Henrik Palmer Olsen

Abstract:

In this paper, we propose a model for human-AI interaction in public administration that involves legal decision-making. Inspired by Alan Turing's test for machine intelligence, we propose a way of institutionalizing a continuous working relationship between man and machine that aims at ensuring both good legal quality and higher efficiency in decision-making processes in public administration. We also suggest that our model enhances the legitimacy of using AI in public legal decision-making. We suggest that caseloads in public administration could be divided between a manual and an automated decision track. The automated decision track will be an algorithmic recommender system trained on former cases. To avoid unwanted feedback loops and biases, part of the caseload will be dealt with by both a human case worker and the automated recommender system. In those cases, an experienced human case worker will have the role of an evaluator, choosing between the two decisions. This model will ensure that the algorithmic recommender system does not compromise the quality of legal decision-making in the institution. It also enhances the legitimacy of using algorithmic decision support because it provides justification for its use when the algorithmic recommendations are preferred by experienced case workers over human decisions. The paper outlines in some detail the process through which such a model could be implemented. It also addresses the important issues that legal decision-making is subject to legislative and judicial changes and that legal interpretation is context sensitive. Both of these issues require continuous supervision and adjustment of algorithmic recommender systems when they are used for legal decision-making purposes.

Keywords: administrative law, algorithmic decision-making, decision support, public law

Procedia PDF Downloads 217
543 Teachers' Disability Disclosure: A Multiple Perspective

Authors: N. Tal-Alon, O. Shapira-Lishchinsky

Abstract:

Disability disclosure is one of the most complicated dilemmas that people with invisible disabilities face. Only a few research studies have focused on the difficulties and dilemmas of teachers who have different disabilities. In addition, there are currently no research studies focusing specifically on the aspects of disability disclosure that are unique to teachers. This research has, therefore, broadened the knowledge base and understanding of the dilemma of disability disclosure among teachers with invisible physical disabilities. In addition, it has shed light on the ways this issue is perceived by different groups: the perspective of school principals, the perspective of colleagues, and the perspective of teachers with physical disabilities themselves. The study sample included 12 teachers with invisible physical disabilities, 10 school principals who employ at least one teacher with an invisible physical disability, and 10 professional colleagues of at least one teacher with an invisible physical disability. The research was conducted using a qualitative approach through the Narralizer computer program, based on a series of in-depth interviews. The data analysis was carried out by grouping major points of interest into specific categories and sub-categories. The findings suggest that teachers with disabilities struggle with the dilemma of whether or not to reveal their disability to the school staff and to their students. Considerable differences were found between the issues that faculty members considered regarding this dilemma and the ones that teachers with disabilities considered. While the principals and professional colleagues focused solely on their own interests, the teachers with a disability placed more emphasis on the ways they might have a positive influence on their students, as well as on their own individual interests. In addition, school principals on the whole tended to view negatively the option of disclosing the disability to the students and were often critical of teachers who concealed their disability from the school staff. The importance of this research lies in its potential to influence policy decisions that can be implemented by the Ministry of Education regarding the support system for teachers with invisible physical disabilities.

Keywords: education, employment, invisible disabilities, teachers

Procedia PDF Downloads 102
542 A Comprehensive Theory of Communication with Biological and Non-Biological Intelligence for a 21st Century Curriculum

Authors: Thomas Schalow

Abstract:

It is commonly recognized that our present curriculum is not preparing students to function in the 21st century. This is particularly true in regard to communication needs across cultures, both human and non-human. In this paper, a comprehensive theory of communication, based on communication with non-human cultures and intelligences, is presented to meet the following three imminent contingencies: communicating with sentient biological intelligences, communicating with extraterrestrial intelligences, and communicating with artificial super-intelligences. The paper begins with the argument that we need to become much more serious about communicating with the non-human, intelligent life forms that already exist around us here on Earth. We need to broaden our definition of communication and reach out to other sentient life forms in order to provide humanity with a better perspective of its place within our ecosystem. The paper next examines the science and philosophy behind CETI (communication with extraterrestrial intelligences) and how it could prove useful even in the absence of contact with alien life. However, CETI's assumptions and methodology need to be revised in accordance with the communication theory proposed in this paper if we are truly serious about finding and communicating with life beyond Earth. The final theme explored in this paper is communication with non-biological super-intelligences. Humanity has never been truly compelled to converse with other species, and our failure to seriously consider such intercourse has left us largely unprepared to deal with communication in a future that will be mediated and controlled by computer algorithms. Fortunately, our experience dealing with other cultures can provide us with a framework for this communication. The basic concepts behind intercultural communication can be applied to the three types of communication envisioned in this paper if we are willing to recognize that we are in fact dealing with other cultures when we interact with other species, alien life, and artificial super-intelligence. The ideas considered in this paper will require a new mindset for humanity, but a new disposition will yield substantial gains. A curriculum that is truly ready for the 21st century needs to be aligned with this new theory of communication.

Keywords: artificial intelligence, CETI, communication, language

Procedia PDF Downloads 364
541 Crude Glycerol Affects Canine Spermatozoa Motility: Computer-Assisted Semen Analysis in Vitro

Authors: P. Massanyi, L. Kichi, T. Slanina, E. Kolesar, J. Danko, N. Lukac, E. Tvrda, R. Stawarz, A. Kolesarova

Abstract:

The target of this study was to analyze the impact of crude glycerol on canine spermatozoa motility, morphology, viability, and membrane integrity. The experiments were realized in vitro. Semen from 5 large dog breeds was used in the study. The dogs were typical representatives of large breeds, came from healthy rearing, were regularly vaccinated, and were integrated into further breeding. Semen collections were realized at the owners' premises and in the veterinary clinic. Subsequently, the experiments were realized at the Department of Animal Physiology of the SUA in Nitra. Spermatozoa motility was evaluated using a CASA analyzer (SpermVisionTM, Minitub, Germany) at temperatures of 5 and 37°C for 5 hours, and 13 motility parameters were evaluated. In general, crude glycerol had a negative effect on spermatozoa motility. Morphological analysis was realized using Hancock staining, and the preparations were evaluated at 1000x magnification using classification tables of morphologically changed spermatozoa. The data clearly showed the highest number of morphologically changed spermatozoa in the experimental groups (twisted tails, tail torsion and tail coiling). For acrosome alterations, swollen acrosomes, removed acrosomes and acrosomes with undulated membranes were detected. In this study, the effect of crude glycerol on spermatozoa membrane integrity was also analyzed; the highest crude glycerol concentration significantly affected spermatozoa integrity. The results of this study show that crude glycerol has an effect on spermatozoa motility, viability, and membrane integrity. The detected changes are related to crude glycerol concentration, temperature, as well as time of incubation.

Keywords: dog, semen, spermatozoa, acrosome, glycerol, CASA, viability

Procedia PDF Downloads 319