Search results for: intelligent computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1678

238 Bismuth Telluride Topological Insulator: Physical Vapor Transport vs Molecular Beam Epitaxy

Authors: Omar Concepcion, Osvaldo De Melo, Arturo Escobosa

Abstract:

Topological insulator (TI) materials are insulating in the bulk and conducting at the surface. The unique electronic properties associated with these surface states make them strong candidates for exploring innovative quantum phenomena and for practical applications in quantum computing, spintronics and nanodevices. Many materials, including Bi₂Te₃, have been proposed as TIs and, in some cases, this has been demonstrated experimentally by angle-resolved photoemission spectroscopy (ARPES), scanning tunneling spectroscopy (STM) and/or magnetotransport measurements. A clean surface is necessary in order to make any of these measurements. Several techniques have been used to produce films and different kinds of nanostructures. Growth and characterization in situ is usually the best option, although cleaving the films can be an alternative way to obtain a suitable surface. In the present work, we report a comparison of Bi₂Te₃ grown by physical vapor transport (PVT) and molecular beam epitaxy (MBE). The samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM), X-ray photoelectron spectroscopy (XPS) and ARPES. The Bi₂Te₃ samples grown by PVT were cleaved in ultra-high vacuum in order to obtain a surface free of contaminants. In both cases, the XRD shows a c-axis orientation and the pole diagrams prove the epitaxial relationship between film and substrate. The ARPES images show the linear dispersion characteristic of the surface states of TI materials. The samples grown by PVT, a relatively simple and cost-effective technique, show the same high quality and TI properties as those grown by MBE.

Keywords: Bismuth telluride, molecular beam epitaxy, physical vapor transport, topological insulator

Procedia PDF Downloads 164
237 Microwave Single Photon Source Using Landau-Zener Transitions

Authors: Siddhi Khaire, Samarth Hawaldar, Baladitya Suri

Abstract:

As efforts towards quantum communication advance, the need for single photon sources becomes pressing. Due to the extremely low energy of a single microwave photon, efforts to build single photon sources and detectors in the microwave range are relatively recent. We plan to use a Cooper Pair Box (CPB) that has a ‘sweet spot’ where the two energy levels have minimal separation. Moreover, these qubits have fairly large anharmonicity, making them close to ideal two-level systems. If the external gate voltage of these qubits is varied rapidly while passing through the sweet spot, the qubit can be excited almost deterministically due to the Landau-Zener effect. The rapid change of the gate control voltage through the sweet spot induces a non-adiabatic population transfer from the ground to the excited state. The qubit eventually decays into the emission line, emitting a single photon. The advantage of this setup is that the qubit can be excited without any coherent microwave excitation, thereby effectively increasing the usable source efficiency due to the absence of control pulse microwave photons. Since the probability of a Landau-Zener transition can be made close to unity by an appropriate design of parameters, this source behaves as an on-demand source of single microwave photons. The large anharmonicity of the CPB also ensures that only one excited state is involved in the transition and multiple photon output is highly improbable. Such a system has so far not been implemented and would find many applications in the areas of quantum optics, quantum computation as well as quantum communication.
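
A back-of-the-envelope illustration of the parameter dependence can be sketched as below: it evaluates one common form of the Landau-Zener formula, P ≈ exp(-2πΔ²/(ħv)), for several sweep rates. The gap and sweep rates used here are arbitrary illustrative numbers, not the device parameters of this work.

```python
import numpy as np

# Landau-Zener probability of a non-adiabatic (diabatic) transition when the
# gate charge is swept rapidly through the CPB sweet spot.  In one common
# convention P = exp(-2*pi*Delta^2 / (hbar*v)), where Delta is the coupling
# (half the minimum gap) and v = d(E1 - E2)/dt is the sweep rate of the
# diabatic energy difference.  The numbers below are arbitrary illustrative
# values, not the device parameters of this work.
hbar = 1.054571817e-34          # J*s
h = 6.62607015e-34              # J*s
Delta = h * 100e6               # coupling of ~100 MHz expressed in joules

for sweep_ghz_per_ns in (0.1, 1.0, 10.0, 100.0):
    v = h * sweep_ghz_per_ns * 1e9 / 1e-9   # d(E1-E2)/dt in J/s
    p_excite = np.exp(-2 * np.pi * Delta**2 / (hbar * v))
    print(f"sweep {sweep_ghz_per_ns:6.1f} GHz/ns -> excitation probability {p_excite:.3f}")
```

For fast sweeps the exponent goes to zero and the excitation probability approaches unity, which is the near-deterministic regime the abstract refers to.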

Keywords: quantum computing, quantum communication, quantum optics, superconducting qubits, flux qubit, charge qubit, microwave single photon source, quantum information processing

Procedia PDF Downloads 65
236 Roof and Road Network Detection through Object Oriented SVM Approach Using Low Density LiDAR and Optical Imagery in Misamis Oriental, Philippines

Authors: Jigg L. Pelayo, Ricardo G. Villar, Einstine M. Opiso

Abstract:

The advances of aerial laser scanning in the Philippines have opened up entire fields of research in remote sensing and machine vision that aspire to provide accurate and timely information for the government and the public. Rapid mapping of polygonal road and roof boundaries is one of its applications, with uses in disaster risk reduction, mitigation and development. The study uses low density LiDAR data and high resolution aerial imagery in an object-oriented approach, subjecting the data to a machine learning algorithm in order to minimize the constraints of feature extraction. Since separating one class from another requires distinct regions of a multi-dimensional feature space, non-trivial computation for fitting distributions was implemented to formulate the learned ideal hyperplane. Customized hybrid features were generated and then used to improve the classifier findings. Supplemental algorithms for filtering and reshaping object features were developed in the rule set to enhance the final product. Several advantages in terms of simplicity, applicability, and process transferability are noticeable in the methodology. The algorithm was tested in different random locations of Misamis Oriental province in the Philippines, demonstrating robust performance with an overall accuracy greater than 89% and potential for semi-automation. The extracted results will become a vital requirement for decision makers, urban planners and even the commercial sector in various assessment processes.
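
A minimal sketch of the kind of SVM classification step described above is given below using scikit-learn; the per-object features (mean height, intensity, NDVI, area) and the synthetic training data are illustrative assumptions, not the study's actual feature set or rule set.

```python
# Minimal sketch of an object-based SVM classification step, assuming per-object
# features derived from LiDAR and optical imagery (feature names and data are
# illustrative, not the study's rule set).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical object features: [mean height (m), mean intensity, NDVI, area (m^2)]
roofs = rng.normal([6.0, 120.0, 0.15, 90.0], [2.0, 20.0, 0.05, 40.0], (200, 4))
roads = rng.normal([0.3, 80.0, 0.10, 400.0], [0.2, 15.0, 0.05, 150.0], (200, 4))
X = np.vstack([roofs, roads])
y = np.array([1] * 200 + [0] * 200)          # 1 = roof, 0 = road

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```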

Keywords: feature extraction, machine learning, OBIA, remote sensing

Procedia PDF Downloads 338
235 Computer-Aided Detection of Liver and Spleen from CT Scans Using the Watershed Algorithm

Authors: Belgherbi Aicha, Bessaid Abdelhafid

Abstract:

In recent years, a great deal of research work has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is the semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm proceeds in two parts. In the first, we seek to determine the region of interest by applying morphological operations to extract the liver and spleen. The second step consists of improving the quality of the image gradient. In this step, we propose a method for improving the image gradient to reduce the over-segmentation problem by applying spatial filters followed by morphological filters. Thereafter we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a method for semi-automatic segmentation of the liver and spleen based on the watershed algorithm, to improve the accuracy and the robustness of the liver and spleen segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented liver and spleen contours and the contours manually traced by radiological experts. Liver segmentation achieved a sensitivity of 96% and a specificity of 99%; spleen segmentation achieves similarly promising results, with a sensitivity of 95% and a specificity of 99%.
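
A greatly simplified, generic version of a marker-controlled watershed pipeline of this kind (smoothing, gradient computation, markers, flooding) is sketched below with scikit-image on a synthetic image; the filters, thresholds, and data are illustrative assumptions, not the authors' parameters.

```python
# Simplified sketch of a marker-controlled watershed segmentation: smooth the
# image, compute a gradient, place background/organ markers, then flood.
# Synthetic data and thresholds are illustrative, not the paper's parameters.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, sobel
from skimage.segmentation import watershed

# Synthetic "CT slice": two bright organs on a dark background.
img = np.zeros((256, 256))
yy, xx = np.mgrid[:256, :256]
img[(yy - 90) ** 2 + (xx - 80) ** 2 < 40 ** 2] = 1.0      # "liver"
img[(yy - 170) ** 2 + (xx - 180) ** 2 < 25 ** 2] = 0.8    # "spleen"
img += 0.15 * np.random.default_rng(0).normal(size=img.shape)

smoothed = gaussian(img, sigma=2)          # spatial filtering to limit over-segmentation
gradient = sobel(smoothed)                 # improved gradient image

markers = np.zeros_like(img, dtype=int)
markers[smoothed < 0.2] = 1                # background marker
markers[smoothed > 0.6] = 2                # organ marker

labels = watershed(gradient, markers)
organ_mask = labels == 2
print("segmented organ pixels:", int(organ_mask.sum()))
print("connected organ regions:", ndi.label(organ_mask)[0].max())
```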

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 299
234 C-eXpress: A Web-Based Analysis Platform for Comparative Functional Genomics and Proteomics in Human Cancer Cell Line, NCI-60 as an Example

Authors: Chi-Ching Lee, Po-Jung Huang, Kuo-Yang Huang, Petrus Tang

Abstract:

Background: Recent advances in high-throughput research technologies such as next-generation sequencing and multi-dimensional liquid chromatography make it possible to dissect the complete transcriptome and proteome in a single run for the first time. However, it is almost impossible for many laboratories to handle and analyze these “big” data without the support of a bioinformatics team. We aimed to provide a web-based analysis platform for users with only limited knowledge of bio-computing to study functional genomics and proteomics. Method: We use NCI-60 as an example dataset to demonstrate the power of the web-based analysis platform and data delivery system: C-eXpress takes a simple text file that contains the standard NCBI gene or protein ID and expression levels (RPKM or fold change) as the input file to generate a distribution map of gene/protein expression levels in a heatmap diagram organized by color gradients. The diagram is hyperlinked to a dynamic HTML table that allows the users to filter the datasets based on various gene features. A dynamic summary chart is generated automatically after each filtering process. Results: We implemented an integrated database that contains pre-defined annotations such as gene/protein properties (ID, name, length, MW, pI); pathways based on KEGG and GO biological process; subcellular localization based on GO cellular component; and functional classification based on GO molecular function, kinase, peptidase and transporter. Multiple ways of sorting columns and rows are also provided for comparative analysis and visualization of multiple samples.
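
The heatmap-generation step can be sketched in a few lines; the gene IDs, sample names, and colour map below are placeholders, and this is not the C-eXpress code or the NCI-60 data.

```python
# Sketch of the heatmap step: read a tab-separated table of IDs vs. expression
# levels and render it as a color-gradient heatmap.  IDs, sample names and the
# colour map are placeholders, not the NCI-60 data or C-eXpress internals.
import io
import pandas as pd
import matplotlib.pyplot as plt

tsv = io.StringIO(
    "gene_id\tMCF7\tHCT116\tA549\n"
    "NM_000546\t8.1\t2.3\t5.6\n"
    "NM_005228\t1.2\t9.4\t3.3\n"
    "NM_004333\t4.7\t4.9\t7.8\n"
)
table = pd.read_csv(tsv, sep="\t", index_col="gene_id")

fig, ax = plt.subplots()
im = ax.imshow(table.values, cmap="RdYlGn_r", aspect="auto")
ax.set_xticks(range(table.shape[1]))
ax.set_xticklabels(table.columns)
ax.set_yticks(range(table.shape[0]))
ax.set_yticklabels(table.index)
fig.colorbar(im, ax=ax, label="expression (RPKM or fold change)")
fig.savefig("expression_heatmap.png", dpi=150)
```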

Keywords: cancer, visualization, database, functional annotation

Procedia PDF Downloads 591
233 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
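
The benefit of computing convolutions in the frequency domain can be illustrated independently of the full DPM pipeline; the sketch below simply times direct versus FFT-based 2-D convolution on a random feature map (array sizes are arbitrary and this is not the DPM implementation itself).

```python
# Illustration of the frequency-domain convolution trick: for large feature
# maps and filters, FFT-based convolution is much faster than direct
# convolution.  Sizes are arbitrary placeholders.
import time
import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(0)
feature_map = rng.normal(size=(400, 400))   # e.g. one feature channel
part_filter = rng.normal(size=(31, 31))     # e.g. one part filter

t0 = time.perf_counter()
direct = convolve2d(feature_map, part_filter, mode="valid")
t1 = time.perf_counter()
viafft = fftconvolve(feature_map, part_filter, mode="valid")
t2 = time.perf_counter()

print(f"direct: {t1 - t0:.3f} s, fft: {t2 - t1:.3f} s")
print("max abs difference:", np.max(np.abs(direct - viafft)))
```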

Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time

Procedia PDF Downloads 256
232 ChaQra: A Cellular Unit of the Indian Quantum Network

Authors: Shashank Gupta, Iteash Agarwal, Vijayalaxmi Mogiligidda, Rajesh Kumar Krishnan, Sruthi Chennuri, Deepika Aggarwal, Anwesha Hoodati, Sheroy Cooper, Ranjan, Mohammad Bilal Sheik, Bhavya K. M., Manasa Hegde, M. Naveen Krishna, Amit Kumar Chauhan, Mallikarjun Korrapati, Sumit Singh, J. B. Singh, Sunil Sud, Sunil Gupta, Sidhartha Pant, Sankar, Neha Agrawal, Ashish Ranjan, Piyush Mohapatra, Roopak T., Arsh Ahmad, Nanjunda M., Dilip Singh

Abstract:

Major research interests in quantum key distribution (QKD) are primarily focussed on increasing (1) point-to-point transmission distance (1,000 km), (2) secure key rate (Mbps), and (3) security of the quantum layer (device-independence). It is great to push the boundaries on these fronts, but these isolated approaches are neither scalable nor cost-effective due to the requirements of specialised hardware and different infrastructure. Current and future QKD networks require addressing different sets of challenges apart from distance, key rate, and quantum security. In this regard, we present ChaQra, a sub-quantum network with the following core features: 1) crypto agility (integration into already deployed telecommunication fibres), 2) software defined networking (the SDN paradigm for routing between different nodes), 3) reliability (addressing denial of service with hybrid quantum-safe cryptography), 4) upgradability (module upgrades based on scientific and technological advancements), and 5) beyond QKD (using the QKD network for distributed computing, multi-party computation, etc.). Our results demonstrate a clear path to create and accelerate a quantum-secure Indian subcontinent under the national quantum mission.

Keywords: quantum network, quantum key distribution, quantum security, quantum information

Procedia PDF Downloads 21
231 An Approach to Building a Recommendation Engine for Travel Applications Using Genetic Algorithms and Neural Networks

Authors: Adrian Ionita, Ana-Maria Ghimes

Abstract:

The lack of features and design, and the failure to promote an integrated booking application, are some of the reasons why most online travel platforms only offer automation of old booking processes, being limited to the integration of a smaller number of services without addressing the user experience. This paper presents a practical study on how to improve travel applications by creating user profiles through data mining based on neural networks and genetic algorithms. Choices made by users and their ‘friends’ in the ‘social’ network context can be considered input data for a recommendation engine. The purpose of using these algorithms and this design is to improve the user experience and to deliver more features to the users. The paper aims to highlight a broader range of improvements that could be applied to travel applications in terms of design and service integration, while the main scientific approach remains the technical implementation of the neural network solution. The motivation for the technologies used is also related to the initiative of some online booking providers that have made public the fact that they use ‘neural network’ related designs. These companies use similar Big Data technologies to provide recommendations for hotels, restaurants, and cinemas with a neural-network-based recommendation engine that builds a user ‘DNA profile’. This implementation of the ‘profile’, a collection of neural networks trained on previous user choices, can improve the usability and design of any type of application.

Keywords: artificial intelligence, big data, cloud computing, DNA profile, genetic algorithms, machine learning, neural networks, optimization, recommendation system, user profiling

Procedia PDF Downloads 142
230 Computing Machinery and Legal Intelligence: Towards a Reflexive Model for Computer Automated Decision Support in Public Administration

Authors: Jacob Livingston Slosser, Naja Holten Moller, Thomas Troels Hildebrandt, Henrik Palmer Olsen

Abstract:

In this paper, we propose a model for human-AI interaction in public administration that involves legal decision-making. Inspired by Alan Turing’s test for machine intelligence, we propose a way of institutionalizing a continuous working relationship between man and machine that aims at ensuring both good legal quality and higher efficiency in decision-making processes in public administration. We also suggest that our model enhances the legitimacy of using AI in public legal decision-making. We suggest that case loads in public administration could be divided between a manual and an automated decision track. The automated decision track will be an algorithmic recommender system trained on former cases. To avoid unwanted feedback loops and biases, part of the case load will be dealt with by both a human case worker and the automated recommender system. In those cases, an experienced human case worker will have the role of an evaluator, choosing between the two decisions. This model will ensure that the algorithmic recommender system is not compromising the quality of the legal decision-making in the institution. It also enhances the legitimacy of using algorithmic decision support because it provides justification for its use by being seen as superior to human decisions when the algorithmic recommendations are preferred by experienced case workers. The paper outlines in some detail the process through which such a model could be implemented. It also addresses the important issue that legal decision-making is subject to legislative and judicial changes and that legal interpretation is context sensitive. Both of these issues require continuous supervision of and adjustments to algorithmic recommender systems when used for legal decision-making purposes.

Keywords: administrative law, algorithmic decision-making, decision support, public law

Procedia PDF Downloads 189
229 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations

Authors: Kuniyoshi Abe

Abstract:

Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l) have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been the method most often used for efficiently solving such linear equations, but its convergence behavior can show a long stagnation phase. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed. It may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction. The resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed between the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.
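
For readers who want to reproduce the flavour of such experiments, the sketch below solves a sparse system with the standard Bi-CGSTAB from SciPy and counts iterations via a callback; the 2-D Laplacian test matrix is illustrative, not one of the paper's benchmark problems, and the parallel variants are not reproduced here.

```python
# Sketch: solve Ax = b with the standard Bi-CGSTAB and monitor convergence.
# The 2-D Laplacian test matrix is a generic placeholder, not the paper's
# benchmark set, and no communication-avoiding variant is implemented.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

n = 100                                    # grid points per dimension
lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), lap1d) + sp.kron(lap1d, sp.identity(n))).tocsr()
b = np.ones(A.shape[0])

residuals = []
def monitor(xk):                           # called once per iteration
    residuals.append(np.linalg.norm(b - A @ xk))

x, info = bicgstab(A, b, maxiter=5000, callback=monitor)
print("converged" if info == 0 else f"info = {info}",
      "after", len(residuals), "iterations")
print("final true residual:", np.linalg.norm(b - A @ x))
```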

Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant

Procedia PDF Downloads 141
228 Reinforcement-Learning Based Handover Optimization for Cellular Unmanned Aerial Vehicles Connectivity

Authors: Mahmoud Almasri, Xavier Marjou, Fanny Parzysz

Abstract:

The demand for services provided by Unmanned Aerial Vehicles (UAVs) is increasing pervasively across several sectors, including public safety, economic, and delivery services. As the number of applications using UAVs grows rapidly, more and more powerful, quality-of-service aware, and power-efficient computing units are necessary. Recently, cellular technology has drawn more attention as a form of connectivity that can ensure reliable and flexible communications services for UAVs. In cellular technology, flying at high speed and altitude is subject to several key challenges, such as frequent handovers (HOs), high interference levels, connectivity coverage holes, etc. Additional HOs may lead to “ping-pong” between the UAVs and the serving cells, resulting in a decrease in quality of service and an increase in energy consumption. In order to optimize the number of HOs, we develop in this paper a Q-learning-based algorithm. While existing works focus on adjusting the number of HOs in a static network topology, we take into account the impact of cell deployment for three different simulation scenarios (rural, semi-rural and urban areas). We also consider the impact of the decision distance at which the drone has the choice to make a switching decision affecting the number of HOs. Our results show that a Q-learning-based algorithm allows us to significantly reduce the average number of HOs compared to a baseline case where the drone always selects the cell with the highest received signal. Moreover, we also identify which hyper-parameters have the largest impact on the number of HOs in the three tested environments, i.e. rural, semi-rural, and urban.
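
A toy version of tabular Q-learning applied to a handover decision is sketched below; the state/action encoding, reward, signal model, and hyper-parameters are illustrative assumptions, not the simulator or settings used in the study.

```python
# Toy tabular Q-learning for handover decisions: the state is the serving cell,
# actions are "stay" or "switch to the strongest neighbour", and the reward
# penalizes each handover.  Signal model, reward, and hyper-parameters are
# illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_steps = 5, 2000
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = np.zeros((n_cells, 2))                  # actions: 0 = stay, 1 = switch

serving, handovers = 0, 0
for t in range(n_steps):
    rsrp = rng.normal(-90, 6, n_cells)      # fake received powers along the path
    best = int(np.argmax(rsrp))
    action = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[serving]))
    if action == 1 and best != serving:
        reward = rsrp[best] / 100.0 - 1.0   # gain in signal minus a handover cost
        next_cell = best
        handovers += 1
    else:
        reward = rsrp[serving] / 100.0
        next_cell = serving
    Q[serving, action] += alpha * (reward + gamma * Q[next_cell].max() - Q[serving, action])
    serving = next_cell

print("handovers performed:", handovers)
print("learned Q-table:\n", np.round(Q, 3))
```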

Keywords: drones connectivity, reinforcement learning, handovers optimization, decision distance

Procedia PDF Downloads 78
227 Flipping the Script: Opportunities, Challenges, and Threats of a Digital Revolution in Higher Education

Authors: James P. Takona

Abstract:

In a world that is experiencing sharp digital transformations guided by digital technologies, the potential of technology to drive transformation and evolution in higher education is apparent. Higher education is facing a paradigm shift that exposes susceptibilities and threats to fully online programs in the face of post-Covid-19 trends of commodification. This historical moment is likely to be remembered as a critical turning point from analog to digital degree-focused learning modalities, where the default delivery mode became the pivot point of competition between higher education institutions. Fall 2020 marks a significant inflection point in higher education as students, educators, and government leaders scrutinize higher education's price and value propositions through the new lens of traditional lecture halls versus multiple digitized delivery modes. Online education has since paved the way for a pedagogical shift in how teachers teach and students learn. The incremental growth of online education in the West can now be attributed to increasing patronage among students, faculty, and institution administrators. More often than not, college instructors assume facilitator roles in this learning mode, while students become active collaborators and no longer passive learners. This paper offers valuable discernments into the threats, challenges, and opportunities of a massive digital revolution in servicing degree programs. Digital instruction and learning demand instructional practices that revolve around collaborative work, engage students in learning activities, and promote active efforts to build strong connections between course activities and the expected learning pace for all students. Appropriate digital technologies demand that instructors and students have solid prior skills. Where digital technology is needed to support instruction and learning, intelligent tutoring offers great promise, while failures in implementing digital learning may not improve outcomes for specific student populations. Digital learning benefits students differently depending on their circumstances and background and those of the institution and/or program. Students have alternative options, access to the convenience of learning anytime and anywhere, and the possibility of acquiring and developing new skills leading to lifelong learning.

Keywords: digitized learning, digital education, collaborative work, higher education, online education, digitized delivery

Procedia PDF Downloads 63
226 Comparison of Blockchain Ecosystem for Identity Management

Authors: K. S. Suganya, R. Nedunchezhian

Abstract:

In recent years, blockchain technology has been found to be the most significant discovery of this digital era after the Internet and cloud computing. A blockchain is a simple, distributed public ledger that contains all the users' transaction details in blocks. The global copy of each block is shared among all users of the peer-to-peer network after validation by the blockchain miners. Once a block is validated and accepted, it cannot be altered by any user, making transactions trust-free. Blockchain also resolves the problem of double-spending by using traditional cryptographic methods. Since the advent of Bitcoin, blockchain has been the backbone for all its transactions. In recent years, it has also found its roots and uses in many fields like smart contracts, smart city management, healthcare, etc. Identity management against digital identity theft has become a major concern among financial and other organizations. To address digital identity theft, blockchain technology can be employed with existing identity management systems, which then maintain a distributed public ledger containing details of an individual's identity, such as digital birth certificates, citizenship number, bank details, voter details, and driving licence, in the form of blocks; once verified on the blockchain, these records become time-stamped, unforgeable and publicly visible to any legitimate user. The main challenge in using blockchain technology to prevent digital identity theft is ensuring the pseudo-anonymity and privacy of the users. This survey paper sets out to study blockchain concepts, consensus protocols, and various blockchain-based digital identity management systems together with their research scope. This paper also discusses the role of blockchain in COVID-19 pandemic management through self-sovereign identity and supply chain management.
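
The tamper-evidence property mentioned above (a validated block cannot be altered) follows from hash chaining; the minimal sketch below links identity records by their hashes and shows how modifying one block breaks the chain. It is a didactic illustration only, not an actual identity-management ledger.

```python
# Minimal hash-chained ledger: each block stores the hash of the previous one,
# so altering any block invalidates every later block.  Didactic sketch only,
# not a production identity-management system.
import hashlib
import json
import time

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, record):
    prev = chain[-1]
    chain.append({"index": prev["index"] + 1, "timestamp": time.time(),
                  "record": record, "prev_hash": block_hash(prev)})

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [{"index": 0, "timestamp": time.time(), "record": "genesis", "prev_hash": ""}]
add_block(chain, {"citizen_id": "X123", "credential": "driving licence"})
add_block(chain, {"citizen_id": "X123", "credential": "birth certificate"})
print("chain valid:", verify(chain))

chain[1]["record"]["credential"] = "forged passport"   # tamper with a block
print("chain valid after tampering:", verify(chain))
```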

Keywords: blockchain, consensus protocols, bitcoin, identity theft, digital identity management, pandemic, COVID-19, self-sovereign identity

Procedia PDF Downloads 101
225 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models

Authors: Azadeh Jafari, Robert G. Owens

Abstract:

In this study, a geometrical multiscale approach, which means coupling together the 2-D Navier-Stokes equations, constitutive equations and 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. In this study, we introduce a new velocity correction scheme to decouple the velocity computation from the pressure one. To evaluate the capability of our new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using coupling with the lumped parameter model. Comprehensive studies have been done on the sensitivity of the numerical scheme to the initial conditions, elasticity and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems, whose application goes significantly beyond the one addressed in this work.
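
As a self-contained illustration of what a 0-D lumped parameter model looks like on its own (before any coupling to the 2-D solver), the sketch below integrates a two-element Windkessel, C dP/dt = Q(t) - P/R, driven by a pulsatile inflow; the parameter values are generic placeholders, not those of the study.

```python
# Stand-alone 0-D lumped parameter (two-element Windkessel) model:
#   C dP/dt = Q(t) - P/R
# driven by a pulsatile inflow Q(t).  Parameter values are generic placeholders,
# and the coupling to the 2-D solver is not shown.
import numpy as np
from scipy.integrate import solve_ivp

R = 1.0    # peripheral resistance  [mmHg*s/mL]
C = 1.2    # arterial compliance    [mL/mmHg]
T = 0.8    # cardiac period         [s]

def inflow(t):
    # Simple half-sine ejection during the first 35% of each beat.
    phase = t % T
    return 300.0 * np.sin(np.pi * phase / (0.35 * T)) if phase < 0.35 * T else 0.0

def rhs(t, p):
    return [(inflow(t) - p[0] / R) / C]

sol = solve_ivp(rhs, (0.0, 10 * T), [80.0], max_step=1e-3)
last_beat = sol.t > 9 * T
print(f"systolic ~ {sol.y[0][last_beat].max():.1f} mmHg, "
      f"diastolic ~ {sol.y[0][last_beat].min():.1f} mmHg")
```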

Keywords: geometrical multiscale models, haemorheology model, coupled 2-D navier-stokes 0-D lumped parameter modeling, computational fluid dynamics

Procedia PDF Downloads 339
224 Measurements for Risk Analysis and Detecting Hazards by Active Wearables

Authors: Werner Grommes

Abstract:

Intelligent wearables (illuminated vests, hand and foot bands, smart watches with a laser diode, Bluetooth smart glasses) flood the market today. They integrate complex electronics and are worn very close to the body. Optical measurements and limitation of the maximum light density are needed. Smart watches are equipped with a laser diode or monitor different body currents. Special glasses generate readable text information that is received via radio transmission. Small high-performance batteries (lithium-ion/polymer) supply the electronics. All these products have been tested and evaluated for risk. These products must, for example, meet the requirements for electromagnetic compatibility as well as the requirements for electromagnetic fields affecting humans or implant wearers. Extensive analyses and measurements were carried out for this purpose. Many users are not aware of these risks. The results of this study should serve as a suggestion to do better in the future or simply to point out these risks. Commercial LED warning vests, LED hand and foot bands, illuminated surfaces with inverters (high voltage), flashlights, smart watches, and Bluetooth smart glasses were checked for risks. The luminance, the electromagnetic emissions in the low-frequency as well as in the high-frequency range, audible noises, and irritating flashing frequencies were checked by measurements and analyzed. Rechargeable lithium-ion or lithium-polymer batteries can burn or explode under special conditions like overheating, overcharging, deep discharge or use outside the temperature specification. Some risk analysis therefore becomes necessary. The result of this study is that many smart wearables are worn very close to the body, and an extensive risk analysis becomes necessary. Wearers of active implants like a pacemaker or implantable cardiac defibrillator must be considered. If the wearable electronics include switching regulators or inverter circuits, active medical implants in the near field can be disturbed. A risk analysis is necessary.

Keywords: safety and hazards, electrical safety, EMC, EMF, active medical implants, optical radiation, illuminated warning vest, electric luminescent, hand and head lamps, LED, e-light, safety batteries, light density, optical glare effects

Procedia PDF Downloads 88
223 3D Dentofacial Surgery Full Planning Procedures

Authors: Oliveira M., Gonçalves L., Francisco I., Caramelo F., Vale F., Sanz D., Domingues M., Lopes M., Moreia D., Lopes T., Santos T., Cardoso H.

Abstract:

The ARTHUR project consists of a platform that allows maxillofacial surgeries to be performed virtually, offering, in a photorealistic way, the possibility for the patient to get an idea of the surgical changes before they are performed on their face. For this, the system brings together several image formats, DICOMs and OBJs, that, after loading, will generate the bone volume, soft tissues and hard tissues. The system also incorporates the patient's stereophotogrammetry, in addition to their data and clinical history. After loading and inserting data, the clinician can virtually perform the surgical operation and present the final result to the patient, generating a new facial surface that incorporates the changes made in the bone and tissues of the maxillary area. This tool acts in different situations that require facial reconstruction; however, this project focuses specifically on two types of use cases: congenital bone disfigurement and acquired disfigurement such as oral cancer with bone involvement. Being developed as a cloud-based solution with mobile support, the tool aims to reduce the patient's decision time window. Because current simulations are either not realistic or, if realistic, take time due to the need to build plaster models, patients' decisions rely on a long time window (1-2 months), because they do not identify themselves with the presented surgical outcome. On the other hand, such planning has so far been performed based on average estimated values of the positions of the maxilla and mandible. These were based on averages of the facial measurements of the population, without accounting for racial variability, so previous solutions were not adjusted to the real individual physiognomic needs.

Keywords: 3D computing, image processing, image registry, image reconstruction

Procedia PDF Downloads 175
222 Design and Development of On-Line, On-Site, In-Situ Induction Motor Performance Analyser

Authors: G. S. Ayyappan, Srinivas Kota, Jaffer R. C. Sheriff, C. Prakash Chandra Joshua

Abstract:

In the present scenario of energy crisis, energy conservation in electrical machines is very important in industry. In order to conserve energy, one needs to monitor the performance of an induction motor on-site and in-situ. The instruments available for this purpose are very few and very expensive. This paper deals with the design and development of an on-line, on-site, in-situ induction motor performance analyser. The system measures only a few electrical input parameters, such as input voltage, line current, power factor, frequency, powers, and motor shaft speed. These measured data are combined with nameplate details to compute the operating efficiency of the induction motor. The system employs the method of computing motor losses with the help of equivalent circuit parameters. The equivalent circuit parameters of the motor concerned are estimated using the developed algorithm at any load condition and stored in the system memory. The developed instrument is reliable, accurate, compact, rugged, and cost-effective. This portable instrument could be used as a handy tool to study the performance of both slip ring and cage induction motors. During the analysis, the data can be stored on an SD memory card and one can perform various analyses such as load vs. efficiency, torque vs. speed characteristics, etc. With the help of the developed instrument, one can operate the motor around its Best Operating Point (BOP). Continuous monitoring of the motor efficiency could lead to Life Cycle Assessment (LCA) of motors. LCA helps in taking decisions on motor replacement, retention or refurbishment.
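
The loss-and-efficiency computation from equivalent circuit parameters can be sketched as below for the standard per-phase circuit; the parameter values and slip are illustrative only and do not represent the instrument's on-line estimation algorithm.

```python
# Sketch: operating efficiency of an induction motor from the standard per-phase
# equivalent circuit (stator R1 + jX1, magnetizing branch Rc || jXm, rotor
# R2'/s + jX2').  Parameter values and slip are illustrative placeholders; the
# instrument estimates these parameters on-line from measured data.
V_phase = 230.0                 # per-phase supply voltage [V]
R1, X1 = 0.5, 1.2               # stator resistance / leakage reactance [ohm]
R2, X2 = 0.6, 1.3               # rotor values referred to the stator [ohm]
Rc, Xm = 400.0, 35.0            # core-loss resistance and magnetizing reactance
slip = 0.03

Z_rotor = R2 / slip + 1j * X2
Z_mag = 1 / (1 / Rc + 1 / (1j * Xm))
Z_total = R1 + 1j * X1 + 1 / (1 / Z_mag + 1 / Z_rotor)

I1 = V_phase / Z_total                         # stator current
E = V_phase - I1 * (R1 + 1j * X1)              # air-gap voltage
I2 = E / Z_rotor                               # rotor current

P_in = 3 * (V_phase * I1.conjugate()).real
P_airgap = 3 * abs(I2) ** 2 * R2 / slip
P_mech = (1 - slip) * P_airgap                 # friction/windage neglected here

print(f"input power  : {P_in:8.1f} W")
print(f"output power : {P_mech:8.1f} W")
print(f"efficiency   : {100 * P_mech / P_in:5.1f} %")
print(f"power factor : {P_in / (3 * V_phase * abs(I1)):5.3f}")
```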

Keywords: energy conservation, equivalent circuit parameters, induction motor efficiency, life cycle assessment, motor performance analysis

Procedia PDF Downloads 356
221 A Study on the Korean Connected Industrial Parks Smart Logistics IT Financial Enterprise Architecture

Authors: Ilgoun Kim, Jongpil Jeong

Abstract:

Recently, a connected industrial parks (CIPs) architecture has been proposed using new technologies such as RFID, cloud computing, CPS, Big Data, 5G, IIoT, VR-AR, and ventral AI algorithms based on IoT. This researcher noted the vehicle junction problem (VJP) as a more specific detail of the CIPs architectural models. The VJP noted by this researcher includes 'efficient AI physical connection challenges for vehicles', 'financial issues with complex vehicle physical connections', and 'welfare and working conditions of the personnel involved in complex vehicle physical connections'. In this paper, we propose a public solution architecture for the 'electronic financial problem of complex vehicle physical connections' as a detailed task within the vehicle junction problem (VJP). The researcher sought solutions to business, consumer, and Korean social problems through technological advancement. We studied how the beneficiaries of technological development can actually benefit from it, considering the many consumers in Korean society and the many managers of small Korean companies, not only some specific companies. In order to implement the connected industrial parks (CIPs) architecture using the new technology more specifically, we noted the vehicle junction problem (VJP) within the smart factory industrial complex and the process of achieving vehicle junction performance among several electronic processes. This researcher proposes a more detailed, integrated public finance enterprise architecture within the overall CIPs architectures. The main details of the public integrated financial enterprise architecture were largely organized into four main categories: 'business', 'data', 'technique', and 'finance'.

Keywords: enterprise architecture, IT Finance, smart logistics, CIPs

Procedia PDF Downloads 139
220 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigations are not keeping up with criminal developments. Therefore, criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable in identifying crime. It has been observed that algorithms based on artificial intelligence (AI) are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims at developing a multiagent framework for digital investigations using specific intelligent software agents (ISA). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK is built with the Java Agent Development Framework and implemented using Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISA and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
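
The integrity check referred to above is, at its core, a cryptographic hash over the disk image; a minimal stand-in for such a hash-set style agent is sketched below. The file name and chunk size are placeholders, and this is not the MADIK code.

```python
# Minimal stand-in for an integrity-check / hash-set agent: stream a disk image
# in chunks and report its MD5 and SHA-256 digests.  The file name and chunk
# size are placeholders; this is not the MADIK framework's actual code.
import hashlib
import sys

def image_digests(path, chunk_size=4 * 1024 * 1024):
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "lone_wolf.E01"  # placeholder name
    md5_hex, sha_hex = image_digests(image)
    print("MD5   :", md5_hex)
    print("SHA256:", sha_hex)
```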

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 185
219 Design and Development of Fleet Management System for Multi-Agent Autonomous Surface Vessel

Authors: Zulkifli Zainal Abidin, Ahmad Shahril Mohd Ghani

Abstract:

Agent-based systems technology has been addressed as a new paradigm for conceptualizing, designing, and implementing software systems. Agents are sophisticated systems that act autonomously across open and distributed environments to solve problems. Nevertheless, it is impractical to rely on a single agent to do all the computing processes needed to solve complex problems. An increasing number of applications lately require multiple agents to work together. A multi-agent system (MAS) is a loosely coupled network of agents that interact to solve problems that are beyond the individual capacities or knowledge of each problem solver. However, a network of MAS agents still requires a main system to govern or oversee the operation of the agents in order to achieve a unified goal. We have developed a fleet management system (FMS) in order to manage the fleet of agents, plan routes for the agents, perform real-time data processing and analysis, and issue sets of general and specific instructions to the agents. This FMS should be able to perform real-time data processing, communicate with the autonomous surface vehicle (ASV) agents and generate a bathymetric map according to the data received from each ASV unit. The first algorithm is developed to communicate with the ASVs via radio communication using standard National Marine Electronics Association (NMEA) protocol sentences. Next, the second algorithm takes care of path planning, formation and pattern generation, and is tested using various sample data. Lastly, the bathymetry map generation algorithm makes use of data collected by the agents to create the bathymetry map in real time. The outcome of this research is expected to be applicable to various other multi-agent systems.
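
The first algorithm (radio communication with the ASVs using standard NMEA sentences) depends on checksum verification and field parsing of sentences such as $GPGGA...*hh; a minimal sketch of that step is shown below, using a commonly quoted documentation example sentence rather than data from the actual vessels.

```python
# Sketch of the NMEA handling used for FMS <-> ASV communication: verify the
# XOR checksum between '$' and '*', then split a GGA sentence into fields.
# The example sentence is a standard documentation sample, not vessel data.
from functools import reduce

def nmea_checksum_ok(sentence):
    body, _, given = sentence.strip().lstrip("$").partition("*")
    calc = reduce(lambda acc, ch: acc ^ ord(ch), body, 0)
    return given != "" and calc == int(given, 16)

def parse_gga(sentence):
    f = sentence.split(",")
    return {"talker": f[0].lstrip("$"), "utc": f[1],
            "lat": f[2] + f[3], "lon": f[4] + f[5],
            "fix_quality": f[6], "satellites": f[7], "altitude_m": f[9]}

msg = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print("checksum valid:", nmea_checksum_ok(msg))
print(parse_gga(msg))
```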

Keywords: autonomous surface vehicle, fleet management system, multi agent system, bathymetry

Procedia PDF Downloads 244
218 Human Capital Divergence and Team Performance: A Study of Major League Baseball Teams

Authors: Yu-Chen Wei

Abstract:

The relationship between organizational human capital and organizational effectiveness has been a common topic of interest to organization researchers. Much of this research has concluded that higher human capital predicts greater organizational outcomes. Whereas human capital research has traditionally focused on organizations, the current study turns to team-level human capital. In addition, there are no known empirical studies assessing the effect of human capital divergence on team performance. Team human capital refers to the sum of knowledge, ability, and experience embedded in team members. Team human capital divergence is defined as the variation of human capital within a team. This study is among the first to assess the role of human capital divergence as a moderator of the effect of team human capital on team performance. From the traditional perspective, team human capital represents the collective ability of all team members to solve problems and reduce operational risk. Hence, the higher the team human capital, the higher the team performance. This study further employs social learning theory to explain the relationship between team human capital and team performance. According to this theory, individuals will look for progress by learning from teammates in their teams. They expect to gain greater human capital and, in turn, to achieve high productivity, obtain greater rewards and, eventually, career success. Therefore, an individual has more chances to improve his or her capability by learning from peers if the team members have higher average human capital. As a consequence, all team members can develop a quick and effective learning path in their work environment and, in turn, enhance their knowledge, skill, and experience, leading to higher team performance. This is the first argument of this study. Furthermore, the current study argues that human capital divergence is negative for team development. Individuals with lower human capital in the team constantly feel pressure from their outstanding colleagues. Under this pressure, they cannot give full play to their own jobs and lose more and more confidence. The smarter members of the team, in turn, are reluctant to be colleagues with teammates who are not as intelligent as they are. Besides, they may have lower motivation to move forward because they are prominent enough compared with their teammates. Therefore, human capital divergence will moderate the relationship between team human capital and team performance. These two arguments were tested on 510 team-seasons drawn from Major League Baseball (1998–2014). Results demonstrate that there is a positive relationship between team human capital and team performance, which is consistent with previous research. In addition, the variation of human capital within a team weakens the above relationship. That is to say, an individual working with teammates who are comparable to them can produce better performance than working with people who are either much more or much less capable than they are.

Keywords: human capital divergence, team human capital, team performance, team level research

Procedia PDF Downloads 211
217 Selection of Qualitative Research Strategy for Bullying and Harassment in Sport

Authors: J. Vveinhardt, V. B. Fominiene, L. Jeseviciute-Ufartiene

Abstract:

Relevance of Research: Qualitative research is still regarded as highly subjective and not sufficiently scientific to achieve objective research results. However, it is agreed that a qualitative study allows revealing the hidden motives of the research participants, creating new theories, and highlighting the problem field. There is enough research done to reveal these qualitative research aspects. However, each research area has its own specificity, and sport is unique due to the image of its participants, who are understood as strong and invincible. Therefore, a sport participant might have personal issues with recognizing himself as a victim in the context of bullying and harassment. Accordingly, the researcher faces a general dilemma in getting a victim in sport to speak at all. Moreover, the many different fields of sport make it difficult to determine the sample size of the research. Thus, the corresponding problem of this research is which qualitative research strategies are the most suitable for revealing the phenomenon of bullying and harassment in sport, and why. Object of research: qualitative research strategy for bullying and harassment in sport. Purpose of the research: to analyze strategies of qualitative research and select a suitable one for bullying and harassment in sport. Methods of research: analysis of the scientific literature on the application of qualitative research to bullying and harassment. Research Results: Four main strategies are applied in qualitative research: inductive, deductive, retroductive, and abductive. Inductive and deductive strategies are commonly used when researching bullying and harassment in sport. The inductive strategy is applied as quantitative research in order to reveal and describe the prevalence of bullying and harassment in sport. The deductive strategy is used through qualitative methods in order to explain the causes of bullying and harassment and to predict the actions of the participants of bullying and harassment in sport and the possible consequences of these actions. The most commonly used qualitative method for the research of bullying and harassment in sport is the semi-structured interview, in spoken and written form. However, these methods may restrict the openness of the participants in the study when recording on a dictaphone, or yield incomplete answers when the participant responds in writing, because it is not possible to refine the answers. Qualitative research is increasingly carried out with technology-mediated research data. For example, focus group research in a closed forum allows participants to interact freely with each other because of the confidentiality of the selected participants in the study. The moderator can purposefully formulate and submit problem-solving questions to the participants. Hence, the application of intelligent technology in in-depth qualitative research can help discover new and specific information on bullying and harassment in sport. Acknowledgement: This research is funded by the European Social Fund according to the activity ‘Improvement of researchers’ qualification by implementing world-class R&D projects of Measure No. 09.3.3-LMT-K-712.

Keywords: bullying, focus group, harassment, narrative, sport, qualitative research

Procedia PDF Downloads 155
216 Synthesis of Temperature Sensitive Nano/Microgels by Soap-Free Emulsion Polymerization and Their Application in Hydrate Sediments Drilling Operations

Authors: Xuan Li, Weian Huang, Jinsheng Sun, Fuhao Zhao, Zhiyuan Wang, Jintang Wang

Abstract:

Natural gas hydrates (NGHs) have gained increasing attention as promising alternative energy sources. Hydrate-bearing formations in marine areas are highly unconsolidated and fragile, composed of weakly cemented sand-clay and silty sediments. During the drilling process, the invasion of drilling fluid can easily lead to excessive water content in the formation. This changes the soil liquid-plastic limit index, which significantly affects the formation quality, leading to wellbore instability due to the metastable character of hydrate-bearing sediments. Therefore, controlling filtrate loss into the formation during the drilling process is critical for protecting the stability of the wellbore. In this study, the temperature-sensitive nanogel P(NIPAM-co-AMPS-co-tBA) was prepared by soap-free emulsion polymerization, and its temperature-sensitive behavior was employed to achieve self-adaptive plugging in hydrate sediments. First, the effects of the added amounts of AMPS, tBA, and the cross-linker MBA on the microgel synthesis process and temperature-sensitive behavior were investigated. Results showed that, as a reactive emulsifier, AMPS can not only participate in the polymerization reaction but also act as an emulsifier to stabilize micelles and enhance the stability of the nanoparticles. The volume phase transition temperature (VPTT) of the nanogels gradually decreased with increasing content of the hydrophobic monomer tBA. An increase in the content of the cross-linking agent MBA can lead to a rise in the coagulum content and instability of the emulsion. The plugging performance of the nanogel was evaluated in a core sample with a pore size distribution in the range of 100-1000 nm. The temperature-sensitive nanogel can effectively improve the microfiltration performance of the drilling fluid. Since a combination of a series of nanogels can have a wide particle size distribution at any temperature, around 200 nm to 800 nm, the self-adaptive plugging capacity of the nanogels for hydrate sediments was revealed. The thermosensitive nanogel is a potential intelligent plugging material for drilling operations in natural gas hydrate-bearing sediments.

Keywords: temperature-sensitive nanogel, NIPAM, self-adaptive plugging performance, drilling operations, hydrate-bearing sediments

Procedia PDF Downloads 126
215 The Effects of Culture and Language on Social Impression Formation from Voice Pleasantness: A Study with French and Iranian People

Authors: L. Bruckert, A. Mansourzadeh

Abstract:

The voice has a major influence on interpersonal communication in everyday life via the perception of pleasantness. The evolutionary perspective postulates that the mechanisms underlying pleasantness judgments are universal adaptations that have evolved in the service of choosing a mate (through the process of sexual selection). From this point of view, the favorite voices would be those with more marked sexually dimorphic characteristics; in men, for example, voices with lower pitch, pitch being the main criterion. On the other hand, one can postulate that the mechanisms involved are gradually established from childhood through exposure to the environment, and thus the prosodic elements could take precedence in everyday life communication, as they convey information about the speaker's attitude (willingness to communicate, interest toward the interlocutors). Our study focuses on voice pleasantness and its relationship with social impression formation, exploring both the spectral aspects (pitch, timbre) and the prosodic ones. In our study, we recorded the voices through two vocal corpora (five vowels and a reading text) of 25 French males speaking French and 25 Iranian males speaking Farsi. French listeners (40 male/40 female) listened to the French voices and made a judgment either on the voice's pleasantness or on the speaker (judgments about his intelligence, honesty, sociability). The regression analyses from our acoustic measures showed that the prosodic elements (for example, intonation and speech rate) are the most important criteria concerning pleasantness, whatever the corpus or the listener's gender. Moreover, the correlation analyses showed that the speakers with the voices judged as the most pleasant are considered the most intelligent, sociable, and honest. The voices in Farsi were judged by 80 other French listeners (40 male/40 female), and we found the same effect of intonation on the judgment of pleasantness with the «vowel» corpus, whereas with the «text» corpus the pitch is more important than the prosody. This may suggest that voice perception contains some elements that are invariant across culture/language, whereas others are influenced by the cultural/linguistic background of the listener. In the near future, Iranian people will be asked to listen either to the French voices (half of them) or to the Farsi voices (the other half) and produce the same judgments as the French listeners. This experimental design could potentially make it possible to distinguish what is linked to culture and what is linked to language in the case of differences in voice perception.

Keywords: cross-cultural psychology, impression formation, pleasantness, voice perception

Procedia PDF Downloads 45
214 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite predicted specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
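
A drastically simplified stand-in for this pipeline is sketched below: it uses a bag of AST node types from Python's ast module instead of Code2Vec path-contexts on Java/C++, and toy labels instead of the Devign or Juliet data, so it only illustrates the overall shape of the approach, not the study's model.

```python
# Drastically simplified stand-in: parse source into an AST, turn each function
# into a bag of AST node types, and train a classifier to flag "risky" code.
# Python's ast module and toy labels replace Code2Vec path-contexts on Java/C++.
import ast
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def node_type_bag(func_src):
    tree = ast.parse(func_src)
    return Counter(type(node).__name__ for node in ast.walk(tree))

samples = [
    ("def run(cmd):\n    import os\n    os.system('ping ' + cmd)\n", 1),            # injection-like
    ("def load(path):\n    import subprocess\n    subprocess.call(path, shell=True)\n", 1),
    ("def add(a, b):\n    return a + b\n", 0),
    ("def mean(xs):\n    return sum(xs) / len(xs)\n", 0),
]
X_dicts = [node_type_bag(src) for src, _ in samples]
y = [label for _, label in samples]

vec = DictVectorizer()
X = vec.fit_transform(X_dicts)
clf = LogisticRegression(max_iter=1000).fit(X, y)

test = "def ping(host):\n    import os\n    os.system('ping ' + host)\n"
prob = clf.predict_proba(vec.transform([node_type_bag(test)]))[0, 1]
print(f"predicted risk score: {prob:.2f}")
```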

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 80
213 Combination between Intrusion Systems and Honeypots

Authors: Majed Sanan, Mohammad Rammal, Wassim Rammal

Abstract:

Today, security is a major concern. Intrusion detection systems, intrusion prevention systems, and honeypots can be used to mitigate attacks. Many researchers have proposed various IDSs (Intrusion Detection Systems) over time. Some of these combine the features of two or more IDSs and are called hybrid intrusion detection systems. Most researchers combine the features of signature-based and anomaly-based detection methodologies. For a signature-based IDS, if an attacker attacks slowly and in an organized way, the attack may go undetected, as signatures include factors based on the duration of events and the attacker's actions may not match them. Sometimes there is no signature for an unknown attack, or an attacker strikes while the signature database is being updated. Thus, signature-based IDSs fail to detect unknown attacks. Anomaly-based IDSs suffer from many false positives. There is therefore a need to hybridize IDSs so that they can overcome each other's shortcomings. In this paper, we propose a new approach to intrusion detection that is more efficient than a traditional IDS. The proposed IDS is based on honeypot technology and an anomaly-based detection methodology. We designed the IDS architecture in a packet tracer and then implemented it in real time. We discuss the experimental results: both the honeypot and the anomaly-based IDS have shortcomings on their own, but when the two technologies are hybridized, the newly proposed Hybrid Intrusion Detection System (HIDS) is capable of overcoming these shortcomings with much enhanced performance. We present a modified Hybrid Intrusion Detection System (HIDS) that combines the positive features of two different detection methodologies: honeypot methodology and anomaly-based intrusion detection. In the experiment, we first ran each intrusion detection system individually and then both together, recording data over time. From the data, we conclude that the resulting IDS detects intrusions much better than the existing IDSs.
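
To make the hybrid idea concrete, here is a minimal Python sketch of the combination described above: an anomaly component that flags traffic deviating strongly from a learned baseline, and a honeypot component that flags any contact with decoy services. The port numbers, threshold, and class names are illustrative assumptions, not details from the paper or from KFSensor.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Connection:
    src_ip: str
    dst_port: int
    bytes_per_s: float

HONEYPOT_PORTS = {2323, 8081}  # hypothetical decoy services; any contact is suspicious

class HybridIDS:
    """Toy hybrid IDS: anomaly detection on traffic rate plus honeypot tripwires."""

    def __init__(self, baseline, z_threshold=3.0):
        self.mean = statistics.mean(baseline)
        self.std = statistics.stdev(baseline)
        self.z_threshold = z_threshold

    def inspect(self, conn):
        alerts = []
        # Honeypot component: no legitimate client should ever touch a decoy port.
        if conn.dst_port in HONEYPOT_PORTS:
            alerts.append("honeypot-contact")
        # Anomaly component: traffic rate far outside the learned baseline.
        z = abs(conn.bytes_per_s - self.mean) / (self.std or 1.0)
        if z > self.z_threshold:
            alerts.append(f"anomalous-rate(z={z:.1f})")
        return alerts

ids = HybridIDS(baseline=[90.0, 110.0, 105.0, 95.0, 100.0])
print(ids.inspect(Connection("10.0.0.7", 2323, 480.0)))  # both detectors fire
```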

Keywords: security, intrusion detection, intrusion prevention, honeypot, anomaly-based detection, signature-based detection, cloud computing, kfsensor

Procedia PDF Downloads 347
212 Factors Affecting M-Government Deployment and Adoption

Authors: Saif Obaid Alkaabi, Nabil Ayad

Abstract:

Governments constantly seek to offer faster, more secure, and more efficient and effective services for their citizens. Recent changes and developments in communication services and technologies, mainly due to the Internet, have led to immense improvements in the way the governments of advanced countries carry out their internal operations. Therefore, advances in e-government services have been broadly adopted and used in various developed countries, as well as being adapted to developing countries. The implementation of these advances depends on the utilization of the most innovative data-technique structures, mainly in web-dependent applications, to enhance the main functions of government. These functions, in turn, have spread to mobile and wireless techniques, generating a new, more advanced direction called m-government. This paper discusses a selection of available m-government applications and several business modules and frameworks in various fields. In practice, m-government models, techniques, and methods have become the improved version of e-government. M-government offers the potential for applications that work better, providing citizens with services that utilize mobile communication and data models incorporating several government entities. Developing countries can benefit greatly from this innovation because a large percentage of their population is young and can adapt to new technology, and because mobile computing devices are more affordable. The use of mobile transaction models encourages effective participation through mobile portals by businesses, various organizations, and individual citizens. Although the application of m-government has great potential, it does have major limitations. These include the implementation of wireless networks and related communications, the encouragement of mobile diffusion, the administration of complicated tasks concerning security protection (including the ability to ensure the privacy of information), and the management of legal issues concerning mobile applications and the utilization of services.

Keywords: e-government, m-government, system dependability, system security, trust

Procedia PDF Downloads 361
211 Experimental and Modal Determination of the State-Space Model Parameters of a Uni-Axial Shaker System for Virtual Vibration Testing

Authors: Jonathan Martino, Kristof Harri

Abstract:

In some cases, the increase in computing resources makes simulation methods more affordable. The increase in processing speed also allows real-time analysis, or even faster test analysis, offering a real tool for test prediction and design-process optimization. Vibration tests are no exception to this trend. So-called 'virtual vibration testing' offers solutions, among others, to study the influence of specific loads, to better anticipate the boundary conditions between the exciter and the structure under test, and to study the influence of small changes in the structure under test. This article will first present the modeling of a virtual vibration test, with a main focus on the shaker model, and will afterwards present the experimental determination of its parameters. The classical way of modeling a shaker is to consider it as a simple mechanical structure augmented by an electrical circuit that makes the shaker move. The shaker is modeled as a two- or three-degree-of-freedom lumped-parameter model, while the electrical circuit takes the coil impedance and the dynamic back-electromotive force into account. The establishment of the equations of this model, describing the dynamics of the shaker, is presented in this article and is strongly related to the internal physical quantities of the shaker. Those quantities will be reduced to global parameters which will be estimated through experiments. Different experiments will be carried out in order to design an easy and practical method for the identification of the shaker parameters, leading to a fully functional shaker model. An experimental modal analysis will also be carried out to extract the modal parameters of the shaker and to combine them with the electrical measurements. Finally, the article will conclude with an experimental validation of the model.
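
For readers unfamiliar with the lumped-parameter formulation, the following Python sketch builds a single-degree-of-freedom electro-mechanical state-space model of a shaker (moving mass, suspension stiffness and damping, coil resistance and inductance, force factor Bl, and the back-EMF coupling) and simulates its acceleration response to a sinusoidal drive voltage. The reduction to one mechanical degree of freedom and all parameter values are illustrative assumptions, not the model or the identified parameters of the paper.

```python
import numpy as np
from scipy import signal

# Hypothetical lumped parameters for a single-DOF electrodynamic shaker (illustrative only).
m, c, k = 0.5, 40.0, 2.0e4          # armature mass [kg], damping [N·s/m], suspension stiffness [N/m]
R, L_coil, Bl = 2.0, 1.5e-3, 15.0   # coil resistance [ohm], inductance [H], force factor [N/A]

# States: x (displacement), v (velocity), i (coil current); input: amplifier voltage u.
A = np.array([[0.0,     1.0,           0.0],
              [-k / m, -c / m,         Bl / m],        # m*v' = -k*x - c*v + Bl*i
              [0.0,    -Bl / L_coil,  -R / L_coil]])   # L*i' = u - R*i - Bl*v  (back-EMF term)
B = np.array([[0.0], [0.0], [1.0 / L_coil]])
C = np.array([[-k / m, -c / m, Bl / m]])               # output: armature acceleration
D = np.array([[0.0]])

shaker = signal.StateSpace(A, B, C, D)
t = np.linspace(0.0, 0.1, 2000)
u = np.sin(2 * np.pi * 100.0 * t)                      # 100 Hz drive voltage
t_out, accel, _ = signal.lsim(shaker, u, t)
print(accel[:5])                                       # first acceleration samples [m/s²]
```

A two- or three-degree-of-freedom model, as used in the paper, would add states for the shaker body and its suspension, but the structure of the A, B, C matrices stays the same.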

Keywords: lumped parameters model, shaker modeling, shaker parameters, state-space, virtual vibration

Procedia PDF Downloads 249
210 Stoa: Urban Community-Building Social Experiment through Mixed Reality Game Environment

Authors: Radek Richtr, Petr Pauš

Abstract:

Social media nowadays connects people more tightly and intensively than ever, but at the same time a certain social distance, incomprehension, and loss of social integrity appear. People can be strongly connected to a person on the other side of the world yet unaware of neighbours in the same district or street. Stoa is an application of the "serious games" genre: a research augmented-reality experiment masked as a gaming environment. In the Stoa environment, the player can plant and grow a virtual (organic) structure, a Pillar, that represents the whole suburb. Everybody has their own idea of what is an acceptable, admirable, or harmful visual intervention in the area they live in; the purpose of this research experiment is to find and/or define the residents' shared subconscious spirit, the genius loci of the Pillar's vicinity, where the residents live. The appearance and evolution of Stoa's Pillars reflect the real world as perceived not only by the creator but also by other residents/players, who refine the environment with their actions. Squares, parks, patios, and streets get their living avatar depictions; investors and urban planners obtain information on the occurrence and level of motivation for reshaping the public space. As the project is in the conceptual design phase, function is one of its most important factors. Function-based modelling makes the design problem modular and structured and thus decomposes it into sub-functions or function cells. The paper discusses the current conceptual model for the Stoa project, the use of different organic structure textures and models, user interface design, a UX study, and the project's development toward its final state.

Keywords: augmented reality, urban computing, interaction design, mixed reality, social engineering

Procedia PDF Downloads 192
209 Graduates' Perceptions Towards the Image of Suan Sunandha Rajabhat University on the Graduation Rehearsal Day

Authors: Suangsuda Subjaroen, Chutikarn Sriviboon, Rosjana Chandhasa

Abstract:

This research aims to examine the graduates' overall satisfaction and the influential factors that affect the image of Suan Sunandha Rajabhat University, according to the graduates' viewpoints on the graduation rehearsal day. In accordance with the graduates' perceptions, the study concerns the levels of graduates' satisfaction, their perceived quality, perceived value, and the image of Suan Sunandha Rajabhat University. The sample group in this study comprised 1,129 graduates of Suan Sunandha Rajabhat University who attended the 2019 graduation rehearsal day. A questionnaire was used as the instrument for data collection. Using statistical software, the data were analysed with frequencies, percentages, means, standard deviations, one-way ANOVA, and multiple regression analysis. The majority of participants were graduates with a bachelor's degree, followed by master's graduates and PhD graduates, respectively. Among the participants, most graduated from the Faculty of Management Sciences, followed by the Faculty of Humanities and Social Sciences and the Faculty of Education, respectively. Overall, the graduates were satisfied with the graduation rehearsal day, and each aspect was rated at a satisfactory level. Formality, steps, and procedures were the aspects the graduates were most satisfied with, followed by the graduation rehearsal personnel and staff, and the venue and facilities. Regarding the graduates' perceptions, perceived quality was rated at a very good level, perceived value at a good level, and the image of Suan Sunandha Rajabhat University at a good level, respectively. There were differences in satisfaction levels between graduates with a bachelor's degree and graduates with a master's or doctoral degree, with statistical significance at the 0.05 level. Perceived quality and perceived value affected the image of Suan Sunandha Rajabhat University, with statistical significance at the 0.05 level. The image of Suan Sunandha Rajabhat University influenced the graduates' satisfaction level, with statistical significance at the 0.01 level.
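
As a rough illustration of the two analyses named above, the Python sketch below runs a one-way ANOVA of satisfaction across degree levels and a multiple regression of university image on perceived quality and perceived value. The group sizes, scores, and coefficients are simulated for illustration and are not the survey's data.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated 5-point satisfaction scores per degree level (illustrative group sizes).
bachelor = rng.normal(4.2, 0.4, 900)
master = rng.normal(4.4, 0.4, 180)
doctoral = rng.normal(4.5, 0.3, 49)
f_stat, p_val = stats.f_oneway(bachelor, master, doctoral)
print(f"one-way ANOVA across degree levels: F={f_stat:.2f}, p={p_val:.4f}")

# Multiple regression: university image predicted by perceived quality and perceived value.
n = 1129
quality = rng.normal(4.3, 0.5, n)
value = rng.normal(4.0, 0.5, n)
image = 0.5 * quality + 0.3 * value + rng.normal(0, 0.3, n)
X = sm.add_constant(np.column_stack([quality, value]))
print(sm.OLS(image, X).fit().summary())
```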

Keywords: university image, perceived quality, perceived value, intention to study higher education, intention to recommend the university to others

Procedia PDF Downloads 90