Search results for: geometric complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2172

1902 Factors Influencing the Adoption of Social Media as a Medium of Public Service Broadcasting

Authors: Seyed Mohammadbagher Jafari, Izmeera Shiham, Masoud Arianfar

Abstract:

The increasing use of social media for different purposes makes it important to develop an understanding of users and their attitudes toward these sites, and moreover of the uses of such sites in a broader perspective such as broadcasting. This quantitative study addressed the factors influencing the adoption of social media as a medium of public service broadcasting in the Republic of Maldives. These powerful and increasingly usable tools, accompanied by large public social media datasets, are ushering in a golden age of social science by empowering researchers to measure social behavior on a scale never before possible. The study was conducted by exploring social responses to the use of social media. A research model was developed based on previous models, namely a combined TAM, DOI, and trust model. It evaluates the influence of perceived ease of use, perceived usefulness, trust, complexity, compatibility, and relative advantage on the adoption of social media. The model was tested on a sample of 365 Maldivian people using a survey method via questionnaire. The results showed that perceived usefulness, trust, relative advantage, and complexity highly influence the adoption of social media.

Keywords: adoption, broadcasting, Maldives, social media

Procedia PDF Downloads 445
1901 A Hybrid Watermarking Scheme Using Discrete and Discrete Stationary Wavelet Transformation for Color Images

Authors: Bülent Kantar, Numan Ünaldı

Abstract:

This paper presents a new method for robust and invisible digital watermarking of color images, with color images also used as the watermark. Watermarking is performed in the frequency domain, using the discrete wavelet transform (DWT) and the discrete stationary wavelet transform (DSWT) for the frequency-domain transformation. Low, medium, and high frequency coefficients are obtained by applying a two-level DWT to the original image. A one-level DSWT is then applied separately to each frequency sub-band of the two-level DWT of the original image, and a watermark is added to every low frequency coefficient obtained from the one-level DSWT. In this way, watermarks are added to all frequency sub-bands of the two-level DWT, so four watermarks in total are embedded in the original image. To recover the watermark, the two-level DWT and the one-level DSWT are applied to both the original and the watermarked images, and the watermark is obtained from the difference of the DSWT of the low frequency coefficients. A total of four watermarks are recovered, one from each sub-band of the two-level DWT. The recovered watermarks are compared with the real watermark to obtain similarity scores, and the watermark with the highest similarity is selected. The proposed watermarking method is tested against geometric and image-processing attacks, and the results show that it is robust and invisible. Combining the features of all sub-bands of the two-level DWT, and embedding the watermark after converting it to a binary image, yields better recovery of the watermark from the watermarked image under geometric and image-processing attacks.
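
A minimal sketch of the additive embedding idea described above, written with PyWavelets. The wavelet choice ('haar'), the embedding strength alpha, the single-channel treatment, and the image size being a multiple of 8 are illustrative assumptions; the paper embeds four watermarks (one per DWT sub-band) in color images.

```python
import numpy as np
import pywt

def embed_watermark(channel, watermark_bits, alpha=0.05, wavelet="haar"):
    """Embed a binary watermark into one image channel (side length a multiple of 8)."""
    # Two-level DWT of the host channel.
    cA2, detail2, detail1 = pywt.wavedec2(channel, wavelet, level=2)
    # One-level stationary wavelet transform of the low-frequency band.
    (sA, (sH, sV, sD)), = pywt.swt2(cA2, wavelet, level=1)
    # Additive embedding of the (resized) watermark into the SWT approximation.
    wm = np.resize(np.asarray(watermark_bits, dtype=float), sA.shape)
    sA_marked = sA + alpha * wm
    # Inverse SWT, then inverse DWT, to rebuild the watermarked channel.
    cA2_marked = pywt.iswt2([(sA_marked, (sH, sV, sD))], wavelet)
    return pywt.waverec2([cA2_marked, detail2, detail1], wavelet)

def extract_watermark(original, marked, alpha=0.05, wavelet="haar"):
    """Recover the watermark as the SWT-domain difference (non-blind sketch)."""
    cA2_o, *_ = pywt.wavedec2(original, wavelet, level=2)
    cA2_m, *_ = pywt.wavedec2(marked, wavelet, level=2)
    (sA_o, _), = pywt.swt2(cA2_o, wavelet, level=1)
    (sA_m, _), = pywt.swt2(cA2_m, wavelet, level=1)
    return (sA_m - sA_o) / alpha
```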

Keywords: watermarking, DWT, DSWT, copyright protection, RGB

Procedia PDF Downloads 507
1900 The Effects of Consumer Inertia and Emotions on New Technology Acceptance

Authors: Chyi Jaw

Abstract:

Prior literature on innovation diffusion or acceptance has almost exclusively concentrated on consumers’ positive attitudes and behaviors toward new products and services. Consumers’ negative attitudes or behaviors toward innovations have received relatively little marketing attention, although they occur frequently in practice. This study discusses the psychological factors consumers experience when they try to learn or use new technologies. According to recent research, technological innovation acceptance should be considered a dynamic or mediated process. This research argues that consumers can experience inertia and emotions in the initial use of new technologies. Given such consumer psychology, the question is whether including consumer inertia (routine seeking and cognitive rigidity) and emotions increases the predictive power of a new technology acceptance model. The empirical data show that the process can change consumer emotions (independently of performance benefits) because of technology complexity and consumer inertia, and that these factors significantly affect the use of innovative technology. Finally, the study demonstrates the superior predictive power of the hypothesized model, which lets managers better predict and influence the successful diffusion of complex technological innovations.

Keywords: cognitive rigidity, consumer emotions, new technology acceptance, routine seeking, technology complexity

Procedia PDF Downloads 266
1899 Customer Adoption and Attitudes in Mobile Banking in Sri Lanka

Authors: Prasansha Kumari

Abstract:

This paper identifies and analyzes customer adoption of, and attitudes towards, mobile banking facilities. The study uses six perceived characteristics of an innovation that can form a favorable or unfavorable attitude toward it: relative advantage, compatibility, complexity, trialability, risk, and observability. Collected data were analyzed using the Pearson chi-square test. The results showed that mobile banking users were predominantly male, and that there is a growing trend among young, educated customers towards converting to mobile banking in Sri Lanka. The research outcomes suggest that all six factors are statistically highly significant in influencing mobile banking adoption and attitude formation towards mobile banking in Sri Lanka. The major reasons for adopting mobile banking services are the accessibility and availability of services regardless of time and place. Over 75 percent of the respondents mentioned that savings in time and effort and the low financial costs of conducting mobile banking were advantageous. The issue of security was found to be the most important factor affecting consumer adoption and attitude formation towards mobile banking. The main barriers to mobile banking were the lack of technological skills, the traditional cash-carry banking culture, and the lack of awareness and insufficient guidance in using mobile banking.

Keywords: compatibility, complexity, mobile banking, observability, risk

Procedia PDF Downloads 169
1898 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy

Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu

Abstract:

The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on the reliability analysis of onboard sensors to evaluate their performance in terms of location accuracy over time. The analysis uses field failure data and employs the Weibull distribution to determine reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and by identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is used to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, which can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of sensors and to accurately ascertain the duration of the different phases of the lifetime and the time required for stabilization. This approach also helps in understanding whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and the wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve-fitting analysis. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor performance with regard to location accuracy, contributing to enhanced accuracy in satellite-based applications.
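
A minimal sketch of the two analysis steps described above: a Laplace trend test on failure times followed by a Weibull fit and a reliability estimate. The synthetic failure times, the observation window T, and the SciPy-based fit are illustrative assumptions; the study works with actual location-accuracy failure data.

```python
import numpy as np
from scipy import stats

failure_times = np.array([120.0, 340.0, 610.0, 900.0, 1250.0, 1700.0])  # days (illustrative)
T = 2000.0  # end of the observation window (days)

# Laplace trend test: U < 0 suggests reliability growth (infant mortality fading),
# U > 0 suggests deterioration (wear-out), U near 0 suggests a stable failure rate.
n = len(failure_times)
U = (failure_times.sum() / n - T / 2.0) / (T * np.sqrt(1.0 / (12.0 * n)))
print(f"Laplace statistic U = {U:.2f}")

# Weibull fit: shape (beta) < 1 indicates infant mortality, ~1 random failures,
# > 1 wear-out.
beta, loc, eta = stats.weibull_min.fit(failure_times, floc=0)
print(f"Weibull shape beta = {beta:.2f}, scale eta = {eta:.1f}")

# Reliability at a mission time t under the fitted Weibull model.
t = 500.0
print(f"R({t:.0f}) = {np.exp(-(t / eta) ** beta):.3f}")
```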

Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis

Procedia PDF Downloads 43
1897 Designing Presentational Writing Assessments for the Advanced Placement World Language and Culture Exams

Authors: Mette Pedersen

Abstract:

This paper outlines the criteria that assessment specialists use when they design the 'Persuasive Essay' task for the four Advanced Placement World Language and Culture Exams (AP French, German, Italian, and Spanish). The 'Persuasive Essay' is a free-response, source-based, standardized measure of presentational writing. Each 'Persuasive Essay' item consists of three sources (an article, a chart, and an audio) and a prompt, which is a statement of the topic phrased as an interrogative sentence. Due to its richness of source materials and the amount of time that test takers are given to prepare for and write their responses (a total of 55 minutes), the 'Persuasive Essay' is the free-response task on the AP World Language and Culture Exams that goes to the greatest lengths to unleash the test takers' proficiency potential. The author focuses on the work that goes into designing the 'Persuasive Essay' task, outlining best practices for the selection of topics and sources, the interplay that needs to be present among the sources, and the thinking behind the articulation of prompts for the 'Persuasive Essay' task. Using released 'Persuasive Essay' items from the AP World Language and Culture Exams and accompanying data on test taker performance, the author shows how different passages, and features of passages, have succeeded (and sometimes not succeeded) in eliciting writing proficiency among test takers over time. Data from approximately 215,000 test takers per year from 2014 to 2017 and approximately 35,000 test takers per year from 2012 to 2013 form the basis of this analysis. The conclusion of the study is that test taker performance improves significantly when the sources that test takers are presented with express directly opposing viewpoints. Test taker performance also improves when the interrogative prompt that the test takers respond to is phrased as a yes/no question. Finally, an analysis of the linguistic difficulty and complexity levels of the printed sources reveals that test taker performance does not decrease when the complexity level of the article in the 'Persuasive Essay' increases. This last text complexity analysis is performed with the help of the 'ETS TextEvaluator' tool and the 'Complexity Scale for Information Texts (Scale)', two tools which, in combination, provide a rubric and a fully-automated technology for evaluating nonfiction and informational texts in English translation.

Keywords: advanced placement world language and culture exams, designing presentational writing assessments, large-scale standardized assessments of written language proficiency, source-based language testing

Procedia PDF Downloads 109
1896 Component Interface Formalization in Robotic Systems

Authors: Anton Hristozov, Eric Matson, Eric Dietz, Marcus Rogers

Abstract:

Components are heavily used in many software systems, including robotics systems. The growth of sophistication and diversity of new capabilities for robotic systems presents new challenges to their architectures. Their complexity is growing exponentially with the advent of AI, smart sensors, and the complex tasks they have to accomplish. Such complexity requires a more rigorous approach to the creation, use, and interoperability of software components. The issue is exacerbated because robotic systems are becoming more and more reliant on third-party components for certain functions. In order to achieve this kind of interoperability, including dynamic component replacement, we need a way to standardize their interfaces. A formal approach is urgently needed to specify what the interface of a robotic software component should contain. This study performs an analysis of the issue and presents a universal and generic approach to standardizing component interfaces for robotic systems. Our approach is inspired by well-established robotic architectures such as ROS, PX4, and ArduPilot. The study is also applicable to other software systems that share similar characteristics with robotic systems. We consider the use of JSON or Domain Specific Languages (DSLs) developed with tools such as ANTLR, together with automatic code and configuration file generation for frameworks such as ROS and PX4. A case study with ROS 2 is presented as a proof of concept for the proposed methodology.
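
A minimal sketch of what a formalized component interface could look like as JSON, with a simple structural validity check. The field names (component, inputs, outputs, services, topic, type, rate_hz) and the example message types are illustrative assumptions, not the schema proposed in the paper; in practice such a description would drive code and configuration generation for frameworks like ROS 2 or PX4.

```python
import json

INTERFACE_SPEC = json.loads("""
{
  "component": "obstacle_detector",
  "version": "1.0",
  "inputs":  [{"topic": "/camera/points", "type": "sensor_msgs/msg/PointCloud2", "rate_hz": 15}],
  "outputs": [{"topic": "/obstacles", "type": "vision_msgs/msg/Detection3DArray", "rate_hz": 15}],
  "services": [{"name": "/obstacle_detector/reset", "type": "std_srvs/srv/Trigger"}]
}
""")

def validate_interface(spec):
    """Check that the minimal structural fields of an interface spec are present."""
    required = {"component", "version", "inputs", "outputs"}
    missing = required - spec.keys()
    if missing:
        raise ValueError(f"interface spec missing fields: {sorted(missing)}")
    for port in spec["inputs"] + spec["outputs"]:
        if not {"topic", "type"} <= port.keys():
            raise ValueError(f"port missing 'topic' or 'type': {port}")
    return True

validate_interface(INTERFACE_SPEC)  # raises if the description is malformed
```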

Keywords: CPS, robots, software architecture, interface, ROS, autopilot

Procedia PDF Downloads 57
1895 Maximum Power and Bone Variables in Young Adult Men

Authors: Anthony Khawaja, Jacques Prioux, Ghassan Maalouf, Rawad El Hage

Abstract:

The regular practice of physical activities characterized by significant mechanical stresses stimulates bone formation and improves bone mineral density (BMD) in the most solicited sites. The purpose of this study was to explore the relationships between maximum power and bone variables in a group of young adult men. Identification of new determinants of BMD, bone mineral content (BMC) and hip geometric indices in young adult men would allow screening and early management of future cases of osteopenia and osteoporosis. Fifty-three young adult men (18-35 years) voluntarily participated in this study. Weight and height were measured, and body mass index was calculated. Body composition, BMC and BMD were determined for each individual by dual-energy X-ray absorptiometry (DXA; GE Healthcare, Madison, WI) at the whole body (WB), lumbar spine (L1-L4), total hip (TH), and femoral neck (FN). FN cross-sectional area (CSA), strength index (SI), buckling ratio (BR), FN section modulus (Z), cross-sectional moment of inertia (CSMI) and L1-L4 TBS were also evaluated by DXA. The vertical jump was evaluated using a field test (Sargent test). Two main parameters were retained: vertical jump performance (cm) and power (W). The subjects performed three jumps with 2 minutes of recovery between jumps, and the highest vertical jump was selected. Maximum power (Pmax, in watts) was calculated. Maximum power was positively correlated to WB BMD (r = 0.41; p < 0.01), WB BMC (r = 0.65; p < 0.001), L1-L4 BMC (r = 0.54; p < 0.001), FN BMC (r = 0.35; p < 0.01), TH BMC (r = 0.50; p < 0.001), CSMI (r = 0.50; p < 0.001), and CSA (r = 0.33; p < 0.05). Vertical jump was positively correlated to WB BMC (r = 0.31; p < 0.05), L1-L4 BMC (r = 0.40; p < 0.01), and CSMI (r = 0.29; p < 0.05). The current study suggests that maximum power is a positive determinant of BMD, BMC and hip geometric indices in young adult men. In addition, it also shows that maximum power is a stronger positive determinant of bone variables than vertical jump in this population. Implementing strategies to increase maximum power in young adult men may be useful for preventing osteoporotic fractures later in life.
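
A small sketch of the power calculation and correlation step. The abstract does not state which estimation equation was used to derive maximum power from the jump test; the widely cited Sayers equation is shown here purely as an illustrative assumption, and the data values are invented for demonstration.

```python
import numpy as np
from scipy import stats

def sayers_peak_power(jump_height_cm, body_mass_kg):
    """Peak power (W) from a vertical jump, Sayers et al. (1999) equation (assumed here)."""
    return 60.7 * jump_height_cm + 45.3 * body_mass_kg - 2055.0

jump_cm = np.array([42.0, 51.0, 38.0, 47.0])          # best of three jumps (illustrative)
mass_kg = np.array([72.0, 81.0, 68.0, 90.0])
wb_bmc = np.array([2650.0, 2980.0, 2500.0, 3100.0])   # whole-body BMC in g (illustrative)

p_max = sayers_peak_power(jump_cm, mass_kg)
r, p_value = stats.pearsonr(p_max, wb_bmc)            # the study reports e.g. r = 0.65 for WB BMC
print(f"r = {r:.2f}, p = {p_value:.3f}")
```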

Keywords: bone variables, maximum power, osteopenia, osteoporosis, vertical jump, young adult men

Procedia PDF Downloads 156
1894 A Configurational Approach to Understand the Effect of Organizational Structure on Absorptive Capacity: Results from PLS and fsQCA

Authors: Murad Ali, Anderson Konan Seny Kan, Khalid A. Maimani

Abstract:

Based on the theory of organizational design and the theory of knowledge, this study uses complexity theory to explain and better understand the causal impacts of various patterns of organizational structural factors stimulating absorptive capacity (ACAP). Organizational structure can be thought of as a heterogeneous configuration in which various components are often intertwined. This study argues that the impact of the traditional variables that define a firm’s organizational structure (centralization, formalization, complexity and integration) on ACAP is better understood in terms of set-theoretic relations rather than correlations. The study uses a sample of 347 firms from multiple industrial sectors in South Korea. The results from PLS-SEM support all the hypothesized relationships among the variables. However, the fsQCA results suggest the possible configurations of centralization, formalization, complexity, integration, age, size, industry and revenue factors that contribute to a high level of ACAP. The results from fsQCA demonstrate the usefulness of configurational approaches in helping to understand equifinality in the field of knowledge management. A recent fsQCA procedure based on a modeling subsample and a holdout subsample is used in this study to assess the predictive validity of the model under investigation, and the same type of predictive analysis is also performed with PLS-SEM. These analyses reveal the good relevance of the causal solutions leading to a high level of ACAP. Overall, the results obtained from combining PLS-SEM and fsQCA are very insightful. In particular, they can help managers link internal organizational structure with ACAP; in other words, managers can understand in detail how different components of organizational structure can increase the level of ACAP. The configurational approach may trigger new insights that help managers prioritize selection criteria and understand the interactions between organizational structure and ACAP. The paper also discusses the theoretical and managerial implications arising from these findings.

Keywords: absorptive capacity, organizational structure, PLS-SEM, fsQCA, predictive analysis, modeling subsample, holdout subsample

Procedia PDF Downloads 302
1893 Simulation Model for Optimizing Energy in Supply Chain Management

Authors: Nazli Akhlaghinia, Ali Rajabzadeh Ghatari

Abstract:

In today's world, with increasing environmental awareness, firms are facing severe pressure from various stakeholders, including the government and customers, to reduce their harmful effects on the environment. Over the past few decades, the increasing effects of global warming, climate change, waste, and air pollution have drawn the attention of experts worldwide to the green supply chain and to the search for optimal, greener solutions. Green supply chain management (GSCM) plays an important role in motivating the sustainability of the organization. With increasing environmental concerns, the main objective of this research is to use the systems thinking methodology and Vensim software to design a system dynamics model for the green supply chain and to observe its behavior. Using this methodology, we look for the effects of a green supply chain structure on the behavioral dynamics of the output variables. We simulate the complexity of GSCM over a period of 30 months and observe the behavior of variables including sustainability, the provision of green products, and the reduction of energy consumption and, consequently, of pollution.
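
A minimal stock-and-flow sketch of the kind of 30-month behavior such a model observes. The structure (one "green adoption" stock feeding back on monthly energy consumption and cumulative pollution) and all coefficients are illustrative assumptions, not the actual Vensim model of the study.

```python
import numpy as np

months = 30
green_adoption = np.zeros(months)   # stock: fraction of operations made green
energy = np.zeros(months)           # monthly energy consumption (arbitrary units)
pollution = np.zeros(months)        # cumulative pollution (arbitrary units)

green_adoption[0], energy[0] = 0.05, 100.0
adoption_rate, energy_saving, emission_factor = 0.12, 0.4, 0.8

for t in range(1, months):
    # Reinforcing loop: adoption grows toward saturation at 1.0.
    green_adoption[t] = green_adoption[t-1] + adoption_rate * green_adoption[t-1] * (1 - green_adoption[t-1])
    # Energy use falls as green practices spread.
    energy[t] = 100.0 * (1 - energy_saving * green_adoption[t])
    # Pollution accumulates in proportion to energy consumed.
    pollution[t] = pollution[t-1] + emission_factor * energy[t]

print(f"month 30: adoption={green_adoption[-1]:.2f}, energy={energy[-1]:.1f}, pollution={pollution[-1]:.0f}")
```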

Keywords: supply chain management, green supply chain management, system dynamics, energy consumption

Procedia PDF Downloads 108
1892 Scalable Systolic Multiplier over Binary Extension Fields Based on Two-Level Karatsuba Decomposition

Authors: Chiou-Yng Lee, Wen-Yo Lee, Chieh-Tsai Wu, Cheng-Chen Yang

Abstract:

Shifted polynomial basis (SPB) is a variation of the polynomial basis representation. SPB has potential for efficient bit-level and digit-level implementations of multiplication over binary extension fields with subquadratic space complexity. For efficient implementation of pairing computation with large finite fields, this paper presents a new SPB multiplication algorithm based on Karatsuba schemes and uses it to derive a novel scalable multiplier architecture. Analytical results show that the proposed multiplier provides a trade-off between space and time complexities. The proposed multiplier is modular, regular, and suitable for very-large-scale integration (VLSI) implementations. It involves less area complexity than multipliers based on traditional decomposition methods and is, therefore, more suitable for efficient hardware implementation of pairing-based cryptography and elliptic curve cryptography (ECC) in constraint-driven applications.
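
A minimal software sketch of two-level Karatsuba decomposition over GF(2)[x], with polynomials packed into Python integers so that XOR plays the role of addition. This only illustrates the subquadratic splitting idea; the paper maps the decomposition onto a digit-serial systolic hardware architecture in a shifted polynomial basis, and a field multiplication would additionally reduce the result modulo an irreducible polynomial.

```python
def clmul(a, b):
    """Schoolbook carry-less (GF(2)[x]) multiplication of bit-packed polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a, b, n, levels=2):
    """Karatsuba over GF(2)[x] on n-bit operands, recursing 'levels' times."""
    if levels == 0 or n <= 8:
        return clmul(a, b)
    m = n // 2
    mask = (1 << m) - 1
    a0, a1 = a & mask, a >> m
    b0, b1 = b & mask, b >> m
    low = karatsuba_gf2(a0, b0, m, levels - 1)                        # a0*b0
    high = karatsuba_gf2(a1, b1, n - m, levels - 1)                   # a1*b1
    mid = karatsuba_gf2(a0 ^ a1, b0 ^ b1, n - m, levels - 1) ^ low ^ high
    return low ^ (mid << m) ^ (high << (2 * m))                       # recombine with shifts and XORs

assert karatsuba_gf2(0xB3D5, 0x9F21, 16) == clmul(0xB3D5, 0x9F21)
```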

Keywords: digit-serial systolic multiplier, elliptic curve cryptography (ECC), Karatsuba algorithm (KA), shifted polynomial basis (SPB), pairing computation

Procedia PDF Downloads 333
1891 Evaluation of the Performance Measures of Two-Lane Roundabout and Turbo Roundabout with Varying Truck Percentages

Authors: Evangelos Kaisar, Anika Tabassum, Taraneh Ardalan, Majed Al-Ghandour

Abstract:

The economy of any country depends on its ability to accommodate the movement and delivery of goods. The demand for goods movement and services increases truck traffic on highways and inside cities. The livability of most cities is directly affected by the congestion and environmental impacts of trucks, which are the backbone of the urban freight system. Better operation of heavy vehicles on highways and arterials could improve the network’s efficiency and reliability. In many cases, roundabouts can respond better than at-grade intersections, enabling traffic operations with increased safety for both cars and heavy vehicles. The recently emerged turbo-roundabout concept is a viable alternative to the two-lane roundabout, aiming to improve traffic efficiency. The primary objective of this study is to evaluate the operation and performance of an at-grade signalized intersection, a conventional two-lane roundabout, and a basic turbo roundabout for freight movements. To analyze and evaluate the performance of the signalized intersection and the roundabouts, microsimulation models were developed in PTV VISSIM. The networks chosen for this analysis allow changes in vehicle-movement performance to be evaluated under different geometric and flow scenarios. Several scenarios were examined to assess the impacts of the different geometric designs on vehicle movements. Overall traffic efficiency depends on the geometric layout of the intersection, the traffic congestion rate, the hourly volume, the frequency of heavy vehicles, the type of road, and the ratio of major-street to side-street traffic. Traffic performance was determined by evaluating the delay time, number of stops, and queue length of each intersection for varying truck percentages. The results indicate that turbo-roundabouts can replace signalized intersections and two-lane roundabouts only when traffic demand is low, even with high truck volumes. More specifically, two-lane roundabouts show shorter queue lengths than signalized intersections and turbo-roundabouts. For instance, in the scenario with the highest volume and maximum truck and left-turn movements, the signalized intersection has 3 times, and the turbo-roundabout 5 times, the queue length of a two-lane roundabout on major roads; similarly, on minor roads, signalized intersections and turbo-roundabouts have 11 times longer queue lengths than two-lane roundabouts for the same scenario. Across all developed scenarios, as traffic demand decreases, the queue lengths of turbo-roundabouts shorten, which shows that turbo roundabouts perform well for low and medium traffic demand. Finally, this study provides recommendations on the conditions under which the different intersection types outperform each other.

Keywords: at-grade intersection, simulation, turbo-roundabout, two-lane roundabout

Procedia PDF Downloads 111
1890 Hardware Implementation of Local Binary Pattern Based Two-Bit Transform Motion Estimation

Authors: Seda Yavuz, Anıl Çelebi, Aysun Taşyapı Çelebi, Oğuzhan Urhan

Abstract:

Nowadays, demand for devices capable of real-time video transmission is ever-increasing, and high-resolution videos have made efficient video compression techniques an essential component of capturing and transmitting video data. Motion estimation has a critical role in encoding raw video; hence, various motion estimation methods have been introduced to compress video efficiently. Motion estimation methods based on low bit-depth representations simplify the computation of the matching criterion and thus provide a small hardware footprint. In this paper, a hardware implementation of a two-bit-transform-based low-complexity motion estimation method using a local binary pattern approach is proposed. Image frames are represented in two-bit depth instead of full depth by using the local binary pattern as the binarization approach, and the binarization part of the hardware architecture is explained in detail. Experimental results demonstrate the difference between the proposed hardware architecture and the architectures of well-known low-complexity motion estimation methods in terms of important aspects such as resource utilization and energy and power consumption.
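
A small sketch of the low bit-depth matching idea: frames are reduced to two bit planes and the matching criterion becomes a cheap XOR and population count. The binarization rule below (block mean and mean-deviation thresholds) is an illustrative stand-in for the paper's LBP-based rule.

```python
import numpy as np

def two_bit_transform(block):
    """Return two boolean bit planes for an image block (uint8/float array)."""
    mean = block.mean()
    dev = np.abs(block - mean).mean()
    bit0 = block > mean           # coarse plane
    bit1 = block > mean + dev     # fine plane
    return bit0, bit1

def matching_cost(ref_block, cand_block):
    """Number of non-matching points between two-bit representations."""
    r0, r1 = two_bit_transform(ref_block)
    c0, c1 = two_bit_transform(cand_block)
    return int(np.count_nonzero(r0 ^ c0) + np.count_nonzero(r1 ^ c1))

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (16, 16))
print(matching_cost(ref, np.roll(ref, 1, axis=1)))  # cost of a 1-pixel-shift candidate block
```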

Keywords: binarization, hardware architecture, local binary pattern, motion estimation, two-bit transform

Procedia PDF Downloads 274
1889 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm

Authors: Lydia Novozhilova, Vladimir Urazhdin

Abstract:

An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. Conventional approaches to optimization generally start with pre-defined geometric parameters of the vessel and then employ analytic or numeric calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain is identified in the multidimensional parameter space; every point in the feasibility domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined in this domain is formulated and optimized. The broader applicability of the suggested methodology is maximized by using the Support Vector Machine (SVM) classification algorithm from machine learning to identify the feasible design region. Training data for the SVM classifier are obtained using the Simulation package of SOLIDWORKS®. Based on these data, the SVM algorithm produces a curvilinear boundary separating admissible and inadmissible sets of design parameters with maximal margins. Then optimization of the vessel parameters in the feasibility domain is performed using standard algorithms for constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. As a functional constraint, the von Mises stress criterion is used, but any other stability constraint admitting a mathematical formulation can be incorporated into the proposed approach. The suggested methodology has good potential for reducing the design time needed to find optimal parameters of thin-walled vessels under uniform external pressure.
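
A minimal sketch of the two-step approach: (1) learn the feasibility boundary with an SVM classifier trained on simulated pass/fail designs, and (2) optimize an objective inside the learned feasible region. The two design parameters, the synthetic feasibility rule, and the mass-like objective are illustrative assumptions standing in for the SOLIDWORKS simulation data and the von Mises criterion.

```python
import numpy as np
from sklearn.svm import SVC
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.uniform([2.0, 50.0], [10.0, 200.0], size=(300, 2))  # [wall thickness mm, ring spacing mm]
feasible = (X[:, 0] / X[:, 1] > 0.04).astype(int)           # synthetic stand-in for the stress check

# Learn a curvilinear feasibility boundary with maximal margin.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, feasible)

def objective(x):
    """Mass-like quantity to minimize (illustrative)."""
    thickness, spacing = x
    return thickness + 200.0 / spacing

# Stay on the feasible side of the learned boundary (decision_function >= 0).
constraints = [{"type": "ineq", "fun": lambda x: clf.decision_function([x])[0]}]
result = minimize(objective, x0=[6.0, 100.0], bounds=[(2.0, 10.0), (50.0, 200.0)],
                  constraints=constraints, method="SLSQP")
print(result.x, result.fun)
```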

Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier

Procedia PDF Downloads 298
1888 Healthcare Big Data Analytics Using Hadoop

Authors: Chellammal Surianarayanan

Abstract:

The healthcare industry generates large amounts of data driven by various needs such as record keeping, physicians’ prescriptions, medical imaging, sensor data, Electronic Patient Records (EPR), laboratories, pharmacies, etc. Healthcare data are so big and complex that they cannot be managed by conventional hardware and software. The complexity of healthcare big data arises from the large volume of data, the velocity with which the data are accumulated, and the different varieties of data: structured, semi-structured, and unstructured. Despite this complexity, if the trends and patterns that exist within the big data are uncovered and analyzed, higher-quality healthcare can be provided at lower cost. Hadoop is an open-source software framework for the distributed processing of large data sets across clusters of commodity hardware using a simple programming model. The core components of Hadoop include the Hadoop Distributed File System, which offers a way to store large amounts of data across multiple machines, and MapReduce, which offers a way to process large data sets with a parallel, distributed algorithm on a cluster. The Hadoop ecosystem also includes various other tools such as Hive (a SQL-like query language), Pig (a higher-level query language for MapReduce), and HBase (a columnar data store). This paper analyzes how healthcare big data can be processed and analyzed using the Hadoop ecosystem.
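
A minimal Hadoop Streaming sketch of the MapReduce pattern mentioned above: counting patient records per diagnosis code from tab-separated EPR lines. The input format (patient_id, then diagnosis code, tab-separated) is an illustrative assumption; the two scripts would be submitted to the cluster with something like `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py ...`.

```python
import sys

# --- mapper.py ---
def mapper(stream=sys.stdin):
    for line in stream:
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 2:
            print(f"{fields[1]}\t1")          # emit (diagnosis, 1)

# --- reducer.py ---
def reducer(stream=sys.stdin):
    current, count = None, 0
    for line in stream:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{count}")  # keys arrive sorted from the shuffle phase
            current, count = key, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")
```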

Keywords: big data analytics, Hadoop, healthcare data, towards quality healthcare

Procedia PDF Downloads 378
1887 Pricing European Options under Jump Diffusion Models with Fast L-stable Padé Scheme

Authors: Salah Alrabeei, Mohammad Yousuf

Abstract:

The goal of option pricing theory is to help investors manage their money, enhance returns, and control their financial future by theoretically valuing their options. Modeling option prices with Black–Scholes-type models with jumps makes it possible to take market jumps into account; however, such models can only be solved numerically. Furthermore, not all numerical methods are efficient for these models, because the payoffs are non-smooth or have discontinuous derivatives at the exercise price. In this paper, the exponential time differencing (ETD) method is applied to solve the partial integro-differential equations arising in pricing European options under Merton’s and Kou’s jump-diffusion models. A Fast Fourier Transform (FFT) algorithm is used as the matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial-fraction form of Padé schemes is used to avoid the cost of inverting polynomials of matrices. These two tools yield efficient and accurate numerical solutions. We construct a parallel and easy-to-implement version of the numerical scheme. Numerical experiments are given to show how fast and accurate our scheme is.
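
A small sketch of the FFT trick quoted above: a matrix-vector product with a circulant matrix (the kind of structured operator obtained when the jump integral is discretized on a uniform grid) costs O(M log M) instead of O(M²), because a circulant matrix is diagonalized by the DFT. The kernel values are illustrative.

```python
import numpy as np
from scipy.linalg import circulant

M = 1024
first_col = np.exp(-0.5 * np.arange(M))      # illustrative kernel samples
v = np.random.default_rng(0).standard_normal(M)

# O(M^2): explicit dense matrix-vector product.
dense = circulant(first_col) @ v

# O(M log M): circular convolution via the FFT.
fast = np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(v)))

assert np.allclose(dense, fast)
```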

Keywords: integro-differential equations, L-stable methods, pricing European options, jump-diffusion model

Procedia PDF Downloads 116
1886 Cryptic Diversity: Identifying Two Morphologically Similar Species of Invasive Apple Snails in Peninsular Malaysia

Authors: Suganiya Rama Rao, Yoon-Yen Yow, Thor-Seng Liew, Shyamala Ratnayeke

Abstract:

Invasive snails in the genus Pomacea have spread across Southeast Asia, including Peninsular Malaysia. Apart from significant economic costs to wetland crops, very little is known about the snails’ effects on native species and on wetland function through their alteration of macrophyte communities. This study was conducted to establish diagnostic characteristics of Pomacea species in the Malaysian environment using genetic and morphological criteria. Snails were collected from eight localities in the northern and central regions of Peninsular Malaysia. The mitochondrial COI gene of 52 adult snails was amplified and sequenced. Maximum likelihood analysis was used to analyse species identity and assess phylogenetic relationships among snails from different geographic locations. Shells of the two species were compared using geometric morphometric and covariance analyses. Shell height accounted for most of the observed variation between P. canaliculata and P. maculata, with the latter possessing a smaller mean ratio of shell height to aperture height (p < 0.0001) and of shell height to shell width (p < 0.0001). Genomic and phylogenetic analyses demonstrated the presence of two monophyletic taxa, P. canaliculata and P. maculata, in the Peninsular Malaysia samples. P. maculata co-occurred with P. canaliculata in 5 localities, but samples from 3 localities contained only P. canaliculata. This study is the first to confirm the presence of two of the most invasive species of Pomacea in Peninsular Malaysia using a genomic approach. P. canaliculata appears to be the more widespread species. Despite statistical differences, both quantitative and qualitative morphological characteristics show much interspecific overlap and intraspecific variability; thus morphology alone cannot reliably verify species identity. Molecular techniques for distinguishing between these two highly invasive Pomacea species are needed to understand their specific ecological niches and to develop effective protocols for their management.

Keywords: Pomacea canaliculata, Pomacea maculata, invasive species, phylogenetic analysis, geometric morphometric analysis

Procedia PDF Downloads 234
1885 Investigating a Modern Accident Analysis Model for Textile Building Fires through Numerical Reconstruction

Authors: Mohsin Ali Shaikh, Weiguo Song, Rehmat Karim, Muhammad Kashan Surahio, Muhammad Usman Shahid

Abstract:

Fire investigations face challenges due to the complexity of fire development, and real-world accidents lack repeatability, making it difficult to apply standardized approaches. The unpredictable nature of fires and the unique conditions of each incident contribute to this complexity, requiring innovative methods and tools for effective analysis and reconstruction. This study proposes a modern accident analysis model based on numerical reconstruction for fire investigation in textile buildings. The method employs computer simulation to enhance the overall effectiveness of textile-building investigations. The materials and evidence collected from past incidents are used to reconstruct fire occurrences, progressions, and catastrophic processes. The approach is demonstrated through a case study involving a tragic textile factory fire in Karachi, Pakistan, which claimed 257 lives. The reconstruction method proves invaluable for determining fire origins, assessing losses, establishing accountability, and, significantly, providing preventive insights for complex fire incidents.

Keywords: fire investigation, numerical simulation, fire safety, fire incident, textile building

Procedia PDF Downloads 41
1884 Applying Multiple Kinect on the Development of a Rapid 3D Mannequin Scan Platform

Authors: Shih-Wen Hsiao, Yi-Cheng Tsao

Abstract:

In the fields of reverse engineering and the creative industries, applying a 3D scanning process to obtain the geometric form of objects is a mature and common technique. For instance, organic objects such as faces and non-organic objects such as products can be scanned to acquire geometric information for further application. However, although the data resolution of 3D scanning devices is increasing and complementary applications are ever more abundant, the penetration rate of 3D scanning among the public is still limited by the relatively high price of the devices. On the other hand, the Kinect, released by Microsoft, is known for its powerful functions, considerably lower price, and complete technology and database support; therefore, related studies can be carried out with the Kinect at acceptable cost and data precision. Because the Kinect uses an optical mechanism to extract depth information, it is limited by the straight-line path of light. Thus, when a single Kinect is applied for 3D scanning, multiple viewing angles are required sequentially to obtain the complete 3D information of the object, and an integration process that combines the 3D data from different angles by certain algorithms is also required. This sequential scanning process takes considerable time, and the complex integration process often encounters technical problems. Therefore, this paper applies multiple Kinects simultaneously to develop a rapid 3D mannequin scan platform and proposes suggestions on the number and angles of the Kinects. A method of establishing the coordinate system based on the relation between the mannequin and the specifications of the Kinect is proposed, and a suggestion for the angles and number of Kinects is also described. An experiment applying multiple Kinects to the scanning of a 3D mannequin was built with the Microsoft API, and the results show that the time required for scanning and the technical threshold can be reduced in the fashion and garment design industries.
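
A minimal sketch of the coordination step: depth maps from several Kinects are back-projected to point clouds and mapped into one mannequin-centred frame using known extrinsic poses. The intrinsics, the number of sensors, and their poses below are illustrative assumptions; real calibration values would come from the physical setup.

```python
import numpy as np

FX = FY = 365.0          # illustrative depth-camera focal lengths (pixels)
CX, CY = 256.0, 212.0    # illustrative principal point

def depth_to_points(depth_m):
    """Back-project a depth image (metres) into camera-frame 3D points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.ravel()
    x = (u.ravel() - CX) * z / FX
    y = (v.ravel() - CY) * z / FY
    return np.column_stack([x, y, z])[z > 0]

def to_world(points, rotation, translation):
    """Apply a sensor's extrinsic pose (world = R @ p + t)."""
    return points @ rotation.T + translation

# Example: two sensors facing each other around the mannequin (poses assumed).
poses = [
    (np.eye(3), np.array([0.0, 0.0, 0.0])),
    (np.diag([-1.0, 1.0, -1.0]), np.array([0.0, 0.0, 2.5])),  # rotated 180 degrees about Y
]
depths = [np.full((424, 512), 1.2), np.full((424, 512), 1.3)]  # placeholder depth frames
cloud = np.vstack([to_world(depth_to_points(d), R, t) for d, (R, t) in zip(depths, poses)])
print(cloud.shape)
```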

Keywords: 3D scan, depth sensor, fashion and garment design, mannequin, multiple Kinect sensor

Procedia PDF Downloads 343
1883 The Material-Process Perspective: Design and Engineering

Authors: Lars Andersen

Abstract:

The development of design and engineering in large construction projects is characterized by an increasing flattening of formal structures, extended use of parallel and integrated processes ('Integrated Concurrent Engineering'), and an increased number of expert disciplines. The integration process is based on ongoing collaboration, dialogue, intercommunication, and comments on each other's work (iterations). This process, based on reciprocal communication between actors and disciplines, triggers value creation. However, communication between equals is not in itself sufficient to create effective decision-making; the complexity of the process and time pressure contribute to an increased risk of a decision deficit and loss of process control. The paper refers to a study that aims to develop a resilient decision-making system that does not conflict with communication processes based on equality between the disciplines in the process. The study covers the construction of a hospital, following the phases of design, engineering, and physical building. The research method is a combination of formative process research, process tracking, and phenomenological analysis. The study traced challenges and problems in the building process to the projection substrates (drawings and models) and further to the organization of the engineering and design phase. A comparative analysis of traditional and new ways of organizing the design work made it possible to uncover an implicit material order, or structure, in the process. This uncovering implied the development of a material-process perspective. According to this perspective, the complexity of the process is rooted in material-functional differentiation. This differentiation presupposes a structuring material (the skeleton of the building) that coordinates the other types of material. Each expert discipline's competence is related to one or a set of materials. The architect, the consulting structural engineer, and so on have their competencies related to the structuring material and, inherent in this, coordination competence. When dialogues between the disciplines concerning coordination do not result in agreement, the disciplines responsible for the structuring material decide the interface issues. Based on these premises, this paper develops a self-organized, expert-driven, interdisciplinary decision-making system.

Keywords: collaboration, complexity, design, engineering, materiality

Procedia PDF Downloads 196
1882 Enhancing Disaster Response Capabilities in Asia-Pacific: An Explorative Study Applied to Decision Support Tools for Logistics Network Design

Authors: Giuseppe Timperio, Robert de Souza

Abstract:

Logistics operations in the context of disaster response are characterized by a high degree of complexity due to the combined effect of the large number of stakeholders involved, time pressure, uncertainties at various levels, massive deployment of goods and personnel, and the gigantic financial flows to be managed. They also require several autonomous parties, such as government agencies, militaries, NGOs, UN agencies, and the private sector, to name a few, to take a highly collaborative approach, especially in the critical phase of the immediate response. This is particularly true in the context of L3 emergencies, the most severe, large-scale humanitarian crises. Decision-making processes in disaster management are thus extremely difficult due to the presence of multiple decision-makers and the complexity of the tasks being tackled. Hence, in this paper, we look at applying ICT-based solutions to enable speedy and effective decision-making in the golden window of humanitarian operations. A high-level view of ICT-based solutions for logistics operations in humanitarian response in Southeast Asia is presented, and their viability is explored in a real-life case on logistics network design.

Keywords: decision support, disaster preparedness, humanitarian logistics, network design

Procedia PDF Downloads 147
1881 A Mixed Integer Programming Model for Optimizing the Layout of an Emergency Department

Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee

Abstract:

In recent years, demand for healthcare services has increased dramatically. As the demand for healthcare services increases, so does the necessity of constructing new healthcare buildings and redesigning and renovating existing ones. Increasing demand necessitates the use of optimization techniques to improve the overall service efficiency in healthcare settings. However, the high complexity of care processes remains the major challenge to accomplishing this goal. This study proposes a method based on process mining results to address the high complexity of care processes and to find the optimal layout of the various medical centers in an emergency department. The ProM framework is used to discover clinical pathway patterns and relationships between activities. The sequence clustering plug-in is used to remove infrequent events and to derive the process model in the form of a Markov chain. The process mining results served as the input for the next phase, which consists of the development of the optimization model. Comparison of the current ED design with the one obtained from the proposed method indicated that a carefully designed layout can significantly decrease the distances that patients must travel.
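
A small sketch of the process-mining output that feeds the layout model: a first-order Markov transition structure estimated from patient pathway traces. The activity names and traces are illustrative; in the study they come from ProM's sequence clustering of the ED event log.

```python
from collections import Counter, defaultdict

traces = [
    ["triage", "registration", "examination", "x-ray", "examination", "discharge"],
    ["triage", "registration", "examination", "lab", "examination", "discharge"],
    ["triage", "registration", "examination", "discharge"],
]

counts = defaultdict(Counter)
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1                      # count observed activity-to-activity transitions

transition_probs = {
    a: {b: c / sum(nexts.values()) for b, c in nexts.items()}
    for a, nexts in counts.items()
}
print(transition_probs["examination"])         # relative patient flows out of 'examination'
```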

Keywords: mixed integer programming, facility layout problem, process mining, healthcare operations management

Procedia PDF Downloads 309
1880 The Effects of Three Levels of Contextual Interference among Adult Athletes

Authors: Abdulaziz Almustafa

Abstract:

Considering the critical role permanence has in predictions related to the contextual interference effect in laboratory and field research, this study sought to determine whether the paradigm of the effect depends on the complexity of the skill during the acquisition and transfer phases. The purpose of the present study was to investigate the effects of contextual interference (CI) by extending previous laboratory and field research with adult athletes through the acquisition and transfer phases. Male athletes (n = 60), aged 18-22 years, were chosen randomly from Eastern Province clubs and assigned to complete blocked, random, or serial practice. Repeated-measures multivariate analysis of variance (MANOVA) indicated that the results did not support the notion of CI: there were no significant differences between the blocked, serial, and random practice groups in the acquisition phase, and no major differences between the practice groups during the transfer phase. Apparently, due to the task complexity, participants were probably confused and unable to use the advantages of contextual interference. This is another result that contradicts contextual interference effects in acquisition and transfer phases in sport settings. One major factor that can influence the effect of contextual interference is task characteristics, such as the level of difficulty of the sport-related skill.

Keywords: contextual interference, acquisition, transfer, task difficulty

Procedia PDF Downloads 431
1879 Characterising Stable Model by Extended Labelled Dependency Graph

Authors: Asraful Islam

Abstract:

The extended dependency graph (EDG) is a state-of-the-art isomorphic graph representation of normal logic programs (NLPs) that can characterize the consistency of NLPs by graph analysis. To construct the vertices and arcs of an EDG, additional renaming atoms and rules beyond those the given program provides are used, resulting in higher space complexity compared to the corresponding traditional dependency graph (TDG). In this article, we propose an extended labelled dependency graph (ELDG) to represent an NLP that shares an equal number of nodes and arcs with the TDG, and we prove that it is isomorphic to the domain program. The numbers of nodes and arcs used in the underlying dependency graphs are formulated to compare the space complexity; the results show that the ELDG uses less memory to store nodes, arcs, and cycles than the EDG. To exhibit the desirability of the ELDG, firstly, the stable models of the kernel form of an NLP are characterized by the admissible coloring of the ELDG; secondly, a relation is established between the stable models of a kernel program and the handles of the minimal odd cycles appearing in the corresponding ELDG; thirdly, to the best of our knowledge, for the first time an inverse transformation from a dependency graph to the represented NLP with respect to the ELDG has been defined, which enables analytical results to be transferred from the graph to the program straightforwardly.
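
A minimal sketch of building a labelled dependency graph for a normal logic program: one node per atom and one labelled arc per body literal ('+' for a positive literal, '-' for a negated one). The rule encoding and the example program are illustrative; the paper's ELDG fixes the labelling so that no renaming atoms or extra rules are needed.

```python
from collections import defaultdict

# p :- not q.    q :- not p.    r :- p.
program = [
    ("p", [], ["q"]),   # (head, positive body atoms, negated body atoms)
    ("q", [], ["p"]),
    ("r", ["p"], []),
]

arcs = defaultdict(set)
for head, pos, neg in program:
    for atom in pos:
        arcs[atom].add((head, "+"))
    for atom in neg:
        arcs[atom].add((head, "-"))

# The even-length negative cycle between p and q corresponds to this program's
# two stable models, {p, r} and {q}; cycle handles are what the graph analysis inspects.
for src, targets in sorted(arcs.items()):
    for dst, label in sorted(targets):
        print(f"{src} --{label}--> {dst}")
```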

Keywords: normal logic program, graph isomorphism, extended labelled dependency graph, inverse graph transformation, graph colouring

Procedia PDF Downloads 180
1878 Improving Student Programming Skills in Introductory Computer and Data Science Courses Using Generative AI

Authors: Genady Grabarnik, Serge Yaskolko

Abstract:

Generative Artificial Intelligence (AI) has significantly expanded its applicability with the incorporation of Large Language Models (LLMs) and has become a technology that promises to automate areas that were previously very difficult to automate. This paper describes the introduction of generative AI into introductory computer and data science courses and analyzes the effect of this introduction. Generative AI is incorporated into the educational process in two ways. For the instructors, we create prompt templates for the generation of tasks and for grading the students' work, including feedback on the submitted assignments. For the students, we introduce basic prompt engineering, which is then used to generate test cases based on problem descriptions, to generate code snippets for single-block programming tasks, and to partition average-complexity programs into such blocks. The classes are run using Large Language Models, and feedback from instructors and students as well as course outcomes are collected. The analysis shows a statistically significant positive effect and a preference among both groups of stakeholders.
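
A small sketch of the kind of prompt template used on the instructor side to generate test cases from a problem description. The wording of the template and the sample problem are illustrative assumptions; the resulting string would be sent to an LLM through whichever provider API the course uses.

```python
TEST_CASE_PROMPT = """You are helping grade an introductory programming assignment.
Problem description:
{problem}

Produce {n} test cases as a Python list of (input, expected_output) tuples.
Cover typical inputs, boundary values, and one invalid input.
Return only the Python literal, no explanation."""

def build_test_case_prompt(problem_text, n=5):
    """Fill the template; the returned string is what gets sent to the model."""
    return TEST_CASE_PROMPT.format(problem=problem_text.strip(), n=n)

print(build_test_case_prompt("Write a function median(xs) returning the median of a list of numbers."))
```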

Keywords: introductory computer and data science education, generative AI, large language models, application of LLMs to computer and data science education

Procedia PDF Downloads 31
1877 Computation of Natural Logarithm Using Abstract Chemical Reaction Networks

Authors: Iuliia Zarubiieva, Joyun Tseng, Vishwesh Kulkarni

Abstract:

Recent research has focused on nucleic acids as a substrate for designing biomolecular circuits for in situ monitoring and control. A common approach is to express such circuits as a set of idealised abstract chemical reaction networks (ACRNs). Here, we present new results on how abstract chemical reactions, viz. catalysis, annihilation, and degradation, can be used to implement a circuit that accurately computes the logarithm function using the method of the Arithmetic-Geometric Mean (AGM), which has not previously been used in conjunction with ACRNs.
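
A minimal floating-point sketch of the arithmetic-geometric-mean (AGM) method for ln(x) that the ACRN circuit realises with catalysis, annihilation, and degradation reactions. It only illustrates the numerical scheme: for large s, ln(s) is approximately pi / (2 * AGM(1, 4/s)), so x is first scaled by 2**m to make s large; the scaling threshold below is an assumption chosen for double precision.

```python
import math

LN2 = 0.6931471805599453  # ln(2), precomputed constant

def agm(a, b, iterations=40):
    """Arithmetic-geometric mean of a and b."""
    for _ in range(iterations):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def ln_agm(x):
    """Natural logarithm of x > 0 via the AGM identity (double-precision sketch)."""
    if x <= 0:
        raise ValueError("x must be positive")
    s, m = float(x), 0
    while s < 2.0 ** 32:          # scale up so the asymptotic identity is accurate
        s *= 2.0
        m += 1
    return math.pi / (2.0 * agm(1.0, 4.0 / s)) - m * LN2

print(ln_agm(10.0), math.log(10.0))   # should agree to near double precision
```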

Keywords: chemical reaction networks, ratio computation, stability, robustness

Procedia PDF Downloads 134
1876 An Improved Data-Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multiple-Input Multiple-Output

Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin

Abstract:

With the increasing number of wireless devices and high-bandwidth operations, wireless networks and communications are becoming overcrowded. To cope with this congestion, massive MIMO is designed to work with hundreds of low-cost serving antennas at a time while improving spectral efficiency. TDD is used to enable beamforming, a major part of massive MIMO, by transmitting and receiving pilot sequences. All of these benefits are only possible if the channel state information, i.e., the channel estimate, is obtained properly. The common methods used so far to estimate the channel matrix are LS, MMSE, and a linear version of MMSE proposed in many research works. We optimize these methods using a genetic algorithm to minimize the mean squared error and to find the best channel matrix among the existing algorithms with less computational complexity. Our simulation results show that the use of the GA works well on the existing algorithms in a Rayleigh slow-fading channel with additive white Gaussian noise. We found that the GA-optimized LS outperforms the existing algorithms, providing near-optimal results within a few iterations in terms of MSE versus SNR and computational complexity.

Keywords: channel estimation, LMMSE, LS, MIMO, MMSE

Procedia PDF Downloads 163
1875 Geometric Properties of Some q-Bessel Functions

Authors: İbrahim Aktaş, Árpád Baricz

Abstract:

In this paper, the radii of starlikeness of the Jackson and Hahn-Exton q-Bessel functions are considered, and for each of them three different normalizations are applied. By applying Euler-Rayleigh inequalities for the first positive zeros of these functions, tight lower and upper bounds for the radii of starlikeness of these functions are obtained. The Laguerre-Pólya class of real entire functions plays an important role in this study. In particular, we obtain some new bounds for the first positive zero of the derivative of the classical Bessel function of the first kind.

Keywords: Bessel function, Lommel function, radius of starlikeness and convexity, Struve function

Procedia PDF Downloads 249
1874 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models

Authors: Benbiao Song, Yan Gao, Zhuo Liu

Abstract:

Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models strongly influences the prediction of reservoir performance, yet few studies have quantified the factors that affect geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different levels of geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based, and MPFS algorithms, together with different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling conditions and parameter association. In total, 5760 simulations were run to quantify the relative contribution of each factor to the simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It is found that data density, geological trend, and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach 90% when the channel sand width is at least 1.5 times the well spacing, under any condition, for the SIS and MPFS methods. When well density is low, the contribution of the geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may make only a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model, whereas when geobodies are complex and data are insufficient, it is better to construct a set of robust geological trends than to rely on a variogram function alone. For the object-based method, modeling accuracy does not increase with data density as clearly as for the SIS method, but the models keep a reasonable appearance when data density is low. MPFS methods show a similar trend to the SIS method, but the use of a proper geological trend together with a rational variogram may yield better modeling accuracy than the MPFS method. This implies that the geological modeling strategy for a real reservoir case needs to be optimized by evaluating the dataset, the geological complexity, the geological constraint information, and the modeling objective.

Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram

Procedia PDF Downloads 236
1873 Deep Routing Strategy: Deep Learning Based Intelligent Routing in Software Defined Internet of Things

Authors: Zabeehullah, Fahim Arif, Yawar Abbas

Abstract:

Software Defined Networking (SDN) is a next-generation networking model that simplifies traditional network complexities and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional network routing strategies that work on the basis of a maximum or minimum metric value. However, IoT network heterogeneity, dynamic traffic flows, and complexity demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence, and efficient utilization of resources. To some extent, SDN, due to its flexibility and centralized control, has managed the complexity and heterogeneity of IoT, but Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieves high accuracy and a low packet loss rate during path selection and outperforms the benchmark routing algorithm (OSPF). Moreover, the proposed model provided encouraging results under highly dynamic traffic flows.

Keywords: SDN, IoT, DL, ML, DRS

Procedia PDF Downloads 84