Search results for: multi-party computation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 535

145 Window Analysis and Malmquist Index for Assessing Efficiency and Productivity Growth in a Pharmaceutical Industry

Authors: Abbas Al-Refaie, Ruba Najdawi, Nour Bata, Mohammad D. AL-Tahat

Abstract:

The pharmaceutical industry is an important component of health care systems throughout the world. Measurement of a production unit's performance is crucial in determining whether it has achieved its objectives or not. This paper applies data envelopment analysis (DEA) window analysis to assess the efficiencies of two packaging lines, Allfill (new) and DP6, in the Penicillin plant of a Jordanian medical company in 2010. The CCR and BCC models are used to estimate the technical efficiency, pure technical efficiency, and scale efficiency. Further, the Malmquist productivity index is computed and then employed to assess productivity growth relative to a reference technology. Two primary issues are addressed in the computation of Malmquist indices of productivity growth. The first is the measurement of productivity change over the period, while the second is to decompose changes in productivity into what are generally referred to as a ‘catching-up’ effect (efficiency change) and a ‘frontier shift’ effect (technological change). Results showed that the DP6 line outperforms the Allfill line in technical and pure technical efficiency, whereas the Allfill line outperforms the DP6 line in scale efficiency. The obtained efficiency values can guide production managers in taking effective decisions related to operation, management, and plant size. Moreover, both machines exhibit clear fluctuations in technological change, which is the main reason for the positive total factor productivity change; that is, installing a new Allfill production line can be of great benefit in increasing productivity. In conclusion, DEA window analysis combined with the Malmquist index provides supportive measures for assessing efficiency and productivity in the pharmaceutical industry.
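The ‘catching-up’/‘frontier shift’ decomposition described above can be illustrated with a short sketch. This is not the paper's code; the efficiency scores below are hypothetical placeholders for DEA scores of one packaging line evaluated against the frontiers of two adjacent periods.

```python
import math

def malmquist(e_t_t, e_t_t1, e_t1_t, e_t1_t1):
    """Malmquist index decomposition.
    e_a_b: efficiency of period-b data measured against the period-a frontier."""
    efficiency_change = e_t1_t1 / e_t_t  # 'catching-up' effect
    frontier_shift = math.sqrt((e_t_t1 / e_t1_t1) * (e_t_t / e_t1_t))  # technological change
    return efficiency_change * frontier_shift, efficiency_change, frontier_shift

# Hypothetical efficiency scores for one line over two periods
mi, ec, tc = malmquist(e_t_t=0.82, e_t_t1=0.95, e_t1_t=0.78, e_t1_t1=0.90)
# mi > 1 indicates total factor productivity growth
```

By construction the total factor productivity change factors exactly into the two effects, which is the decomposition the abstract refers to.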

Keywords: window analysis, malmquist index, efficiency, productivity

Procedia PDF Downloads 587
144 Microwave Single Photon Source Using Landau-Zener Transitions

Authors: Siddhi Khaire, Samarth Hawaldar, Baladitya Suri

Abstract:

As efforts towards quantum communication advance, the need for single photon sources becomes pressing. Due to the extremely low energy of a single microwave photon, efforts to build single photon sources and detectors in the microwave range are relatively recent. We plan to use a Cooper Pair Box (CPB) that has a ‘sweet spot’ where the two energy levels have minimal separation. Moreover, these qubits have a fairly large anharmonicity, making them close to ideal two-level systems. If the external gate voltage of these qubits is varied rapidly while passing through the sweet spot, the qubit can be excited almost deterministically due to the Landau-Zener effect. The rapid change of the gate control voltage through the sweet spot induces a non-adiabatic population transfer from the ground to the excited state. The qubit eventually decays into the emission line, emitting a single photon. The advantage of this setup is that the qubit can be excited without any coherent microwave excitation, thereby effectively increasing the usable source efficiency due to the absence of control-pulse microwave photons. Since the probability of a Landau-Zener transition can be made close to unity by appropriate design of parameters, this source behaves as an on-demand source of single microwave photons. The large anharmonicity of the CPB also ensures that only one excited state is involved in the transition, making multiple-photon output highly improbable. Such a system has so far not been implemented and would find many applications in the areas of quantum optics, quantum computation, and quantum communication.
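The near-deterministic excitation follows from the standard Landau-Zener formula, P = exp(−πΔ²/(2ħv)), for the probability of a diabatic (non-adiabatic) passage through an avoided crossing with minimal gap Δ swept at energy rate v. The sketch below is illustrative only; the gap and sweep rates are made-up numbers, not parameters from the abstract.

```python
import math

HBAR = 1.0545718e-34  # reduced Planck constant, J*s

def lz_excitation_probability(gap, sweep_rate):
    """Probability of a diabatic transition when the level splitting
    is swept through its minimum 'gap' at energy rate 'sweep_rate'."""
    return math.exp(-math.pi * gap**2 / (2 * HBAR * sweep_rate))

gap = 1e-26                 # minimal splitting at the sweet spot, J (illustrative)
fast, slow = 1e-15, 1e-19   # gate-induced energy sweep rates, J/s (illustrative)
p_fast = lz_excitation_probability(gap, fast)  # close to unity: near-deterministic excitation
p_slow = lz_excitation_probability(gap, slow)  # adiabatic limit: qubit stays in the ground state
```

The faster the gate sweep relative to the gap, the closer the excitation probability gets to one, which is what makes the source on-demand.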

Keywords: quantum computing, quantum communication, quantum optics, superconducting qubits, flux qubit, charge qubit, microwave single photon source, quantum information processing

Procedia PDF Downloads 64
143 Simulating Human Behavior in (Un)Built Environments: Using an Actor Profiling Method

Authors: Hadas Sopher, Davide Schaumann, Yehuda E. Kalay

Abstract:

This paper addresses the shortcomings of architectural computation tools in representing human behavior in built environments, prior to construction and occupancy of those environments. Evaluating whether a design fits the needs of its future users is currently done solely post construction, or is based on the knowledge and intuition of the designer. This issue is of high importance when designing complex buildings such as hospitals, where the quality of treatment as well as patient and staff satisfaction are of major concern. Existing computational pre-occupancy human behavior evaluation methods are geared mainly to test ergonomic issues, such as wheelchair accessibility, emergency egress, etc. As such, they rely on Agent Based Modeling (ABM) techniques, which emphasize the individual user. Yet we know that most human activities are social, and involve a number of actors working together, which ABM methods cannot handle. Therefore, we present an event-based model that manages the interaction between multiple Actors, Spaces, and Activities, to describe dynamically how people use spaces. This approach requires expanding the computational representation of Actors beyond their physical description, to include psychological, social, cultural, and other parameters. The model presented in this paper includes cognitive abilities and rules that describe the response of actors to their physical and social surroundings, based on the actors’ internal status. The model has been applied in a simulation of hospital wards, and showed adaptability to a wide variety of situated behaviors and interactions.

Keywords: agent based modeling, architectural design evaluation, event modeling, human behavior simulation, spatial cognition

Procedia PDF Downloads 232
142 Suitability of Direct Strength Method-Based Approach for Web Crippling Strength of Flange Fastened Cold-Formed Steel Channel Beams Subjected to Interior Two-Flange Loading: A Comprehensive Investigation

Authors: Hari Krishnan K. P., Anil Kumar M. V.

Abstract:

The Direct Strength Method (DSM) is used for the computation of the design strength of members whose behavior is governed by any form of buckling. DSM-based semi-empirical equations have been successfully used for cold-formed steel (CFS) members subjected to compression, bending, and shear. The DSM equations for the strength of a CFS member are based on parameters accounting for strength [yield load (Py), yield moment (My), and shear yield load (Vy) for compression, bending, and shear, respectively] and stability [buckling load (Pcr), buckling moment (Mcr), and shear buckling load (Vcr) for compression, bending, and shear, respectively]. The buckling of columns and beams may be governed by local, distortional, or global buckling modes and their interaction. Recently, DSM-based methods have been extended to the web crippling strength of CFS beams as well. Numerous DSM-based expressions have been reported in the literature, which are functions of the loading case, cross-section shape, and boundary conditions. Unlike members subjected to axial load, bending, or shear, no unified expression for the design web crippling strength, irrespective of the loading case, cross-section shape, and end boundary conditions, is available yet. This study, based on nonlinear finite element analysis results, shows that the slenderness of the web, which may be represented either by the web height-to-thickness ratio (h/t) or by Pcr, has a negligible contribution to web crippling strength. Hence, the results in this paper question the suitability of the DSM-based approach for the web crippling strength of CFS beams.
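For context, the DSM expressions mentioned above share a common form combining a strength parameter with a stability parameter. The sketch below shows the AISI S100 local-buckling column curve as one representative example; it is not the web crippling expression this paper investigates, only an illustration of the DSM structure.

```python
def dsm_local_column_strength(Py, Pcrl):
    """DSM local-buckling column strength (AISI S100 form).
    lam = sqrt(Py/Pcrl); no strength reduction for lam <= 0.776."""
    lam = (Py / Pcrl) ** 0.5
    if lam <= 0.776:
        return Py
    r = (Pcrl / Py) ** 0.4
    return (1.0 - 0.15 * r) * r * Py  # semi-empirical post-buckling reduction

stocky = dsm_local_column_strength(Py=100.0, Pcrl=400.0)   # lam = 0.5 -> full yield load
slender = dsm_local_column_strength(Py=100.0, Pcrl=25.0)   # lam = 2.0 -> reduced strength
```

Note how the curve depends on the stability parameter Pcrl; the paper's finding is that for web crippling the analogous slenderness measure contributes little, which undermines this functional form.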

Keywords: cold-formed steel, beams, DSM-based procedure, interior two flanged loading, web crippling

Procedia PDF Downloads 68
141 A Coupled Stiffened Skin-Rib Fully Gradient Based Optimization Approach for a Wing Box Made of Blended Composite Materials

Authors: F. Farzan Nasab, H. J. M. Geijselaers, I. Baran, A. De Boer

Abstract:

A method is introduced for the coupled skin-rib optimization of a wing box, where mass minimization is the objective and local buckling is the constraint. The structure is made of composite materials, where continuity of plies in multiple adjacent panels (blending) has to be satisfied. Blending guarantees the manufacturability of the structure; however, it is a highly challenging constraint to treat and has been under debate in recent research in the area. To fulfill design guidelines with respect to symmetry, balance, contiguity, disorientation, and the percentage rule of the layup, a reference for the stacking sequences (stacking sequence table, or SST) is generated first. Then, an innovative fully gradient-based optimization approach relative to a specific SST is introduced to obtain the optimum thickness distribution over the whole structure while blending is fulfilled. The proposed optimization approach aims to turn the discrete optimization problem associated with the integer number of plies into a continuous one. As a result of wing box deflection, a rib is subjected to load values which vary nonlinearly with the amount of deflection. The bending stiffness of a skin affects the wing box deflection and thus affects the load applied to a rib. This indicates the necessity of a coupled skin-rib optimization approach for a more realistic optimized design. The proposed method is examined with the optimization of the layup of a composite stiffened skin and rib of a wing torsion box subjected to in-plane normal and shear loads. Results show that the method can successfully prescribe a valid design at a significantly low computational cost.

Keywords: blending, buckling optimization, composite panels, wing torsion box

Procedia PDF Downloads 385
140 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model (DPM) algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid; for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
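The frequency-domain convolution mentioned above relies on the convolution theorem: pad both signals to the full output length, multiply their spectra, and transform back. A minimal NumPy sketch (not the authors' implementation) demonstrating the equivalence on a 1-D signal:

```python
import numpy as np

def conv_freq(signal, kernel):
    """Linear convolution via the frequency domain (convolution theorem)."""
    n = len(signal) + len(kernel) - 1          # full linear-convolution length
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

sig = np.array([1.0, 2.0, 3.0, 4.0])
ker = np.array([0.25, 0.5, 0.25])
direct = np.convolve(sig, ker)   # O(N*M) spatial-domain reference
fast = conv_freq(sig, ker)       # same result to numerical precision
```

For the large multi-channel filter banks a DPM evaluates at every pyramid level, the O(N log N) transform amortizes far better than direct convolution.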

Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time

Procedia PDF Downloads 255
139 An Extensive Review of Drought Indices

Authors: Shamsulhaq Amin

Abstract:

Drought can arise from several hydrometeorological phenomena that result in insufficient precipitation, soil moisture, and surface and groundwater flow, leading to conditions that are considerably drier than the usual water content or availability. Drought is often assessed using indices that are associated with meteorological, agricultural, and hydrological phenomena. In order to effectively handle drought disasters, it is essential to accurately determine the kind, intensity, and extent of the drought through drought characterization. This information is critical for managing the drought before, during, and after the rehabilitation process. Over a hundred drought indices have been proposed in the literature to evaluate drought disasters, encompassing a range of factors and variables. Some utilise solely hydrometeorological drivers, others employ remote sensing technology, and some combine both. Comprehending the entire notion of drought and taking into account drought indices along with their calculation processes are crucial for researchers in this discipline. Examining the many drought metrics used across different studies requires considerable time and concentration; hence, a thorough examination of the approaches used in drought indices is needed to identify the most straightforward approach and to avoid discrepancies between scientific studies. For practical application in the real world, categorizing indices by their use in meteorological, agricultural, and hydrological contexts can help researchers maximize their efficiency. Users can explore different indices at the same time, compare their convenience of use, and evaluate the benefits and drawbacks of each. Moreover, certain indices exhibit interdependence, which enhances comprehension of their connections and assists in making informed decisions about their suitability in various scenarios. This study provides a comprehensive assessment of various drought indices, analysing their types and computation methodologies in a detailed and systematic manner.
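As one concrete example of the computation methodologies such a review covers, the widely used Standardized Precipitation Index (SPI) fits a gamma distribution to accumulated precipitation and maps the fitted cumulative probabilities onto a standard normal scale. A minimal sketch assuming SciPy is available; the precipitation totals are illustrative, not from any study.

```python
import numpy as np
from scipy import stats

def spi(precip_totals):
    """Standardized Precipitation Index: gamma fit -> probability -> z-score.
    Negative values indicate drier-than-normal conditions."""
    x = np.asarray(precip_totals, dtype=float)
    shape, loc, scale = stats.gamma.fit(x, floc=0)        # location fixed at zero
    prob = stats.gamma.cdf(x, shape, loc=loc, scale=scale)
    return stats.norm.ppf(prob)                           # standard normal deviates

# Illustrative monthly precipitation totals (mm)
totals = [30, 45, 60, 55, 40, 70, 65, 50, 35, 80, 60, 55]
index = spi(totals)   # lowest total maps to the most negative SPI
```

Because both the gamma CDF and the normal quantile function are monotone, the driest month always receives the lowest index value.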

Keywords: drought classification, drought severity, drought indices, agricultural, hydrological

Procedia PDF Downloads 16
138 ChaQra: A Cellular Unit of the Indian Quantum Network

Authors: Shashank Gupta, Iteash Agarwal, Vijayalaxmi Mogiligidda, Rajesh Kumar Krishnan, Sruthi Chennuri, Deepika Aggarwal, Anwesha Hoodati, Sheroy Cooper, Ranjan, Mohammad Bilal Sheik, Bhavya K. M., Manasa Hegde, M. Naveen Krishna, Amit Kumar Chauhan, Mallikarjun Korrapati, Sumit Singh, J. B. Singh, Sunil Sud, Sunil Gupta, Sidhartha Pant, Sankar, Neha Agrawal, Ashish Ranjan, Piyush Mohapatra, Roopak T., Arsh Ahmad, Nanjunda M., Dilip Singh

Abstract:

Major research interests in quantum key distribution (QKD) are primarily focused on increasing 1) point-to-point transmission distance (1000 km), 2) secure key rate (Mbps), and 3) security of the quantum layer (device independence). It is great to push the boundaries on these fronts, but these isolated approaches are neither scalable nor cost-effective due to the requirements of specialised hardware and different infrastructure. Current and future QKD networks require addressing a different set of challenges apart from distance, key rate, and quantum security. In this regard, we present ChaQra, a sub-quantum network with the following core features: 1) crypto agility (integration into already deployed telecommunication fibres), 2) software-defined networking (the SDN paradigm for routing between different nodes), 3) reliability (addressing denial of service with hybrid quantum-safe cryptography), 4) upgradability (module upgrades based on scientific and technological advancements), and 5) beyond QKD (using the QKD network for distributed computing, multi-party computation, etc.). Our results demonstrate a clear path to create and accelerate a quantum-secure Indian subcontinent under the national quantum mission.

Keywords: quantum network, quantum key distribution, quantum security, quantum information

Procedia PDF Downloads 20
137 Biomechanical Performance of the Synovial Capsule of the Glenohumeral Joint with a BANKART Lesion through Finite Element Analysis

Authors: Duvert A. Puentes T., Javier A. Maldonado E., Ivan Quintero, Diego F. Villegas

Abstract:

Mechanical computation is a great tool to study the performance of complex models, an example of which is the structure of the human body. This paper took advantage of different types of software to make a 3D model of the glenohumeral joint and apply finite element analysis. The main objective was to study the change in the biomechanical properties of the joint when it presents an injury, specifically a Bankart lesion, which consists of the detachment of the anteroinferior labrum from the glenoid. The stress and strain distribution of the soft tissues was the focus of this study. First, a 3D model of a joint without any pathology was made as a control sample, using segmentation software for the bones with the support of medical imagery, and a cadaveric model to represent the soft tissue. The joint was built to simulate a compression and external rotation test, using CAD to prepare the model in the adequate position. When the healthy model was finished, it was submitted to finite element analysis, and the results were validated with experimental model data. A mesh sensitivity analysis was then performed on the validated model to obtain the best mesh size. Finally, the geometry of the 3D model was changed to imitate a Bankart lesion: the contact zone of the glenoid with the labrum was slightly separated, simulating a tissue detachment. With this new geometry, the finite element analysis was applied again, and the results were compared with the control sample created initially. With the data gathered, this study can be used to improve understanding of labrum tears. Nevertheless, it is important to remember that computational analyses are approximations and that the initial data was taken from an in vitro assay.

Keywords: biomechanics, computational model, finite elements, glenohumeral joint, bankart lesion, labrum

Procedia PDF Downloads 133
136 Bayesian System and Copula for Event Detection and Summarization of Soccer Videos

Authors: Dhanuja S. Patil, Sanjay B. Waykar

Abstract:

Event detection is one of the key components for many application domains of video data systems. Recently, it has gained considerable interest from practitioners and academics in different areas. While video event detection has been the subject of extensive study, considerably fewer existing methods have considered multi-modal data and efficiency-related issues. During soccer matches, various doubtful situations arise that cannot easily be judged by the referee committee. A system that objectively checks image sequences would prevent incorrect interpretations due to errors or the high velocity of events. Bayesian networks provide a framework for dealing with this uncertainty using a simple graphical structure together with probability calculus. We propose an efficient framework for the analysis and summarization of soccer videos using object-based features. The proposed work uses the t-cherry junction tree, a very recent development in probabilistic graphical models, to create a compact representation and a good approximation of an otherwise intractable model. This approach has several advantages: first, the t-cherry junction tree gives the best approximation within the class of junction trees; second, the construction of a t-cherry junction tree can be largely parallelized; and finally, inference can be performed using distributed computation. Experimental results demonstrate the effectiveness, adequacy, and robustness of the proposed work over a comprehensive data set comprising many soccer videos captured at different places.

Keywords: summarization, detection, Bayesian network, t-cherry tree

Procedia PDF Downloads 297
135 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem

Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães

Abstract:

This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm consists of randomly expanding a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned. The algorithm ensures the planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor of the new node is connected to it if and only if the extension of the path between the root node and that neighbor, through this new connection, is less than the current extension of the path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan less extensive routes using a smaller number of iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean Theorem. Its advantage is that the obtained structure allows the curve length to be computed exactly, without the need for quadrature techniques to resolve the integrals.
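The reconnection rule described above (connect a neighbor through the new node only when this shortens its path from the root) is the classic RRT* rewiring step. A minimal, hypothetical sketch with 2-D nodes, not the authors' implementation:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rewire(new_node, neighbors, cost, parent):
    """Reconnect each neighbor via new_node iff that shortens its root path."""
    for nb in neighbors:
        via_new = cost[new_node] + dist(new_node, nb)
        if via_new < cost[nb]:      # shorter path found through the new node
            parent[nb] = new_node
            cost[nb] = via_new

root, a, new = (0.0, 0.0), (2.0, 0.0), (1.0, 0.0)
cost = {root: 0.0, a: 5.0, new: 1.0}   # node a currently reached via a long detour
parent = {a: root, new: root}
rewire(new, [a], cost, parent)         # a is rewired: 1.0 + 1.0 < 5.0
```

Repeating this step as nodes are inserted is what drives the tree's path costs toward the optimum as iterations tend to infinity.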

Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart

Procedia PDF Downloads 148
134 Computation of Residual Stresses in Human Face Due to Growth

Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan

Abstract:

Growth and remodeling of biological structures have gained much attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields, such as the design of prosthetics and optimized surgical operations. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically growth and remodeling are among their main sources. Extracting body organs from medical imaging does not produce any information regarding the residual stresses existing in those organs. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses can cause erroneous results in numerical simulations, so accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analysis. In this paper, we have implemented a computational framework based on fixed-point iteration to determine the residual stresses due to growth. Using nonlinear continuum mechanics and the concept of a fictitious configuration, we find the unknown stress-free reference configuration which is necessary for mechanical analysis. To illustrate the method, we apply it to a finite element model of a healthy human face whose geometry has been extracted from medical images. We have computed the distribution of residual stress in facial tissues, which can counteract the effect of gravity and keep the tissues firm. Tissue wrinkles caused by aging could be a consequence of decreasing residual stress that no longer counteracts gravity. Considering these stresses has important applications in maxillofacial surgery, helping surgeons predict the changes after surgical operations and their consequences.
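The fixed-point idea can be illustrated on a toy 1-D problem: we seek an unknown reference configuration X whose deformed (loaded) image matches the observed geometry, updating the guess by the geometric mismatch at each iteration. This is only a schematic of the iteration's structure, not the paper's finite element implementation; the `sag` map below is an invented stand-in for a gravity-loading solve.

```python
def recover_reference(x_observed, deform, tol=1e-10, max_iter=100):
    """Fixed-point iteration for a stress-free configuration X with
    deform(X) == x_observed; starts from the imaged (loaded) geometry."""
    X = x_observed
    for _ in range(max_iter):
        residual = x_observed - deform(X)   # mismatch in the deformed state
        if abs(residual) < tol:
            break
        X += residual                       # shift the guess by the mismatch
    return X

# Toy 'gravity' map: loading stretches the coordinate and adds an offset
sag = lambda X: 1.1 * X + 0.5
X_ref = recover_reference(2.7, sag)         # converges to the preimage of 2.7
```

Once the fictitious stress-free configuration is recovered, loading it reproduces the imaged geometry together with the residual stress field.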

Keywords: growth, soft tissue, residual stress, finite element method

Procedia PDF Downloads 327
133 Application of the Best Technique for Estimating the Rest-Activity Rhythm Period in Shift Workers

Authors: Rakesh Kumar Soni

Abstract:

Under free-living conditions, human biological clocks show a periodicity of 24 hours for numerous physiological, behavioral, and biochemical variables. However, this period is not the intrinsic period; rather, it merely exhibits synchronization with the solar clock. It is, therefore, most important to investigate the characteristics of the human circadian clock, especially in shift workers, who routinely confront contrasting social clocks. The aim of the present study was to investigate the rest-activity rhythm and to identify the best technique for the computation of periods in this rhythm in subjects randomly selected from different groups of shift workers. The rest-activity rhythm was studied in forty-eight shift workers from three different organizations, namely a Newspaper Printing Press (NPP), the Chhattisgarh State Electricity Board (CSEB), and Raipur Alloys (RA). Shift workers of NPP (N = 20) were working on a permanent night shift schedule (NS; 20:00-04:00). In CSEB (N = 14) and RA (N = 14), however, shift workers were working in a 3-shift system comprising rotations from night (NS; 22:00-06:00) to afternoon (AS; 14:00-22:00) and to morning shift (MS; 06:00-14:00). Each subject wore an Actiwatch (AW64, Mini Mitter Co. Inc., USA) for 7 and/or 21 consecutive days, after providing written consent. A one-minute epoch length was chosen for the collection of wrist activity data. The period was determined using the Actiware sleep software (Periodogram), the Lomb-Scargle Periodogram (LSP), and spectral analysis software (Spectre). Other statistical techniques, such as ANOVA and Duncan’s multiple-range test, were also used wherever required. A statistically significant circadian rhythm in rest-activity, gauged by cosinor analysis, was documented in all shift workers, irrespective of shift schedule. Results indicate that the efficiency of a technique in determining the period (τ) depended upon the clipping limits of the τs. It appears that the Spectre technique is more reliable.
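The Lomb-Scargle periodogram mentioned above estimates the dominant period of an unevenly or evenly sampled series by scanning candidate frequencies. A minimal sketch on synthetic actigraphy-like data, assuming SciPy is available; the data are simulated, not the study's recordings.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic wrist-activity record: 7 days of 1-min epochs with a 24-h rhythm
t = np.arange(7 * 24 * 60) / 60.0                      # time in hours
rng = np.random.default_rng(0)
activity = 50 + 20 * np.cos(2 * np.pi * t / 24.0) + rng.normal(0, 5, t.size)

periods = np.linspace(20.0, 28.0, 401)                 # candidate periods, hours
ang_freqs = 2 * np.pi / periods                        # lombscargle expects angular frequency
power = lombscargle(t, activity - activity.mean(), ang_freqs)
tau = periods[np.argmax(power)]                        # estimated rest-activity period
```

The grid of candidate periods plays the role of the clipping limits discussed in the abstract: the estimate can only fall inside the scanned range.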

Keywords: biological clock, rest activity rhythm, spectre, periodogram

Procedia PDF Downloads 138
132 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models

Authors: Azadeh Jafari, Robert G. Owens

Abstract:

In this study, a geometrical multiscale approach, which couples the 2-D Navier-Stokes equations and constitutive equations with 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. We introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of our new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using coupling with the lumped parameter model. Comprehensive studies have been carried out on the sensitivity of the numerical scheme to the initial conditions, elasticity, and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluid problems whose application goes significantly beyond the one addressed in this work.

Keywords: geometrical multiscale models, haemorheology model, coupled 2-D Navier-Stokes 0-D lumped parameter modeling, computational fluid dynamics

Procedia PDF Downloads 338
131 Quantitative Analysis of Nutrient Inflow from River and Groundwater to Imazu Bay in Fukuoka, Japan

Authors: Keisuke Konishi, Yoshinari Hiroshiro, Kento Terashima, Atsushi Tsutsumi

Abstract:

Imazu Bay plays an important role for endangered species such as horseshoe crabs and black-faced spoonbills that stay in the bay for spawning or to pass the winter. However, this bay is semi-enclosed, with slow water exchange, which could lead to eutrophication under conditions of excess nutrient inflow. Therefore, quantification of nutrient inflow is of great importance. Analysis of nutrient inflow to bays generally considers inflow from rivers only, but inflow from groundwater should not be ignored if more accurate results are required. The main objective of this study is to estimate the amounts of nutrient inflow from river and groundwater to Imazu Bay by analyzing the water budget in the Zuibaiji River Basin and the loads of T-N, T-P, NO3-N, and NH4-N. The water budget computation in the basin is performed using a groundwater recharge model and a quasi three-dimensional two-phase groundwater flow model, and multiplying the measured nutrient concentrations by the computed discharge gives the total amount of nutrient inflow to the bay. In addition, in order to evaluate nutrient inflow to the bay, the result is compared with nutrient inflow from geologically similar river basins. The result shows that the discharge is 3.50×10⁷ m³/year from the river and 1.04×10⁷ m³/year from groundwater. The submarine groundwater discharge accounts for approximately 23% of the total discharge, which is large compared to the other river basins. It is also revealed that the total nutrient inflow is not particularly large. The sum of NO3-N and NH4-N loadings from groundwater is less than 10% of that from the river because of denitrification in groundwater. The Shin Seibu Sewage Treatment Plant, located below the observation points, discharges treated water at 15,400 m³/day and plans to increase this. However, the T-N and T-P concentrations in the treated water are 3.9 mg/L and 0.19 mg/L, so it does not contribute greatly to eutrophication.
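The roughly 23% share attributed to submarine groundwater discharge follows directly from the two computed discharges quoted in the abstract:

```python
river = 3.50e7        # computed river discharge to the bay, m^3/year
groundwater = 1.04e7  # computed submarine groundwater discharge, m^3/year

sgd_share = groundwater / (river + groundwater)
# 1.04e7 / 4.54e7 = 0.229..., i.e. the ~23% reported above
```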

Keywords: eutrophication, groundwater recharge model, nutrient inflow, quasi three-dimensional two-phase groundwater flow model, submarine groundwater discharge

Procedia PDF Downloads 434
130 An Unbiased Profiling of Immune Repertoire via Sequencing and Analyzing T-Cell Receptor Genes

Authors: Yi-Lin Chen, Sheng-Jou Hung, Tsunglin Liu

Abstract:

The adaptive immune system recognizes a wide range of antigens by expressing a large number of structurally distinct T cell and B cell receptor genes. The distinct receptor genes arise from complex rearrangements called V(D)J recombination and constitute the immune repertoire. A common method of profiling the immune repertoire is to amplify recombined receptor genes using multiple primers and high-throughput sequencing. This multiplex-PCR approach is efficient; however, the resulting repertoire can be distorted because of primer bias. To eliminate primer bias, 5’ RACE is an alternative amplification approach. However, the application of the RACE approach is limited by its low efficiency (i.e., the majority of the data are non-regular receptor sequences, e.g., containing intronic segments) and by the lack of a convenient analysis tool. We propose a computational tool that can correctly identify non-regular receptor sequences in RACE data by aligning receptor sequences against the whole gene instead of only the exon regions, as done in all other tools. Using our tool, the remaining regular data allow for an accurate profiling of the immune repertoire. In addition, the RACE approach is improved to yield a higher fraction of regular T-cell receptor sequences. Finally, we quantify the degree of primer bias of a multiplex-PCR approach by comparing it to the RACE approach. The results reveal significant differences in the frequencies of VJ combinations between the two approaches. Together, we provide a new experimental and computational pipeline for unbiased profiling of the immune repertoire. As immune repertoire profiling has many applications, e.g., tracing bacterial and viral infection, detection of T cell lymphoma and minimal residual disease, monitoring cancer immunotherapy, etc., our work should benefit scientists who are interested in these applications.

Keywords: immune repertoire, T-cell receptor, 5' RACE, high-throughput sequencing, sequence alignment

Procedia PDF Downloads 167
129 Lightweight and Seamless Distributed Scheme for the Smart Home

Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro

Abstract:

Security of the smart home, in terms of behavior activity pattern recognition, is a distinct issue compared to the security issues of other scenarios. Sensor devices (low capacity and high capacity) interact and negotiate with each other, detecting the daily behavior activities of individuals to execute common tasks. Once a device (e.g., a surveillance camera, smartphone, or light detection sensor) is compromised, an adversary can gain access to that device and damage daily behavior activity by altering data and commands. In this scenario, a group of common instruction processes may become entangled and generate deadlock. Therefore, an effective security solution is required for smart home architectures. This paper proposes a seamless distributed scheme that fortifies low-computational-capacity wireless devices for secure communication. The proposed scheme is based on a lightweight key-session process that upholds a cryptographic link along the trajectory by recognizing individuals’ behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and establishes a secure trajectory connection in the network. Analysis of the experiments reveals that the proposed scheme strengthens the devices against device seizure attacks by recognizing daily behavior activities, minimizes the memory utilization of the LCS, and prevents network deadlock. Additionally, the results of a comparison with other schemes indicate that the scheme is efficient in terms of computation and communication.
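The abstract does not publish its key-session construction; as a minimal stand-in, a per-session key can be derived from a shared authentication token and fresh nonces using HMAC, so that low-capacity sensors avoid public-key operations. The function names and message layout below are illustrative assumptions, not the paper's protocol.

```python
# Hedged sketch: derive a session key from an authentication token and two
# nonces with HMAC-SHA256, keeping computation light for low-capacity sensors.
import hmac
import hashlib

def session_key(token: bytes, nonce_a: bytes, nonce_b: bytes) -> bytes:
    """Derive a fresh session key bound to both parties' nonces."""
    return hmac.new(token, b"session" + nonce_a + nonce_b, hashlib.sha256).digest()

def authenticate(token: bytes, nonce_a: bytes, nonce_b: bytes,
                 received_tag: bytes) -> bool:
    """Check a received key/tag in constant time to resist timing attacks."""
    expected = session_key(token, nonce_a, nonce_b)
    return hmac.compare_digest(expected, received_tag)
```

A compromised device that lacks the token cannot reproduce the session key, which is the property the scheme's device-seizure analysis relies on.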

Keywords: authentication, key-session, security, wireless sensors

Procedia PDF Downloads 296
128 Economical Transformer Selection Implementing Service Lifetime Cost

Authors: Bonginkosi A. Thango, Jacobus A. Jordaan, Agha F. Nnachi

Abstract:

In this day and age, there is widespread concern among governments across the globe to protect the environment from greenhouse gases, which absorb infrared radiation. As a result, solar photovoltaic (PV) electricity has been a rapidly growing renewable energy source and will eventually assume a prominent role in global energy generation. The selection and purchase of energy-efficient transformers that meet the operational requirements of solar photovoltaic generation plants then become part of the Independent Power Producers’ (IPPs’) investment plan. Taking this into account, this paper proposes a procedure that implements the financial analysis required to accurately evaluate the transformer’s service lifetime no-load and load loss factors. This procedure correctly incorporates the transformer service lifetime loss factors resulting from a solar PV plant’s sporadic generation profile, together with the related levelized costs of electricity, into the computation of the transformer’s total ownership cost. The results are then critically compared with the conventional transformer total ownership cost that excludes emission costs, and demonstrate the significance of the sporadic nature of the solar PV plant’s energy generation on the total ownership cost. The findings indicate that the latter plays a crucial role for developers and Independent Power Producers (IPPs) in making the purchase decision during a tender bid, where competing offers from different transformer manufacturers are evaluated. Additionally, a sensitivity analysis of the different factors involved in the transformer service lifetime cost is carried out, examining factors including the levelized cost of electricity, the solar PV plant’s generation modes, and the loading profile.
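The abstract does not give its exact cost model; the conventional total ownership cost formulation is TOC = C + A·P0 + B·Pk, where C is the purchase price and A and B capitalize the no-load loss P0 and load loss Pk over the service lifetime. The sketch below follows that convention; all numeric defaults (energy cost, lifetime, discount rate) are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of a total-ownership-cost computation. A PV plant's
# intermittent generation lowers the effective load-loss factor B, which is
# the effect the abstract highlights.

def loss_capitalization_factor(energy_cost, hours_per_year, years, discount_rate):
    """Present value of 1 kW of continuous loss over the service lifetime."""
    annual_cost = energy_cost * hours_per_year  # cost of 1 kW lost for one year
    return sum(annual_cost / (1 + discount_rate) ** t for t in range(1, years + 1))

def total_ownership_cost(price, p0_kw, pk_kw, load_factor,
                         energy_cost=0.10, years=25, discount_rate=0.08):
    A = loss_capitalization_factor(energy_cost, 8760, years, discount_rate)
    # Load loss scales with the square of per-unit loading, so a sporadic
    # generation profile (low load factor) shrinks B relative to A.
    B = A * load_factor ** 2
    return price + A * p0_kw + B * pk_kw
```

Comparing the same transformer under a continuous load factor versus a PV-plant load factor shows how the generation profile changes the ranking of competing bids.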

Keywords: solar photovoltaic plant, transformer, total ownership cost, loss factors

Procedia PDF Downloads 105
127 Direct Approach in Modeling Particle Breakage Using Discrete Element Method

Authors: Ebrahim Ghasemi Ardi, Ai Bing Yu, Run Yu Yang

Abstract:

This study aimed to develop an in-house discrete element method (DEM) code and link it with a direct breakage event, making it possible to determine particle breakage, and then the fragment size distribution, simultaneously with the DEM simulation. The approach applies particle breakage directly inside the DEM computation algorithm: if any breakage happens, the original particle is replaced with its daughter fragments. In this way, the calculation proceeds on an updated particle list, which closely resembles the real grinding environment. To validate the developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this method provided a simplified environment to test the model. Accordingly, a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)–powder bed impacts in a ball mill and during particle bed impact tests. Mono, binary, and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The results obtained from the DEM simulations showed a reduction in the specific breakage rate for coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the DEM simulations: fine particles in a particle bed increase mechanical energy loss and reduce and distribute interparticle forces, thereby inhibiting the breakage of the coarse component. On the other hand, the specific breakage rate of fine particles increased due to contacts associated with coarse particles. This phenomenon, known as acceleration, was shown to be less significant, but should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.
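The replace-with-daughters step described above can be sketched as follows. The breakage criterion (impact energy exceeding a size-dependent threshold) and the equal-volume fragment rule are illustrative assumptions; the paper's actual breakage model and fragment size distribution are not given in the abstract.

```python
# Hedged sketch: after each contact resolution step, parents whose received
# impact energy exceeds an assumed threshold are removed from the particle
# list and replaced by equal-volume daughters, so the simulation continues
# on an updated list, as the abstract describes.

def breakage_step(particles, impact_energy, e_crit=1.0, n_daughters=4):
    """particles: list of {'radius': r}; impact_energy: {index: energy}."""
    updated = []
    for idx, p in enumerate(particles):
        if impact_energy.get(idx, 0.0) > e_crit * p['radius'] ** 3:
            # Conserve total volume: n equal daughters of radius r / n^(1/3).
            r_d = p['radius'] / n_daughters ** (1 / 3)
            updated.extend({'radius': r_d} for _ in range(n_daughters))
        else:
            updated.append(p)
    return updated
```

A real implementation would also place the daughters inside the parent's volume and sample their sizes from a fragment size distribution rather than splitting evenly.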

Keywords: particle bed, breakage models, breakage kinetic, discrete element method

Procedia PDF Downloads 172
126 Human Leukocyte Antigen Class 1 Phenotype Distribution and Analysis in Persons from Central Uganda with Active Tuberculosis and Latent Mycobacterium tuberculosis Infection

Authors: Helen K. Buteme, Rebecca Axelsson-Robertson, Moses L. Joloba, Henry W. Boom, Gunilla Kallenius, Markus Maeurer

Abstract:

Background: The Ugandan population is heavily affected by infectious diseases, and human leukocyte antigen (HLA) diversity plays a crucial role in the host-pathogen interaction, affecting the rates of disease acquisition and outcome. Identifying HLA class 1 alleles and determining which alleles are associated with tuberculosis (TB) outcomes would help in screening individuals in TB-endemic areas for susceptibility to TB and in predicting resistance or progression to TB, which would inevitably lead to better clinical management of TB. Aims: To determine the HLA class 1 phenotype distribution in a Ugandan TB cohort and to establish the relationship between these phenotypes and active and latent TB. Methods: Blood samples were drawn from 32 HIV-negative individuals with active TB and 45 HIV-negative individuals with latent MTB infection. DNA was extracted from the blood samples, and the DNA samples were HLA typed by the polymerase chain reaction-sequence specific primer method. The allelic frequencies were determined by direct count. Results: HLA-A*02, A*01, A*74, A*30, B*15, B*58, C*07, C*03, and C*04 were the dominant phenotypes in this Ugandan cohort. There were differences in the distribution of HLA types between the individuals with active TB and the individuals with LTBI, with only the HLA-A*03 allele showing a statistically significant difference (p=0.0136). However, after FDR computation, the corresponding q-value is above the expected proportion of false discoveries (q-value 0.2176). Key findings: We identified a number of HLA class I alleles in a population from Central Uganda, which will enable us to carry out a functional characterization of CD8+ T-cell mediated immune responses to MTB. Our results also suggest that there may be a positive association between the HLA-A*03 allele and TB, implying that individuals with the HLA-A*03 allele are at a higher risk of developing active TB.
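The abstract reports a raw p-value of 0.0136 that becomes a q-value of 0.2176 after FDR computation. The exact correction procedure is not stated; the Benjamini-Hochberg method is the standard choice and is sketched below with an illustrative p-value list (not the study's data).

```python
# Hedged sketch of Benjamini-Hochberg q-values: each sorted p-value is scaled
# by m/rank, then a running minimum from the largest rank downward enforces
# monotonicity. A q-value above the chosen FDR level (as in the abstract)
# means the raw significance does not survive multiple-testing correction.

def benjamini_hochberg(pvalues):
    """Return BH q-values in the original order of `pvalues`."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    qvals = [0.0] * m
    running_min = 1.0
    for rank, i in reversed(list(enumerate(order, start=1))):
        running_min = min(running_min, pvalues[i] * m / rank)
        qvals[i] = running_min
    return qvals
```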

Keywords: HLA, phenotype, tuberculosis, Uganda

Procedia PDF Downloads 383
125 Comparison of Modulus from Repeated Plate Load Test and Resonant Column Test for Compaction Control of Trackbed Foundation

Authors: JinWoog Lee, SeongHyeok Lee, ChanYong Choi, Yujin Lim, Hojin Cho

Abstract:

The primary function of the trackbed in a conventional railway track system is to decrease the stresses in the subgrade to an acceptable level. A properly designed trackbed layer performs this task adequately. Many design procedures use assumed and/or field-measured critical stiffness values of the layers to calculate an appropriate thickness for the sublayers of the trackbed foundation. However, those stiffness values do not consider the strain levels in the layers clearly and precisely. This study proposes a method of computing stiffness that accounts for the strain level in the layers of the trackbed foundation, in order to provide properly selected design values of layer stiffness. Since shear modulus depends on the shear strain level, the strain levels generated in the subgrade of the trackbed under wheel loading and below the plate of the Repeated Plate Bearing Test (RPBT) are investigated using the finite element analysis programs ABAQUS and PLAXIS. The strain levels generated in the subgrade from RPBT are compared to values from the resonant column (RC) test after accounting for strain levels and stress conditions. To compare the shear modulus G obtained from RC tests with the stiffness modulus Ev2 obtained from RPBT in the field, a large number of mid-size RC tests in the laboratory and RPBT in the field were performed. It was found in this study that there is a big difference in stiffness modulus when the converted Ev2 values are compared to RC test values. This study verifies that precise and increased loading steps are necessary when constructing nonlinear curves from RPBT in order to obtain correct Ev2 values at the proper strain levels.
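The comparison above hinges on evaluating stiffness at a common strain level. The abstract does not specify the strain-dependence model used; a standard illustrative choice is the hyperbolic (Hardin-Drnevich) modulus reduction curve, sketched here as an assumption rather than the paper's actual formulation.

```python
# Hedged sketch: secant shear modulus falling with shear strain via the
# hyperbolic model G(gamma) = G_max / (1 + gamma / gamma_ref). Comparing RC
# and RPBT moduli at mismatched strain levels (as the abstract warns) would
# read two different points off this curve.

def shear_modulus(g_max, gamma, gamma_ref):
    """Secant shear modulus at shear strain `gamma` (hyperbolic model)."""
    return g_max / (1.0 + gamma / gamma_ref)
```

At the reference strain the modulus has dropped to half of G_max, which illustrates how large the RC-versus-RPBT discrepancy can be if strain levels are ignored.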

Keywords: modulus, plate load test, resonant column test, trackbed foundation

Procedia PDF Downloads 470
124 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are big challenges for all types of media, especially social media. There is a great deal of false information, fake likes, views, and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to better detect false information with less computation time and complexity, dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection, which aims to choose a feature subset from the original set that improves classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information for our work. Detection performance improved in two respects: the detection runtime decreased, and the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensionality.
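The four-step method above can be sketched in pure Python: (1) feature similarity is taken here as Euclidean distance between feature value profiles, (2) features are clustered with k-means, (3) the feature nearest each centroid is kept, and (4) an SVM (e.g. scikit-learn's `SVC`, omitted below) would then classify on the reduced feature set. The similarity measure, cluster count, and data are illustrative assumptions; the abstract does not state the paper's exact choices.

```python
# Hedged sketch of K-means-based feature selection for a classifier pipeline.
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    cents = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist2(p, cents[j]))].append(p)
        # Recompute centroids; keep the old one if a cluster emptied out.
        cents = [tuple(sum(v) / len(g) for v in zip(*g)) if g else cents[j]
                 for j, g in enumerate(groups)]
    return cents

def select_features(X, k):
    """X: samples as rows. Return indices of one representative feature per cluster."""
    cols = list(zip(*X))  # each feature's value profile across samples
    cents = kmeans(cols, k)
    return sorted({min(range(len(cols)), key=lambda i: dist2(cols[i], c))
                   for c in cents})
```

Redundant features cluster together, so keeping one representative per cluster is what removes the redundancy the abstract credits for the runtime and accuracy gains.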

Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine

Procedia PDF Downloads 149
123 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling

Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed

Abstract:

The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm on forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward back propagation networks. Results revealed that the GDM optimisation algorithm, with its adaptive learning capability, generally used a relatively shorter time in both the training and validation phases than the LM and Br algorithms, though learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions for 1-day and 5-day ahead forecasts. In specific statistical terms, average model performance efficiencies using the coefficient of efficiency (CE) statistic were Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, respectively, for the training and validation phases. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, the adoption of ANNs for real-time forecasting should employ training algorithms without the computational overhead of LM, which requires computation of the Hessian matrix and protracted time and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and forecast quality as well as mitigation of network overfitting.
On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.
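The coefficient of efficiency (CE) statistic quoted above is, by convention in streamflow modelling, the Nash-Sutcliffe efficiency; the abstract does not state the exact definition used, so the sketch below follows the standard form as an assumption.

```python
# Hedged sketch of the Nash-Sutcliffe coefficient of efficiency:
# CE = 1 for a perfect forecast; CE = 0 means the model does no better than
# always predicting the observed mean; CE < 0 means it does worse.

def coefficient_of_efficiency(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```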

Keywords: streamflow, neural network, optimisation, algorithm

Procedia PDF Downloads 127
122 Multivariate Analysis on Water Quality Attributes Using Master-Slave Neural Network Model

Authors: A. Clementking, C. Jothi Venkateswaran

Abstract:

Mathematical and computational functionalities such as descriptive mining, optimization, and prediction are employed to support natural resource planning. Optimization techniques are adopted for water quality prediction and for determining the influence of its attributes. Water properties are tainted when one water resource is merged with another. This work aimed to predict the influence of water resource distribution connectivity on water quality and sediment using an innovative master-slave back-propagation neural network model. The experimental results were arrived at by collecting water quality attributes, computing a water quality index, designing and developing a neural network model to determine water quality and sediment, applying the master-slave back-propagation model to determine variations in water quality and sediment attributes between water resources, and making recommendations for connectivity. Homogeneous and parallel biochemical reactions influence water quality and sediment while water is distributed from one location to another. Therefore, an innovative master-slave neural network model [M(9:9:2)::S(9:9:2)] was designed and developed to predict the attribute variations. The training dataset is given as input to the master model, and its maximum weights are passed as input to the slave model to predict water quality. The developed master-slave model predicted physicochemical attribute weight variations for 85% to 90% of water quality as target values. Sediment level variations were also predicted at 0.01 to 0.05% of each water quality percentage. The model produced significant variations in physicochemical attribute weights. According to the predicted weight variations on the training dataset, effective recommendations are made to connect different resources.

Keywords: master-slave back propagation neural network model (MSBPNNM), water quality analysis, multivariate analysis, environmental mining

Procedia PDF Downloads 447
121 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using convolutional neural networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. It is important to note that we use a subset of LivDet 2017 and that the database is the same across all training and testing for each model. This way, we can compare performance, in terms of generalization to unseen data, across all the models. The best CNN (AlexNet), with the appropriate loss function and optimizer, yields more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to reporting the highest generalization performance, this paper also reports model parameter counts and mean average error rates, to identify the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proved very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
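The experimental design above amounts to a grid search over model, loss function, and optimizer. A minimal harness for that grid is sketched below; `train_and_evaluate` is a stand-in for the actual CNN training loop (e.g. in Keras or PyTorch), which the abstract does not detail, and the string identifiers are just labels.

```python
# Hedged sketch: enumerate every (model, loss, optimizer) combination from the
# abstract, score each with a caller-supplied evaluation function, and return
# the best combination along with all scores.
from itertools import product

MODELS = ["AlexNet", "VGGNet", "ResNet"]
LOSSES = ["cross_entropy", "center_loss", "cosine_proximity", "hinge"]
OPTIMIZERS = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

def grid_search(train_and_evaluate):
    """Score all 3 x 4 x 6 = 72 combinations; higher score is better."""
    scores = {combo: train_and_evaluate(*combo)
              for combo in product(MODELS, LOSSES, OPTIMIZERS)}
    best = max(scores, key=scores.get)
    return best, scores
```

Keeping the dataset fixed across all 72 runs, as the abstract stresses, is what makes the scores comparable as measures of generalization.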

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 110
120 Free Energy Computation of A G-Quadruplex-Ligand Structure: A Classical Molecular Dynamics and Metadynamics Simulation Study

Authors: Juan Antonio Mondragon Sanchez, Ruben Santamaria

Abstract:

The DNA G-quadruplex is a four-stranded DNA structure formed by stacked planes of four base-paired guanines (G-quartets). Guanine-rich DNA sequences appear at many sites of genomic DNA and can potentially form G-quadruplexes, such as those occurring at the 3'-terminus of human telomeric DNA. The formation and stabilization of a G-quadruplex by small ligands at the telomeric region can inhibit telomerase activity. In turn, such ligands can be used to down-regulate oncogene expression, making the G-quadruplex an attractive target for anticancer therapy. Many G-quadruplex ligands have been proposed with a planar core to facilitate pi–pi stacking and electrostatic interactions with the G-quartets. However, many drug candidates are unable to discriminate a G-quadruplex from a double-helix DNA structure. In this context, it is important to investigate the site topology of the interaction between a G-quadruplex and a ligand. In this work, we determine the free energy surface of a G-quadruplex-ligand complex to study the binding modes of the G-quadruplex (TG4T) with the daunomycin (DM) drug. The TG4T-DM complex is studied using classical molecular dynamics in combination with metadynamics simulations. The metadynamics simulations permit enhanced sampling of the conformational space at a modest computational cost and yield free energy surfaces in terms of the collective variables (CVs). The free energy surfaces of TG4T-DM exhibit additional local minima, indicating the presence of binding modes of daunomycin that are not observed in short MD simulations without the metadynamics approach. The results are compared with similar calculations on a different structure (the mutated mu-G4T-DM, where the 5' thymines of TG4T-DM have been deleted). The results should help in designing new G-quadruplex drugs and in understanding the differences in the recognition topology of duplex and quadruplex DNA structures in their interactions with ligands.
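The core idea of metadynamics used above can be sketched in one dimension: a history-dependent bias of Gaussian hills is periodically deposited along the collective variable, pushing the system out of already-visited minima, and the negative of the accumulated bias estimates the free energy surface. Hill height and width below are illustrative, not the simulation parameters of the paper.

```python
# Hedged sketch of the metadynamics bias on a single collective variable s.
import math

def deposit_hill(hills, s_t, height=1.0, width=0.1):
    """Record a Gaussian hill centered at the current CV value s_t."""
    hills.append((s_t, height, width))

def bias(hills, s):
    """Total bias potential V(s) from all deposited hills."""
    return sum(h * math.exp(-(s - s0) ** 2 / (2 * w ** 2)) for s0, h, w in hills)

def free_energy_estimate(hills, s):
    """In the long-time limit, F(s) is approximately -V(s) up to a constant."""
    return -bias(hills, s)
```

Because hills accumulate wherever the system lingers, a basin that would trap a short unbiased MD run (hiding the extra daunomycin binding modes) eventually fills up and gets escaped.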

Keywords: g-quadruplex, cancer, molecular dynamics, metadynamics

Procedia PDF Downloads 433
119 Effects of Computer Aided Instructional Package on Performance and Retention of Genetic Concepts amongst Secondary School Students in Niger State, Nigeria

Authors: Muhammad R. Bello, Mamman A. Wasagu, Yahya M. Kamar

Abstract:

The study investigated the effects of a computer-aided instructional package (CAIP) on performance and retention of genetic concepts among secondary school students in Niger State. A quasi-experimental research design, i.e., pre-test/post-test experimental and control groups, was adopted for the study. The population of the study was all senior secondary school three (SS3) students offering biology. A sample of 223 students was randomly drawn from six purposively selected secondary schools. The researchers’ computer-aided instructional package (CAIP) on genetic concepts was used as the treatment instrument for the experimental group, while the control group was exposed to the conventional lecture method (CLM). The instrument for data collection was a Genetic Performance Test (GEPET) with 50 multiple-choice questions, validated by science educators. A reliability coefficient of 0.92 was obtained for GEPET using the Pearson Product Moment Correlation (PPMC). The data collected were analyzed using the IBM SPSS Version 20 package for computation of means, standard deviations, t-tests, and analysis of covariance (ANCOVA). The ANCOVA result (Fcal (220) = 27.147, P < 0.05) shows that students who received instruction with CAIP outperformed the students who received instruction with CLM and also had higher retention. The findings also revealed no significant difference in performance and retention between male and female students (tcal (103) = -1.429, P > 0.05). It was recommended, amongst others, that teachers should use computer-aided instructional packages in teaching genetic concepts in order to improve students’ performance and retention in biology.

Keywords: computer aided instructional package, performance, retention, genetic concepts, senior secondary school students

Procedia PDF Downloads 339
118 Evaluation of Chitin Filled Epoxy Coating for Corrosion Protection of Q235 Steel in Saline Environment

Authors: Innocent O. Arukalam, Emeka E. Oguzie

Abstract:

Interest in the development of eco-friendly anti-corrosion coatings using bio-based renewable materials has been gaining momentum recently. To this effect, chitin biopolymer, which is non-toxic, biodegradable, and inherently possesses anti-microbial properties, was successfully synthesized from snail shells and used as a filler in the preparation of an epoxy coating. The chitin particles were characterized with a contact angle goniometer, a scanning electron microscope (SEM), a Fourier transform infrared (FTIR) spectrophotometer, and an X-ray diffractometer (XRD). The performance of the coatings was evaluated by immersion and electrochemical impedance spectroscopy (EIS) tests. Electronic structure properties of the coating ingredients and the molecular-level interaction of the corrodent with the coated Q235 steel were appraised by quantum chemical computation (QCC) and molecular dynamics (MD) simulation techniques, respectively. The water contact angle (WCA) of the chitin particles was found to be 129.3°, while that of chitin particles modified with amino trimethoxy silane (ATMS) was 149.6°, indicating that the latter are highly hydrophobic. Immersion and EIS analyses revealed that the epoxy coating containing silane-modified chitin exhibited the lowest water absorption and the highest barrier and anti-corrosion performance. The QCC showed that the quantum parameters for the coating containing silane-modified chitin are optimum and therefore correspond to high corrosion protection. The high negative adsorption energies (Eads) for the coating containing silane-modified chitin indicate that the coating molecules interacted with and adsorbed strongly on the steel surface. The observed results show that a silane-modified epoxy-chitin coating would perform satisfactorily for surface protection of metal structures in a saline environment.

Keywords: chitin, EIS, epoxy coating, hydrophobic, molecular dynamics simulation, quantum chemical computation

Procedia PDF Downloads 65
117 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG’s form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged, which can further complicate visual diagnosis and delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, cardiology is one of the fields that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize its outcomes. Performance of the model for detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences from the normal cardiac activity of their patients.
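The paper's model is an IncResU-Net, whose architecture the abstract does not specify. For orientation only, a classical baseline for the same task is sketched below: threshold the signal and enforce a refractory period so two detected R-peaks cannot fall physiologically close together. The threshold and refractory values are illustrative assumptions.

```python
# Hedged sketch of a threshold-plus-refractory R-peak detector, a simple
# baseline against which a learned detector like the paper's would be compared.

def detect_r_peaks(ecg, threshold=0.5, refractory=40):
    """Return sample indices of local maxima above `threshold`,
    at least `refractory` samples apart."""
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if (ecg[i] > threshold and ecg[i] >= ecg[i - 1]
                and ecg[i] > ecg[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks
```

A fixed threshold is exactly what fails on noisy or morphologically unusual ECGs, which motivates the learned, database-spanning approach the abstract describes.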

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 149
116 An Overview of Domain Models of Urban Quantitative Analysis

Authors: Mohan Li

Abstract:

Nowadays, intelligent research technology is increasingly important relative to traditional research methods in urban research, and this proportion will grow greatly in the next few decades. Frequently, such analysis cannot be carried out without some software engineering knowledge, and domain models of urban research are necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, making rational models, feeding in reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning. Throughout the work process, domain models can optimize workflow design. Humankind has entered the era of big data: the amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from this massive amount of data, and how to manage and process it, has become an ability that more and more planners and urban researchers need to possess. This paper summarizes and makes predictions about the emergence of technologies and technological iterations that may affect urban research in the future, helping to discover urban problems and implement targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, urban ecological domain model, urban industry domain model, development dynamic domain model, urban social and cultural domain model, urban traffic domain model, and urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, and GIS. They make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.

Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design

Procedia PDF Downloads 156