Search results for: approximation of analytic functions
987 Ideology and Lexicogrammar: Discourse Against the Power in Lyrical Texts (XIII, XVII and XX Centuries)
Authors: Ulisses Tadeu Vaz de Oliveira
Abstract:
The development of multifunctional studies in the theoretical-methodological perspective of Systemic-Functional Grammar (SFG) and the increasing number of critical literary studies have introduced new opportunities for the study of ideologies and societies, but have also brought up new challenges across and within many areas. In this regard, research in Critical Linguistics allows a textual linguistic analysis method (micro level) to be paired with a social theory of language in political and ideological processes (macro level), as presented in the literature. This presentation reports on strategies used to criticize power holders in literary productions from three distinct eras, namely: (a) the satirical Galician-Portuguese cantigas of Gil Pérez Conde (thirteenth century), (b) the poems of Gregorio de Matos Guerra (seventeenth century), and (c) the songs of Chico Buarque de Holanda (twentieth century). The analysis of these productions is based on the SFG proposals, which consider the clause as a social event. The clause structure therefore serves to realize three concurrent meanings (metafunctions): ideational, interpersonal, and textual. The presenter aims to shed light on the core issues relevant to the authors' success in criticizing authorities in repressive times while attending to face-threatening acts and politeness. The effective and meaningful critical discourse was a way of moving society's chains towards new ideologies, reflected in the authors' lexicogrammatical choices and in the rhetorical functions of their persuasive structures.
Keywords: ideology, literature, persuasion, systemic-functional grammar
Procedia PDF Downloads 418
986 Enhancing Quality Management Systems through Automated Controls and Neural Networks
Authors: Shara Toibayeva, Irbulat Utepbergenov, Lyazzat Issabekova, Aidana Bodesova
Abstract:
The article discusses the importance of quality assessment as a strategic tool in business and emphasizes the significance of effective quality management systems (QMS) for enterprises. The evaluation of these systems takes into account the specificity of quality indicators, the multilevel nature of the system, and the need for an optimal selection of the number of indicators and an evaluation of the system state, which is critical for making rational management decisions. Methods and models of automated enterprise quality management are proposed, including an intelligent automated quality management system integrated with the Management Information and Control System. These systems make it possible to automate the implementation and support of a QMS, increasing the validity, efficiency, and effectiveness of management decisions by automating the functions performed by decision makers and personnel. The paper also emphasizes the use of recurrent neural networks to improve automated quality management. Recurrent neural networks (RNNs) are used to analyze and process sequences of data, which is particularly useful in the context of document quality assessment and non-conformance detection in quality management systems. These networks are able to account for temporal dependencies and complex relationships between different data elements, which improves the accuracy and efficiency of automated decisions. The project was supported by a grant from the Ministry of Education and Science of the Republic of Kazakhstan under the Zhas Galym project No. AR 13268939, dedicated to research and development of digital technologies to ensure consistency of QMS regulatory documents.
Keywords: automated control system, quality management, document structure, formal language
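The abstract does not specify the network architecture; as a rough illustration of how an RNN can score a document, represented as a sequence of section-level feature vectors, as conformant or non-conformant, the following PyTorch sketch may help. The model shape, feature dimensions, and class labels are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a recurrent network that
# classifies a document, represented as a sequence of feature vectors, as
# conformant vs. non-conformant. All dimensions and names are assumptions.
import torch
import torch.nn as nn

class DocQualityRNN(nn.Module):
    def __init__(self, n_features=32, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # 2 classes: conform / non-conform

    def forward(self, x):               # x: (batch, seq_len, n_features)
        _, h = self.rnn(x)              # h: (1, batch, hidden), last hidden state
        return self.head(h.squeeze(0))  # logits: (batch, 2)

model = DocQualityRNN()
docs = torch.randn(8, 120, 32)          # 8 documents, 120 sections, 32 features each
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(docs), labels)
loss.backward()                          # one illustrative training step
```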
Procedia PDF Downloads 39
985 Liesegang Phenomena: Experimental and Simulation Studies
Authors: Vemula Amalakrishna, S. Pushpavanam
Abstract:
Change and motion characterize and persistently reshape the world around us, on scales from molecular to global. The subtle interplay between change (reaction) and motion (diffusion) gives rise to astonishingly intricate spatial or temporal patterns. Such pattern formation in nature has been intellectually appealing to scientists since antiquity. Periodic precipitation patterns, also known as Liesegang patterns (LP), are one of the most stimulating examples of such self-assembling reaction-diffusion (RD) systems. LP formation has great potential in micro- and nanotechnology. So far, research on LPs has concentrated mostly on how these patterns form, retrieving information to build a universal mathematical model for them. Researchers have developed various theoretical models to comprehensively construct the geometrical diversity of LPs. To the best of our knowledge, simulation studies of LPs assume arbitrary values of the RD parameters to explain experimental observations qualitatively. In this work, existing models were studied to understand the mechanism behind this phenomenon, and the challenges pertaining to these models were identified and explained. These models are not computationally efficient due to the presence of a discontinuous precipitation rate in the RD equations. To overcome the computational challenges, smoothed Heaviside functions have been introduced, which also reduces the computational time. Experiments were performed using a conventional LP system (AgNO₃-K₂Cr₂O₇) to understand the effects of different gels and temperatures on the formed LPs. The model is extended to real parameter values to compare the simulated results with experimental data for both 1-D (Cartesian test tubes) and 2-D (cylindrical and Petri dish) geometries.
Keywords: reaction-diffusion, spatio-temporal patterns, nucleation and growth, supersaturation
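The abstract does not give the exact regularization used; a common choice, shown here as a minimal sketch, replaces the discontinuous precipitation switch with a tanh-based smooth Heaviside whose steepness parameter k is a tuning assumption:

```python
# A minimal sketch (assumed form, not the authors' exact regularization): the
# discontinuous precipitation switch H(c - c_sat) in the RD source term is
# replaced by a smooth approximation so that standard ODE/PDE integrators can
# take larger steps. The steepness k is an illustrative tuning parameter.
import numpy as np

def heaviside_sharp(x):
    return np.where(x > 0.0, 1.0, 0.0)           # discontinuous switch

def heaviside_smooth(x, k=50.0):
    return 0.5 * (1.0 + np.tanh(k * x))          # smooth surrogate, -> step as k -> inf

# Precipitation rate switched on above a supersaturation threshold c_sat:
c = np.linspace(0.0, 2.0, 401)
c_sat, rate_const = 1.0, 0.1
rate_sharp = rate_const * heaviside_sharp(c - c_sat) * (c - c_sat)
rate_smooth = rate_const * heaviside_smooth(c - c_sat) * (c - c_sat)
print(np.max(np.abs(rate_sharp - rate_smooth)))  # small away from the threshold
```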
Procedia PDF Downloads 152
984 The Relevance of Family Involvement in the Journey of Dementia Patients
Authors: Akankunda Veronicah Karuhanga
Abstract:
Dementia is an age-related mental disorder that makes its victims lose normal functionality and therefore needs delicate attention. It has been technically defined as a clinical syndrome that presents difficulties in speech and other cognitive functions, changes a person's behavior, and can also cause impairments in activities of daily living, alongside a range of neurological disorders that bring memory loss and cognitive impairment. Family members are the primary caregivers, and therefore the way they handle the condition in its early stages determines future deterioration syndromes like total memory loss. Unfortunately, most family members are ignorant about this condition, and in most cases the patients are brought to our facilities when their condition has already been mismanaged by family members, so we cannot do much. For example, incontinence can be managed at early stages through potty training or toilet scheduling before resorting to around-the-clock diapers, which are also not good. Professional elderly care should be understood and practiced as an extension of homes, not a dumping place for people considered “abnormal” on account of ignorance. Immediate relatives should therefore be sensitized concerning the normalcy of dementia in the context of old age so that they can be understanding and supportive of dementia patients rather than discriminating against them as present-day lepers. There is a need to train home-based caregivers in how to handle dementia in its early stages. Unless this is done, many of our elderly homes shall be filled with patients who should have been treated and supported from their homes. This training of home-based caregivers is a vital intervention because, until elderly care is appreciated as a human moral obligation, many transactional rehabilitation centers will crop up, and this shall be one of the worst moral decadences of our times.
Keywords: dementia, family, Alzheimer's, relevancy
Procedia PDF Downloads 97
983 The Translation of Code-Switching in African Literature: Comparing the Two German Translations of Ngugi Wa Thiongo’s "Petals of Blood"
Authors: Omotayo Olalere
Abstract:
The relevance of code-switching for intercultural communication through literary translation cannot be overemphasized. The translation of code-switching and its implications for translation studies have been examined in the context of African literature. In those cases, code-switching was studied in the more general terms of its usage in source texts, and not specifically in Ngugi's novels and their translations. In addition, the functions of translation and code-switching in the lyrics of some popular African songs have been studied, but such work relates more to oral performance than to written literature. As such, little has been done on the German translation of code-switching in African works. This study intends to fill this lacuna by examining the concept of code-switching in the two German translations of Ngugi's Petals of Blood. The aim is to highlight the significance of code-switching as a phenomenon in this African (Ngugi's) novel written in English and to focus on its representation in the two German translations. The target texts used are Verbrannte Blueten and Land der flammenden Blueten. “Abrogation” as a concept will play an important role in the analysis of the data. Findings will show that the ideology of a translator plays a huge role in representing the concept of “abrogation” in the translation of code-switching in the selected source text. The study will contribute to knowledge in translation studies by bringing to the limelight the need to foreground aspects of language contact in translation theory and practice, particularly in the African context. Relevant translation theories adopted for the study include Bandia's (2008) postcolonial theory of translation and Snell-Hornby's (1988) cultural translation theory.
Keywords: code-switching, German translation, Ngugi wa Thiong'o, Petals of Blood
Procedia PDF Downloads 91
982 Research on Spatial Distribution of Service Facilities Based on Innovation Function: A Case Study of Zhejiang University Zijin Co-Maker Town
Authors: Zhang Yuqi
Abstract:
Service facilities are boosters for the cultivation and development of innovative functions in innovation cluster areas. At the same time, reasonable service facility planning can better link the internal functional blocks. This paper takes Zhejiang University Zijin Co-Maker Town as its research object and, based on a combination of network data mining with field research and verification, together with the needs of its internal innovation groups, studies the distribution characteristics and existing problems of the service facilities and then proposes targeted planning suggestions. The main conclusions are as follows: (1) From the perspective of provision, the town is rich in general life-supporting services but lacks targeted and distinctive service facilities for innovation groups; (2) from the perspective of scale structure, small-scale street shops are the main business form, and large-scale service centers are lacking; (3) from the perspective of spatial structure, the service facility layout of each functional block is too fragmented to fit the 'aggregation-distribution' characteristics of innovation and entrepreneurial activities; (4) the optimization of service facility planning should be guided by the goal of fostering innovation and entrepreneurship and should meet the actual needs of the innovation and entrepreneurial groups.
Keywords: the cultivation of innovative function, Zhejiang University Zijin Co-Maker Town, service facilities, network data mining, space optimization advice
Procedia PDF Downloads 117
981 The Interaction between Blood-Brain Barrier and the Cerebral Lymphatics Proposes Therapeutic Method for Alzheimer’s Disease
Authors: M. Klimova, O. Semyachkina-Glushkovskaya, J. Kurts, E. Zinchenko, N. Navolokin, A. Shirokov, A. Dubrovsky, A. Abdurashitov, A. Terskov, A. Mamedova, I. Agranovich, T. Antonova, I. Blokhina
Abstract:
The direction of research on Alzheimer's disease is to find an effective, non-invasive, and non-pharmacological way of treatment. Here we tested our hypothesis that the opening of the blood-brain barrier (BBB) induces activation of lymphatic drainage and clearing functions, which can be used as a method for non-invasive stimulation of the clearance of beta-amyloid and for therapy of Alzheimer's disease (AD). To test our hypothesis, in this study on healthy male mice we analyzed the interaction between BBB opening by repeated loud music (100-10000 Hz, 100 dB, duration 2 h: 60 sec sound; 60 sec pause) and functional changes in the meningeal lymphatic vessels (MLVs). We demonstrate clearance of dextran 70 kDa (i.v. injection), fluorescent beta-amyloid (intrahippocampal injection), and gold nanorods (intracortical injection) via MLVs that significantly increased after the opening of the BBB. Our studies also demonstrate that BBB opening was associated with an improvement of the neurocognitive status in mice with AD. Thus, we uncover therapeutic effects of BBB opening by loud music, such as non-invasive stimulation of lymphatic clearance of beta-amyloid in mice with AD, accompanied by an improvement of their neurocognitive status. Our data are consistent with other results suggesting a therapeutic effect of BBB opening by focused ultrasound without drugs for patients with AD. This research was supported by grant RSF 18-75-10033.
Keywords: Alzheimer's disease, beta-amyloid, blood-brain barrier, meningeal lymphatic vessels, repeated loud music
Procedia PDF Downloads 142
980 First Order Moment Bounds on DMRL and IMRL Classes of Life Distributions
Authors: Debasis Sengupta, Sudipta Das
Abstract:
The class of life distributions with decreasing mean residual life (DMRL) is well known in the field of reliability modeling. It contains the IFR class of distributions and is contained in the NBUE class of distributions. While upper and lower bounds on the reliability function of aging classes such as IFR, IFRA, NBU, NBUE, and HNBUE have been discussed in the literature for a long time, there is no analogous result available for the DMRL class. We obtain upper and lower bounds for the reliability function of the DMRL class in terms of the first-order finite moment. The lower bound is obtained by showing that, for any fixed time, the minimization of the reliability function over the class of all DMRL distributions with a fixed mean is equivalent to its minimization over a smaller class of distributions of a special form. Optimization over this restricted set can be carried out algebraically. Likewise, the maximization of the reliability function over the class of all DMRL distributions with a fixed mean turns out to be a parametric optimization problem over the class of DMRL distributions of a special form. The constructive proofs also establish that both the upper and lower bounds are sharp. Further, the DMRL upper bound coincides with the HNBUE upper bound, and the lower bound coincides with the IFR lower bound. We also prove a pair of sharp upper and lower bounds for the reliability function when the distribution has increasing mean residual life (IMRL) and a fixed mean; this result is proved in a similar way. These inequalities fill a long-standing void in the literature on life distribution modeling.
Keywords: DMRL, IMRL, reliability bounds, hazard functions
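For orientation, the classical first-moment lower bound for the IFR class, with which (per the abstract) the new DMRL lower bound coincides, has the well-known Barlow-Proschan form; it is quoted here only for illustration, and the paper should be consulted for the exact DMRL and IMRL statements.

```latex
% Classical sharp lower bound on an IFR survival function \bar{F} with mean \mu;
% per the abstract, the DMRL lower bound coincides with this bound.
\bar{F}(t) \;\ge\;
\begin{cases}
e^{-t/\mu}, & 0 \le t < \mu,\\
0, & t \ge \mu,
\end{cases}
\qquad \text{where } \mu = \int_0^\infty \bar{F}(x)\,dx.
```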
Procedia PDF Downloads 397
979 Structural Health Monitoring Method Using Stresses Occurring on Bridge Bearings Under Temperature
Authors: T. Nishido, S. Fukumoto
Abstract:
The functions of movable bearings decline due to corrosion and sediments. As a result, they cannot move or rotate in accordance with the behavior of the girders. Because of these constraints, bending moments are generated by the horizontal reaction forces acting over the heights of the girders. Under these conditions, the authors obtained the following results by analysis and experiment. Tensile stresses due to the moments occurred under temperature fluctuations. The large tensile stresses on the concrete slabs around the bearings caused cracks. Even if the concrete slabs are newly replaced, cracks will appear again while the bearings remain functionally degraded. The functional decline of bearings is generally detected using displacement gauges; however, that method is not suitable for long-term measurement. We focused on the change in the strains at the bearings, and at the lower flanges near them, under temperature fluctuations. It was found that these strains were particularly large when the movements of the bearings were constrained. Therefore, we developed a wireless long-term health monitoring system with FBG (fiber Bragg grating) sensors attached to the bearings and the lower flanges. FBG sensors have characteristics such as immunity to electrical interference, resistance to weather, and high strain sensitivity; such characteristics are suitable for long-term measurements. The monitoring system was inexpensive because it was limited to the purpose of measuring strains and temperature. Engineers can monitor the behavior of the bearings in real time with the wireless system. If an office is far from the bridge site, the system will save traveling time and cost.
Keywords: bridge bearing, concrete slab, FBG sensor, health monitoring
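Since FBG readouts arrive as Bragg wavelength shifts, a conversion step is needed before strains can be monitored. The sketch below uses the standard first-order FBG relation with typical textbook constants for silica fiber; the constants, the reference-grating compensation scheme, and all names are assumptions rather than details taken from the paper.

```python
# A minimal sketch of the standard first-order FBG conversion (not taken from
# the paper): the Bragg wavelength shift responds to strain and temperature as
#   d_lambda / lambda0 = (1 - p_e) * eps + (alpha + xi) * dT,
# so a strain-free reference grating can be used for temperature compensation.
# The constants below are typical textbook values for silica fiber (assumed).
P_E = 0.22        # effective photo-elastic coefficient
ALPHA = 0.55e-6   # thermal expansion coefficient of silica [1/K]
XI = 6.7e-6       # thermo-optic coefficient [1/K]

def strain_from_shift(d_lambda_nm, d_lambda_ref_nm, lambda0_nm=1550.0):
    """Mechanical strain from a measurement grating, temperature-compensated
    by a co-located strain-free reference grating."""
    thermal_part = d_lambda_ref_nm / lambda0_nm          # = (ALPHA + XI) * dT
    return (d_lambda_nm / lambda0_nm - thermal_part) / (1.0 - P_E)

# Example: 0.10 nm total shift, 0.028 nm of it thermal -> ~59.5 microstrain
print(strain_from_shift(0.10, 0.028) * 1e6)
```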
Procedia PDF Downloads 221
978 Comparative Stem Cells Therapy for Regeneration of Liver Fibrosis
Authors: H. M. Imam, H. M. Rezk, A. F. Tohamy
Abstract:
Background: Human umbilical cord blood (HUCB) is considered a unique source of stem cells. HUCB contains different types of progenitor cells, which can differentiate into hepatocytes. Aims: To investigate the potential of human umbilical cord mesenchymal stem cells (hUCMSCs) to repair liver damage in the rat; specifically, we investigated the feasibility of hUCMSCs for recovery from liver damage and fibrotic liver repair, using the CCl₄-induced model of liver damage in the rat. Methods: Rats were injected with 0.5 ml/kg CCl₄ to induce liver damage and progressive liver fibrosis. hUCMSCs were injected into the rats through the tail vein; stem cells were transplanted at a dose of 1×10⁶ cells/rat 72 hours after the CCl₄ injection, without any immunosuppressant. After 6 and 8 weeks of transplantation, blood samples were collected to assess liver functions (ALT, AST, GGT, and ALB) and the level of procollagen III as a liver fibrosis marker. In addition, hepatic tissue regeneration was assessed histopathologically and immunohistochemically using antihuman monoclonal antibodies against CD34, CK19, and albumin. Results: Biochemical and histopathological analysis showed significantly increased recovery from liver damage in the transplanted group. In addition, HUCB stem cells transdifferentiated into functional hepatocytes in rats with hepatic injury, resulting in improved liver structure and function. Conclusion: Our findings suggest that transplantation of hUCMSCs may be a novel therapeutic approach for treating liver fibrosis. Therefore, hUCMSCs are a potential option for the treatment of liver cirrhosis.
Keywords: carbon tetrachloride, liver fibrosis, mesenchymal stem cells, rat
Procedia PDF Downloads 342
977 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations
Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar
Abstract:
Fast simulations are critical in reducing time to market for CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumption of Networks-on-Chip, and researchers and designers rely upon these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days to complete. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we chose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses was identified. To further reduce the cache misses, we use software optimization techniques such as removal of unused functions, loop interchange, and replacement of the post-increment operator with the pre-increment operator for non-primitive data types. The cache misses were reduced by 18.52%, 5.34%, and 3.91% by employing the above techniques, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD are used for parallelizing and vectorizing the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization, respectively.
Keywords: cache behaviour, network-on-chip, performance profiling, vectorization
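Booksim2.0 itself is C++, so the following Python timing sketch is only a language-neutral illustration of why loop interchange helps: traversing a row-major array along its rows touches contiguous memory, while traversing along columns strides across it. The array size and the resulting timings are arbitrary assumptions.

```python
# Illustration of the loop-interchange idea (Booksim2.0 itself is C++; this is
# a language-neutral demo, not the simulator's code): traversing a row-major
# array along rows hits contiguous memory; traversing along columns does not.
import time
import numpy as np

a = np.random.rand(3000, 3000)   # row-major (C-order) by default

def sum_row_major(m):            # inner access walks contiguous memory
    s = 0.0
    for i in range(m.shape[0]):
        s += m[i, :].sum()
    return s

def sum_col_major(m):            # inner access strides across rows: poor locality
    s = 0.0
    for j in range(m.shape[1]):
        s += m[:, j].sum()
    return s

for f in (sum_row_major, sum_col_major):
    t0 = time.perf_counter()
    f(a)
    print(f.__name__, round(time.perf_counter() - t0, 3), "s")
```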
Procedia PDF Downloads 197
976 Localization of Frontal and Temporal Speech Areas in Brain Tumor Patients by Their Structural Connections with Probabilistic Tractography
Authors: B. Shukir, H. Woo, P. Barzo, D. Kis
Abstract:
Preoperative brain mapping in tumors involving the speech areas plays an important role in reducing surgical risks. Functional magnetic resonance imaging (fMRI) is the gold standard method for localizing cortical speech areas preoperatively, but its availability in routine clinical practice is limited. Diffusion-MRI-based probabilistic tractography is available from head MRI and is used to segment cortical subregions by their structural connectivity. In our study, we used probabilistic tractography to localize the frontal and temporal cortical speech areas. 15 patients with a left frontal tumor were enrolled in our study. Speech fMRI and diffusion MRI were acquired preoperatively. The standard automated anatomical labelling atlas 3 (AAL3) cortical atlas was used to define 76 left frontal and 118 left temporal potential speech areas. Four types of tractography were run according to the structural connection of these regions to the left arcuate fascicle (FA), to localize the cortical areas with speech function: (1) frontal through FA, (2) frontal with FA, (3) temporal to FA, and (4) temporal with FA connections were determined. Thresholds of 1%, 5%, 10%, and 15% were applied. At each level, the numbers of frontal and temporal regions identified by fMRI and by tractography were determined, and the sensitivity and specificity were calculated. The 1% threshold showed the best results: sensitivity was 61.6±31.4% and 67.15±23.12%, and specificity was 87.2±10.4% and 75.6±11.37%, for frontal and temporal regions, respectively. From our study, we conclude that probabilistic tractography is a reliable preoperative technique for localizing cortical speech areas. However, its results are not yet dependable enough for the neurosurgeon to rely on during the operation.
Keywords: brain mapping, brain tumor, fMRI, probabilistic tractography
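As a small illustration of the validation step, region-level sensitivity and specificity of tractography with fMRI taken as the reference, the following sketch computes both from sets of atlas regions; the region names and counts are hypothetical.

```python
# Minimal sketch (illustrative, not the study's code): region-level sensitivity
# and specificity of tractography against fMRI taken as the reference standard.
# Each atlas region is in a set if the method labels it as a speech area.
def sens_spec(fmri_positive: set, tract_positive: set, all_regions: set):
    tp = len(fmri_positive & tract_positive)
    fn = len(fmri_positive - tract_positive)
    tn = len(all_regions - fmri_positive - tract_positive)
    fp = len(tract_positive - fmri_positive)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example with 10 frontal regions labeled r0..r9:
regions = {f"r{i}" for i in range(10)}
sens, spec = sens_spec({"r1", "r2", "r5"}, {"r1", "r5", "r7"}, regions)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.67, 0.86
```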
Procedia PDF Downloads 166
975 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis
Authors: Serhat Tüzün, Tufan Demirel
Abstract:
Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems, provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), and offers suggestions for future studies. The Decision Support Systems literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-80s. It then documents the origins of Executive Information Systems, online analytical processing (OLAP), and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. Since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on the design, development, and implementation processes for all types of DSS, and they are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port their DSS applications, such as data mining, customer relationship management (CRM), and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers; a common usage has been to assist customers in configuring products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices, and delivery options. The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems, including intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring, and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. To address this need, this paper surveys recent articles on DSS. The literature has been reviewed in depth and, by classifying previous studies according to their emphases, a taxonomy for DSS has been prepared. With the aid of the taxonomic review and recent developments in the field, this study analyzes future trends in decision support systems.
Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review
Procedia PDF Downloads 279
974 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction
Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini
Abstract:
Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health condition of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited due to the difficulty of measuring it. The recovery of the fECG from signals acquired non-invasively using electrodes placed on the maternal abdomen is a challenging task, because abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings, which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and of finding the linear combinations of preprocessed abdominal signals that maximize this fQI (quality index optimization, QIO). It aims at improving the performance of the most commonly adopted methods for fECG extraction, usually based on estimating and canceling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; maternal ECG (mECG) extraction and maternal QRS detection; mECG component approximation and canceling by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing from the abdominal signals the mECG estimated by principal component analysis (PCA) and applying independent component analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute-long abdominal measurements with fetal QRS annotations from dataset A provided by the PhysioNet/Computing in Cardiology Challenge 2013. The QIO-based and ICA-based methods were compared by analyzing two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so the comparison between the two methods can only be qualitative. On the annotated ADdb database, the QIO method provided the performance indexes Sens=0.9988, PPA=0.9991, F1=0.9989, outperforming the ICA-based one, which provided Sens=0.9966, PPA=0.9972, F1=0.9969. The comparison on the NIdb was performed by defining an index of quality for the fetal RR series; this index was higher for the QIO-based method than for the ICA-based one in 35 out of 55 records of the NIdb. The QIO-based method gave very high performance on both databases. These results support the application of the algorithm in a fully unsupervised way for implementation in wearable devices for self-monitoring of fetal health.
Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable
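The abstract does not define the fQI itself; as a rough sketch of the quality-index-optimization idea, the snippet below scores pseudo-periodicity by the peak of the normalized autocorrelation within a plausible fetal RR range and searches for the channel weights that maximize it. The index, sampling rate, RR band, and optimizer are all illustrative assumptions.

```python
# Minimal sketch of quality index optimization (the paper's actual fQI is not
# specified here; this proxy scores pseudo-periodicity as the peak normalized
# autocorrelation within a plausible fetal RR range, ~0.3-0.6 s).
import numpy as np
from scipy.optimize import minimize

FS = 500  # sampling rate [Hz], assumed

def fqi(signal, fs=FS, rr_min=0.3, rr_max=0.6):
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac /= ac[0] + 1e-12                      # normalized autocorrelation
    lo, hi = int(rr_min * fs), int(rr_max * fs)
    return ac[lo:hi].max()                   # periodicity in the fetal RR band

def qio(channels):
    """channels: (n_ch, n_samples) preprocessed abdominal signals.
    Finds the linear combination w @ channels maximizing the fQI."""
    n_ch = channels.shape[0]
    def neg_fqi(w):
        w = w / (np.linalg.norm(w) + 1e-12)  # scale-invariant combination
        return -fqi(w @ channels)
    res = minimize(neg_fqi, x0=np.ones(n_ch) / n_ch, method="Nelder-Mead")
    w = res.x / np.linalg.norm(res.x)
    return w, w @ channels

# Tiny usage with synthetic data: 4 channels, 10 s at FS
X = np.random.randn(4, 10 * FS)
w, fecg = qio(X)
```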
Procedia PDF Downloads 280
973 Entrepreneurial Venture Creation through Anchor Event Activities: Pop-Up Stores as On-Site Arenas
Authors: Birgit A. A. Solem, Kristin Bentsen
Abstract:
Scholarly attention in entrepreneurship is currently directed towards understanding entrepreneurial venture creation as a process: the journey of new economic activities from nonexistence to existence, often studied through flow or network models. To complement existing research on entrepreneurial venture creation with more interactivity-based research on organized activities, this study examines two pop-up stores as anchor events involving the on-site activities of fifteen participating entrepreneurs launching their new ventures. The pop-up stores were arranged in two middle-sized Norwegian cities and contained different brand stores that brought together actors of sub-networks and communities executing venture creation activities. The pop-up stores became on-site arenas for the entrepreneurs to create, maintain, and rejuvenate their networks, while also becoming venues for the temporal coordination of activities involving existing and potential customers in their venture creation. In this work, we apply a conceptual framework based on frequently addressed dilemmas within entrepreneurship theory (discovery/creation, causation/effectuation) to further shed light on the broad range of on-site anchor event activities and their venture creation outcomes. The dilemma-based concepts are applied as an analytic toolkit to pursue answers regarding the nature of anchor event activities typically found within entrepreneurial venture creation and how these activities affect entrepreneurial venture creation outcomes. Our study combines researcher participation with 200 hours of observation and twenty in-depth interviews. Data analysis followed established guidelines for hermeneutic analysis and was intimately intertwined with ongoing data collection. Data were coded and categorized in NVivo 12 software and iterated several times as patterns steadily developed. Our findings suggest that the core anchor event activities typically found within entrepreneurial venture creation are: concept and product experimentation with visitors; arrangements to socialize (evening specials, auctions, and exhibitions); store-in-store concepts; arranged meeting places for peers; and close connection with the municipality and property owners. Further, this work points to four main entrepreneurial venture creation outcomes derived from the core anchor event activities: (1) venture attention, (2) venture idea-realization, (3) venture collaboration, and (4) venture extension. Our findings show that, depending on which anchor event activities are applied, the outcomes vary. Theoretically, this study offers two main implications. First, anchor event activities are both discovered and created, following the logic of causation, while also being experimental, based on the "learning by doing" principles of effectuation, during execution. Second, our research enriches prior studies on venture creation as a process. In this work, entrepreneurial venture creation activities and outcomes are understood through pop-up stores as on-site anchor event arenas, which are particularly suitable for the interactivity-based research requested by the entrepreneurship field. This study also reveals important managerial implications, such as that entrepreneurs should allow themselves to find creative physical venture creation arenas (e.g., pop-up stores, showrooms) and collaborate with partners when discovering and creating concepts and activities based on new ideas. In this way, they allow themselves to both strategically plan for and continually experiment with their venture.
Keywords: anchor event, interactivity-based research, pop-up store, entrepreneurial venture creation
Procedia PDF Downloads 91
972 Reinforcement Learning for Robust Missile Autopilot Design: TRPO Enhanced by Schedule Experience Replay
Authors: Bernardo Cortez, Florian Peter, Thomas Lausenhammer, Paulo Oliveira
Abstract:
Designing missile autopilot controllers has been a complex task, given the extensive flight envelope and the nonlinear flight dynamics. A solution that can excel both in nominal performance and in robustness to uncertainties is still to be found. While control theory often resorts to parameter-scheduling procedures, reinforcement learning has presented interesting results in ever more complex tasks, from video games to robotic tasks with continuous action domains. However, it still lacks clear insights on how to find adequate reward functions and exploration strategies. To the best of our knowledge, this work is a pioneer in proposing reinforcement learning as a framework for flight control. In fact, it aims at training a model-free agent that can control the longitudinal non-linear flight dynamics of a missile, achieving the target performance and robustness to uncertainties. To that end, under TRPO's methodology, the collected experience is augmented according to HER, stored in a replay buffer, and sampled according to its significance. Not only does this work enhance the concept of prioritized experience replay into BPER, but it also reformulates HER, activating them both only when the training progress converges to suboptimal policies, in what is proposed as the SER methodology. The results show that it is possible both to achieve the target performance and to improve the agent's robustness to uncertainties (with little damage to nominal performance) by further training it in non-nominal environments, therefore validating the proposed approach and encouraging future research in this field.
Keywords: reinforcement learning, flight control, HER, missile autopilot, TRPO
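The paper's BPER/SER variants are not specified in the abstract; the following sketch only illustrates the underlying ingredient, a replay buffer whose transitions are sampled with probability proportional to a significance score, with the activation rule tied to training progress left as a stub. The class name, the significance measure, and the exponent are assumptions.

```python
# A generic significance-weighted replay buffer, sketched to illustrate the
# "sampled according to its significance" idea; the paper's BPER/SER variants
# add an activation rule tied to training progress, only stubbed here.
import numpy as np

class SignificanceReplayBuffer:
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.items, self.sig = [], []

    def add(self, transition, significance):
        if len(self.items) >= self.capacity:     # drop oldest when full
            self.items.pop(0); self.sig.pop(0)
        self.items.append(transition)
        self.sig.append(max(significance, 1e-6))

    def sample(self, batch_size, prioritized=True):
        if prioritized:                          # P(i) ~ significance^alpha
            p = np.asarray(self.sig) ** self.alpha
            p /= p.sum()
        else:                                    # fall back to uniform replay
            p = None
        idx = np.random.choice(len(self.items), size=batch_size, p=p)
        return [self.items[i] for i in idx]

buf = SignificanceReplayBuffer()
for t in range(100):
    buf.add(("s", "a", "r", "s2"), significance=np.random.rand())
batch = buf.sample(16, prioritized=True)  # e.g., activated only on stagnation
```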
Procedia PDF Downloads 264
971 Embedded System of Signal Processing on FPGA: Underwater Application Architecture
Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad
Abstract:
The purpose of this paper is to study the phenomenon of acoustic scattering by using a new method. Signal processing (the fast Fourier transform (FFT), the inverse fast Fourier transform (iFFT), and Bessel functions) is widely applied to obtain information with high precision and accuracy. Signal processing is most widely implemented on general-purpose processors, but general-purpose processors are not efficient for signal processing. Our interest was therefore focused on the use of FPGAs (Field-Programmable Gate Arrays), in order to take the computationally complex parts of a single-processor architecture, accelerate them on the FPGA, and meet real-time and energy-efficiency requirements. We implemented the acoustic backscattered signal processing model on the Altera DE-SOC board and compared it to the Odroid xu4. By comparison, the computing latencies of the Odroid xu4 and the FPGA are 60 seconds and 3 seconds, respectively. The detailed SoC FPGA-based system has shown that acoustic spectra are computed up to 20 times faster than with the Odroid xu4 implementation. The FPGA-based implementation of the processing algorithms is realized with an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing, where it is possible to obtain information related to the detection and characterization of submerged cells. We have thus achieved good experimental results in terms of real-time operation and energy efficiency.
Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing
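As a minimal illustration of the FFT stage of such a processing chain (not the authors' FPGA pipeline), the sketch below windows a simulated backscattered echo, computes its magnitude spectrum, and locates the dominant resonance; the sampling rate, resonance frequencies, and noise level are arbitrary assumptions.

```python
# Minimal sketch of the FFT stage (illustrative, not the authors' pipeline):
# window a simulated backscattered echo, take its magnitude spectrum, and
# locate the dominant resonance peak. All parameters are assumptions.
import numpy as np

fs = 1.0e6                                  # sampling rate [Hz]
t = np.arange(0, 2e-3, 1 / fs)              # 2 ms record
# Toy echo: two decaying resonances + noise standing in for a measured signal
echo = (np.exp(-t / 4e-4) * np.sin(2 * np.pi * 55e3 * t)
        + 0.5 * np.exp(-t / 3e-4) * np.sin(2 * np.pi * 120e3 * t)
        + 0.02 * np.random.randn(t.size))

win = np.hanning(echo.size)                 # reduce spectral leakage
spec = np.abs(np.fft.rfft(echo * win))
freqs = np.fft.rfftfreq(echo.size, 1 / fs)

peak = freqs[np.argmax(spec)]
print(f"dominant resonance ~ {peak/1e3:.1f} kHz")   # expect ~55 kHz
```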
Procedia PDF Downloads 79
970 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation
Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu
Abstract:
This work presents a particle swarm optimization trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to monitor the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is designed from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller shows satisfactory performance in controlling and tracking the load power effectively and smoothly compared to the PSO-PID control technique. This study will benefit the design of supervisory controllers for control applications in nuclear engineering research.
Keywords: machine learning, neural network, pressurized water reactor, supervisory controller
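To make the tuning loop concrete, here is a compact sketch of PSO-based PID tuning that minimizes the integral absolute error (IAE) of a step response. A simple first-order plant stands in for the PWR core transfer function, and the swarm parameters, gain bounds, and plant constants are all illustrative assumptions.

```python
# Minimal sketch of PSO-based PID tuning (illustrative; the paper tunes the PID
# and trains an NN against a PWR core transfer function, whereas here a simple
# first-order plant and plain PSO stand in). All parameters are assumptions.
import numpy as np

DT, T_END = 0.01, 10.0

def iae(gains, tau=1.5, k_plant=2.0):
    """Integral absolute error of a PID loop around a first-order plant
    tau*y' + y = k_plant*u, tracking a unit step."""
    kp, ki, kd = gains
    y, integ, e_prev, cost = 0.0, 0.0, 1.0, 0.0
    for _ in np.arange(0.0, T_END, DT):
        e = 1.0 - y
        integ += e * DT
        u = kp * e + ki * integ + kd * (e - e_prev) / DT
        e_prev = e
        y += DT * (k_plant * u - y) / tau      # explicit Euler step
        cost += abs(e) * DT
    return cost

def pso(obj, n_particles=20, n_iter=60, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 5.0, size=(n_particles, 3))   # (kp, ki, kd) swarm
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 5.0)
        f = np.array([obj(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

gains, cost = pso(iae)
print("kp, ki, kd =", np.round(gains, 3), " IAE =", round(cost, 4))
```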
Procedia PDF Downloads 156
969 Modeling Core Flooding Experiments for CO₂ Geological Storage Applications
Authors: Avinoam Rabinovich
Abstract:
CO₂ geological storage is a proven technology for reducing anthropogenic carbon emissions, which is paramount for achieving the ambitious net zero emissions goal. Core flooding experiments are an important step in any CO₂ storage project, allowing us to gain information on the flow of CO₂ and brine in the porous rock extracted from the reservoir. This information is important for understanding basic mechanisms related to CO₂ geological storage as well as for reservoir modeling, which is an integral part of a field project. In this work, a different method for constructing accurate models of CO₂-brine core flooding will be presented. Results for synthetic cases and real experiments will be shown and compared with numerical models to exhibit their predictive capabilities. Furthermore, the various mechanisms which impact the CO₂ distribution and trapping in the rock samples will be discussed, and examples from models and experiments will be provided. The new method entails solving an inverse problem to obtain a three-dimensional permeability distribution which, along with the relative permeability and capillary pressure functions, constitutes a model of the flow experiments. The model is more accurate when data from a number of experiments are combined to solve the inverse problem. This model can then be used to test various other injection flow rates and fluid fractions which have not been tested in experiments. The models can also be used to bridge the gap between small-scale capillary heterogeneity effects (sub-core and core scale) and large-scale (reservoir scale) effects, known as the upscaling problem.
Keywords: CO₂ geological storage, residual trapping, capillary heterogeneity, core flooding, CO₂-brine flow
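As a toy illustration of the inverse-problem step (not the paper's workflow), the sketch below recovers a one-dimensional permeability profile along a core from pressure-drop measurements at several injection rates, assuming single-phase Darcy flow; a real CO₂-brine model would add relative permeability and capillary pressure functions, and pressure drop alone constrains only the harmonic mean of the profile, which is one reason combining several experiments helps.

```python
# Toy inverse problem (illustrative assumptions throughout): fit a 1-D
# permeability profile to pressure-drop data, assuming single-phase Darcy flow
# through slices in series. Note: dP alone identifies only the harmonic mean
# of k; the paper combines multiple experiments to constrain 3-D heterogeneity.
import numpy as np
from scipy.optimize import least_squares

MU, L, A = 1e-3, 0.1, 1e-4          # viscosity [Pa s], core length [m], area [m^2]
N = 10                              # number of slices along the core
dx = L / N

def dp_model(log_k, q):
    """Pressure drop of slices in series: dP = sum_i q*mu*dx / (A*k_i)."""
    k = np.exp(log_k)
    return q * MU * dx * np.sum(1.0 / k) / A

# Synthetic "measured" data from a hidden true profile (stand-in for experiments)
rng = np.random.default_rng(1)
k_true = 10 ** rng.uniform(-14, -13, N)                   # ~10-100 mD
rates = np.array([1e-8, 2e-8, 5e-8, 1e-7])                # injection rates [m^3/s]
dp_meas = np.array([dp_model(np.log(k_true), q) for q in rates])
dp_meas *= 1 + 0.01 * rng.standard_normal(dp_meas.size)   # 1% noise

def residuals(log_k):
    return [dp_model(log_k, q) - d for q, d in zip(rates, dp_meas)]

sol = least_squares(residuals, x0=np.full(N, np.log(3e-14)))
hm = lambda k: len(k) / np.sum(1.0 / k)
print("harmonic mean k, true vs recovered:", hm(k_true), hm(np.exp(sol.x)))
```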
Procedia PDF Downloads 70
968 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects
Authors: Ma Yuzhe, Burra Venkata Durga Kumar
Abstract:
The computer is a great invention. As people use computers more and more frequently, the demand for PCs is growing, and the performance of computer hardware is rising to handle more complex processing and operations. However, operating system development stalled at one stage, even though the operating system provides the soul of the computer. In the face of the high price of UNIX (Uniplexed Information and Computing System), batch after batch of personal computer owners could only give up. The Disk Operating System was too simple and offered little room for innovation, so it was not a good choice either. MacOS is a special operating system for Apple computers and cannot be widely used on other personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of these operating systems and is built from many small kernel components, giving it a relatively powerful core architecture. The Linux system supports all Internet protocols, so it has very good network functions. Linux supports multiple users, and each user's own files are not affected by the others. Linux can also multitask, running different programs independently at the same time. Linux is a completely open-source operating system: users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers, and the system is constantly upgraded and improved. Many different versions have been issued, suitable for community use and commercial use. The Linux system has good security because it relies on a file partition system. However, due to the constant emergence of new vulnerabilities and hazards, the security of using the operating system also needs more attention. This article will focus on the analysis and discussion of Linux security issues.
Keywords: Linux, operating system, system management, security
Procedia PDF Downloads 108
967 A Cost Effective Approach to Develop Mid-Size Enterprise Software Adopted the Waterfall Model
Authors: Mohammad Nehal Hasnine, Md Kamrul Hasan Chayon, Md Mobasswer Rahman
Abstract:
Organizational tendencies towards computer-based information processing have been observed noticeably in third-world countries. Many enterprises are taking major initiatives towards a computerized working environment because of the massive benefits of computer-based information processing. However, designing and developing information resource management software for small and mid-size enterprises under budget constraints and strict deadlines is always challenging for software engineers. Therefore, we introduce an approach to designing mid-size enterprise software in a cost-effective way by using the Waterfall model, which is one of the SDLC (Software Development Life Cycle) models. To fulfill the research objectives, in this study we developed mid-sized enterprise software named “BSK Management System” that assists enterprise software clients with information resource management and performs complex organizational tasks. The Waterfall model phases have been applied to ensure that all functions, user requirements, strategic goals, and objectives are met. In addition, Rich Picture, Structured English, and a Data Dictionary have been implemented and investigated properly in an engineering manner. Furthermore, an assessment survey with 20 participants has been conducted to investigate the usability and performance of the proposed software. The survey results indicated that our system featured simple interfaces, easy operation and maintenance, quick processing, and reliable and accurate transactions.
Keywords: end-user application development, enterprise software design, information resource management, usability
Procedia PDF Downloads 438
966 Bulk/Hull Cavitation Induced by Underwater Explosion: Effect of Material Elasticity and Surface Curvature
Authors: Wenfeng Xie
Abstract:
Bulk/hull cavitation evolution induced by an underwater explosion (UNDEX) near a free surface (bulk) or a deformable structure (hull) is numerically investigated using a multiphase compressible fluid solver coupled with a one-fluid cavitation model. A series of two-dimensional computations is conducted with varying material elasticity and surface curvature. Results suggest that material elasticity and surface curvature influence the peak pressures generated by the UNDEX shock and the cavitation collapse, as well as the bulk/hull cavitation regions near the surface. Results also show that such effects can differ between bulk cavitation generated by UNDEX-free surface interaction and hull cavitation generated by UNDEX-structure interaction. More importantly, the results demonstrate that shock wave focusing caused by a concave solid surface can lead to a larger cavitation region and thus intensify the cavitation reload. The findings can be linked to the strength and direction of the waves reflected from the structural surface and from the expanding bubble surface, which are functions of material elasticity and surface curvature. Shockwave focusing effects are also observed in the axisymmetric simulations, but the strength of the pressure contours in the axisymmetric simulations is lower than in the 2D simulations due to the difference in initial shock energy. The current method is limited to two-dimensional or axisymmetric applications. Moreover, thermal effects are neglected, and the liquid is not allowed to sustain tension in the cavitation model.
Keywords: cavitation, UNDEX, fluid-structure interaction, multiphase
Procedia PDF Downloads 186
965 Eroticism as a Tool for Addressing Socio-Cultural Inequalities
Authors: Amin Khaksar
Abstract:
The popular music scene is a highly speculative field of cultural production in which eroticism plays an essential role in attracting audiences. The juxtaposition of eroticism and cultural products suggests the importance of the representation of cultural values in popular music videos. Under the norms of conservative societies, however, there are several types of inequality, most of which are driven by institutional rather than socio-cultural inclinations. This paper explores the challenges that increasing structural inequality poses to erotic representations, focusing on Iranian popular music videos. It outlines how eroticism is becoming a leading tool for circumventing institutional inequalities that affect some cultural values. Using a value-based approach, which draws on visual semiotics and a content analysis of Iranian popular music videos compared to Western popular music videos, this study contends that the problematic nature of eroticism emerges when sexual representation takes on meaning beyond its commercial purpose. Indeed, erotica has more to say about freedom, social violence, gender discrimination, and, most importantly, values that can be shared and communicated. The concept of eroticism used in this study functions as a shared practice and can be perceived through symbols. Furthermore, the conclusions show that music artists (performers) use eroticism in three ways to represent cultural values: erotic performances, erotic qualities, and erotic narratives. The expected contribution highlights the role that eroticism can play in the encounter with institutional inequality and injustice; consider, for example, a female celebrity whose erotic qualities help her body gain attention.
Keywords: inequality, value-based economics, eroticism, popular music video
Procedia PDF Downloads 124
964 Theoretical Investigation of the Singlet and Triplet Electronic States of ⁹⁰ZrS Molecules
Authors: Makhlouf Sandy, Adem Ziad, Taher Fadia, Magnier Sylvie
Abstract:
The electronic structure of ⁹⁰ZrS has been investigated using ab initio methods based on the Complete Active Space Self-Consistent Field and Multi-Reference Configuration Interaction (CASSCF/MRCI) approaches. The number of predicted states has been extended to the 14 singlet and 12 triplet lowest-lying states situated below 36000 cm⁻¹. The equilibrium energies of these 26 lowest-lying electronic states have been calculated in the 2S+1Λ(±) representation. The potential energy curves have been plotted as functions of the internuclear distance in the range of 1.5 to 4.5 Å. Spectroscopic constants, permanent electric dipole moments, and transition dipole moments between the different electronic states have also been determined. A discrepancy of at most 5% for the majority of values shows good agreement with the available experimental data. The ground state is found to be of symmetry X¹Σ⁺ with an equilibrium internuclear distance Re = 2.16 Å. However, the (1)³Δ is the closest state to X¹Σ⁺ and is situated at 514 cm⁻¹. To the best of our knowledge, this is the first time that the spin-orbit coupling has been investigated for all the predicted states of ZrS. 52 electronic components in the Ω(±) representation have been predicted. The energies of these components, the spectroscopic constants ωe, ωeχe, and βe, and the equilibrium internuclear distances have also been obtained. The percentage composition of the Ω state wave-functions in terms of S-Λ states was calculated to identify their corresponding main parents. These spin-orbit coupling (SOC) calculations have determined the shift between the (1)³Δ₁ and X¹Σ⁺ states and confirmed that the ground state is of ¹Σ⁺ type.
Keywords: CASSCF/MRCI, electronic structure, spin-orbit effect, zirconium monosulfide
Procedia PDF Downloads 168
963 Investigation of Factors Influencing Perceived Comfort During Take-Over in Automated Driving
Authors: Miriam Schäffer, Vinayak Mudgal, Wolfram Remlinger
Abstract:
The functions of automated driving will initially be limited to certain so-called Operational Design Domains (ODD). Within the ODDs, the automated vehicle can handle all situations autonomously. In the event of a critical system failure, the vehicle will establish a minimal-risk condition or offer the driver the chance to take over control of the vehicle. When the vehicle leaves the ODD, the driver is likewise prompted to take over vehicle control. During automated driving, the driver is legally allowed to perform non-driving-related activities (NDRAs) for the first time. When requested to take over, the driver must return from the NDRA state to a driving-ready state. The driver's NDRA state may involve the use of items that are necessary for the NDRA, or modifications of the interior. Since perceived comfort is an important factor in both manual and automated driving, a study was conducted in a static driving simulator to investigate factors that influence perceived comfort during the take-over process. Based on a literature review of factors influencing perceived comfort in different domains, selected parameters, such as the take-over request (TOR) modality or elements supporting the stowing of the item used for the NDRA in the interior, were varied. Perceived comfort and discomfort were assessed using an adapted version of a standardized comfort questionnaire, as well as other previously identified aspects of comfort. The NDRA conducted was using a smartphone (playing Tetris), because of its high relevance as a future NDRA. The results show the potential to increase perceived comfort through interior adaptations and support elements. Further research should focus on different layouts of the investigated factors, as well as on different conditions, such as the time budget, the actions required when intervening in the vehicle control system, and the vehicle interior dimensions.
Keywords: automated driving, comfort, take-over, vehicle interior
Procedia PDF Downloads 19
962 Research on the Function Optimization of China-Hungary Economic and Trade Cooperation Zone
Authors: Wenjuan Lu
Abstract:
China and Hungary have risen in recent years from a friendly and comprehensive cooperative relationship to a comprehensive strategic partnership, and the economic and trade relations between the two countries have developed smoothly. As an important country along the ‘Belt and Road’, Hungary has strong economic complementarity with China and unique advantages in receiving China's industrial transfer and supporting its economic transformation and development. The construction of the China-Hungary Economic and Trade Cooperation Zone, initiated by the ‘Sino-Hungarian Borsod Industrial Zone’ and the ‘Hungarian Central European Trade and Logistics Cooperation Park’, has promoted infrastructure construction, optimized production capacity, promoted industrial restructuring, and formed brand and agglomeration effects. Enhancing the influence of Chinese companies in the European market has also promoted economic development in Hungary and, more broadly, in Central and Eastern Europe. However, as the China-Hungary Economic and Trade Cooperation Zone is still in its infancy, there are still shortcomings such as small scale, a single function, and the lack of a prominent platform. In the future, based on the needs of China's cooperation with the ‘17+1’ countries and of China-Hungary cooperation, and on the basis of appropriately expanding the scale and number of economic and trade cooperation zones, it would be better to focus on optimizing and adjusting their functions and on differentiating the forms of economic and trade cooperation they emphasize. Such differentiated functions strengthen the multi-faceted cooperation of the economic and trade cooperation zones and highlight their role as platforms for cooperation in information, capital, and services.
Keywords: ‘One Belt, One Road’ Initiative, China-Hungary economic and trade cooperation zone, function optimization, Central and Eastern Europe
Procedia PDF Downloads 180
961 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System
Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim
Abstract:
The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. When the variable φ, which represents a flow property, is taken to be a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, numerical solution is more frequently preferred over seeking analytic solutions. The finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and by adding the continuity equation to the system, the number of equations matches the number of unknowns. In this situation, the velocity and pressure components emerge as the two important sets of parameters, and in the solution of the differential equation system, velocities and pressures must be solved together. However, in the considered grid system, some problems arise when pressure and velocity values are solved jointly at the same nodal points. To overcome this problem, a staggered grid system is the preferred solution method. Various algorithms have been developed for the computerized solution of the staggered grid system; of these, the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for Newtonian, incompressible, laminar flow, with body and gravitational forces neglected, in a hydrodynamically fully developed region and in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of the velocity components, pressure, and Reynolds number were used. The differential equations were discretized using the central difference and hybrid schemes. The discretized equation system was solved by the Gauss-Seidel iteration method. SIMPLE and SIMPLER were used as the solution algorithms. The obtained results were compared for the central difference and hybrid discretization methods, and the SIMPLE and SIMPLER algorithms were also compared to each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area. Furthermore, as a computer solution algorithm, it can be said that the SIMPLER algorithm, despite some disadvantages, is more practical and produced results in a shorter time. For this study, a code was developed in the DELPHI programming language. The values obtained by the computer program were converted into graphs and discussed. During plotting, the quality of the graphs was increased by adding intermediate values to the obtained results using the Lagrange interpolation formula. The numbers of grids and nodes used for the solution were estimated, and, to show that the obtained results are satisfactory, a grid-independence (GCI) analysis was performed over the solution domain for coarse, medium, and fine grids. It was observed that, when the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms
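The study's code was written in Delphi; as a language-neutral illustration of two of its ingredients, the sketch below assembles Patankar-style hybrid-scheme coefficients for a 1-D steady convection-diffusion problem and solves it with Gauss-Seidel sweeps. The grid size, flow parameters, and boundary values are arbitrary assumptions.

```python
# Illustrative sketch (the study's code was written in Delphi; this is a Python
# rendering of the same ingredients): Patankar's hybrid-scheme coefficients for
# a 1-D steady convection-diffusion equation, solved by Gauss-Seidel sweeps.
import numpy as np

N, L = 21, 1.0                 # nodes, domain length
rho, u, gamma = 1.0, 2.0, 0.1  # density, velocity, diffusion coefficient
dx = L / (N - 1)
F, D = rho * u, gamma / dx     # convective and diffusive fluxes (uniform grid)

# Hybrid scheme (Patankar): switches from central differencing to upwinding
# when the cell Peclet number |F/D| exceeds 2.
aE = max(-F, D - F / 2.0, 0.0)
aW = max(F, D + F / 2.0, 0.0)
aP = aE + aW                   # continuity: Fe - Fw = 0 on a uniform grid

phi = np.zeros(N)
phi[0], phi[-1] = 1.0, 0.0     # boundary conditions
for sweep in range(2000):      # Gauss-Seidel: uses freshly updated neighbors
    for i in range(1, N - 1):
        phi[i] = (aE * phi[i + 1] + aW * phi[i - 1]) / aP
print(np.round(phi[:5], 4))    # profile near the phi = 1 boundary
```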
Procedia PDF Downloads 391
960 Network Analysis of Genes Involved in the Biosynthesis of Medicinally Important Naphthodianthrone Derivatives of Hypericum perforatum
Authors: Nafiseh Noormohammadi, Ahmad Sobhani Najafabadi
Abstract:
Hypericins (hypericin and pseudohypericin) are natural naphthodianthrone derivatives produced by Hypericum perforatum (St. John’s Wort) with many medicinal properties, including antitumor, antineoplastic, antiviral, and antidepressant activities. Production and accumulation of hypericin in the plant are influenced by both genetic and environmental conditions. Despite the availability of various high-throughput datasets for this plant, the genetic dimensions of hypericin biosynthesis are not yet completely understood. In this research, 21 high-quality RNA-seq datasets covering different parts of the plant were integrated with metabolic data to reconstruct a co-expression network. The results showed that a cluster of 30 transcripts was correlated with total hypericin content. The identified transcripts fell into three main functional groups: hypericin biosynthesis genes, transporters and detoxification genes, and transcription factors (TFs). In the biosynthetic group, different isoforms of polyketide synthases (PKSs) and phenolic oxidative coupling proteins (POCPs) were identified. Phylogenetic analysis of the protein sequences, integrated with gene expression analysis, suggested that some of the POCPs are particularly important in the hypericin biosynthetic pathway. In the TF group, six TFs were correlated with total hypericin, and qPCR analysis confirmed that three of them were highly correlated. The genes identified in this research are a rich resource for further studies on the molecular breeding of H. perforatum aimed at obtaining varieties with high hypericin production.Keywords: hypericin, St. John’s Wort, data mining, transcription factors, secondary metabolites
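As a rough illustration of the co-expression step, the sketch below ranks transcripts by Pearson correlation between their expression across samples and total hypericin content; the data, dimensions, and 30-transcript cutoff are stand-ins, not the authors' actual pipeline.

```python
# An assumed workflow, not the authors' pipeline: rank transcripts by
# Pearson correlation with the per-sample total hypericin measurement.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in data: 200 transcripts x 21 RNA-seq samples, plus a measured
# total-hypericin value per sample (both would come from real datasets).
expr = pd.DataFrame(
    rng.lognormal(size=(200, 21)),
    index=[f"transcript_{i}" for i in range(200)],
)
hypericin = pd.Series(rng.lognormal(size=21), index=expr.columns)

# Pearson correlation of every transcript profile with total hypericin.
corr = expr.apply(lambda row: row.corr(hypericin), axis=1)

# Keep the strongest positive correlations as the candidate module,
# analogous to the 30-transcript cluster reported in the abstract.
module = corr.sort_values(ascending=False).head(30)
print(module.head())
```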
Procedia PDF Downloads 93
959 Force Measurement for E-Cadherin-Mediated Intercellular Adhesion Probed by Protein Micropattern and Traction Force Microscopy
Authors: Chieh-Chung Tsou, Chun-Min Lo, Yeh-Shiu Chu
Abstract:
Mechanical forces generated by cells provide important physical cues for the regulation of proper cellular functions, such as cell differentiation, proliferation, and migration. Adhesive forces generated by cell-cell interaction are believed to be transmitted to the cell interior through the filamentous cortical cytoskeleton. Prominent among membrane receptors, cadherins are prototypical adhesive molecules able to generate remarkable forces that regulate intercellular adhesion. However, the mechanistic steps of mechanotransduction in cadherin-mediated adhesion remain controversial. We are interested in understanding how cadherin protein complexes enable force generation and transmission at the cell-cell contact during the initial stage of intercellular adhesion. To provide better control of time, space, and substrate stiffness, this study uses a combination of protein micropatterning, micropipette manipulation, and traction force microscopy. Paired micropatterns of different shapes confine the cell spreading area, and gaps between the pairs varying from 2 to 8 microns are used to monitor the forces that cell pairs generate, as measured by traction force microscopy. Moreover, cell clones obtained from genome-edited epithelial cells are used to score the importance of known components of cadherin complexes in force generation. We believe that the results from this combined mechanobiological method will provide deep insights into the biophysical principles governing mechanotransduction in cadherin-mediated intercellular adhesion.Keywords: cadherin, intercellular adhesion, protein micropattern, traction force microscopy
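One common way to extract an intercellular force from cell-pair traction data is a force-balance argument: tractions under an isolated cell sum to approximately zero, so the unbalanced traction under one cell of a pair estimates the force transmitted at the junction. The following is a minimal sketch under assumed inputs, not the authors' analysis.

```python
# An assumed analysis, not the authors' pipeline: by Newton's third law,
# the net unbalanced traction under one cell of a pair equals the force
# transmitted at the cell-cell junction.
import numpy as np

# Stand-in inputs: a traction stress field (Pa) on a regular grid and a
# boolean mask selecting the footprint of one cell of the pair.
tx = np.random.randn(64, 64)      # x-component of traction stress
ty = np.random.randn(64, 64)      # y-component of traction stress
cell_mask = np.zeros((64, 64), dtype=bool)
cell_mask[10:40, 10:30] = True    # hypothetical footprint of cell 1

pixel_area = (0.5e-6) ** 2        # m^2 per grid point (assumed 0.5 um spacing)

# Net unbalanced traction under cell 1 = estimated junction force, in N.
Fx = np.sum(tx[cell_mask]) * pixel_area
Fy = np.sum(ty[cell_mask]) * pixel_area
print(f"estimated cell-cell force: ({Fx:.3e}, {Fy:.3e}) N")
```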
Procedia PDF Downloads 251
958 A Fast Community Detection Algorithm
Authors: Chung-Yuan Huang, Yu-Hsiang Fu, Chuen-Tsai Sun
Abstract:
Community detection represents an important data-mining tool for analyzing and understanding real-world complex network structures and functions. We believe that at least four criteria determine the appropriateness of a community detection algorithm: (a) it produces usable normalized mutual information (NMI) and modularity results for social networks, (b) it overcomes the resolution limitation problems associated with synthetic networks, (c) it produces good NMI results and performance efficiency for Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks, and (d) it produces good modularity and performance efficiency for large-scale real-world complex networks. To our knowledge, no existing community detection algorithm meets all four criteria. In this paper, we describe a simple hierarchical arc-merging (HAM) algorithm that uses network topologies and rule-based arc-merging strategies to identify community structures that satisfy the criteria. We used five well-studied social network datasets and eight sets of LFR benchmark networks to validate the ground-truth community correctness of HAM, eight large-scale real-world complex networks to measure its performance efficiency, and two synthetic networks to determine its susceptibility to resolution limitation problems. Our results indicate that the proposed HAM algorithm provides satisfactory performance efficiency and that HAM-identified communities were close to ground-truth communities in the social and LFR benchmark networks while overcoming resolution limitation problems.Keywords: complex network, social network, community detection, network hierarchy
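Since the HAM algorithm itself is not reproduced in the abstract, the sketch below shows only the evaluation side of the criteria listed above, modularity and NMI against ground truth, using a generic greedy modularity algorithm from networkx as a stand-in for HAM.

```python
# Evaluation harness for the two metrics the abstract names: modularity
# and normalized mutual information (NMI) versus ground truth. The
# detection step uses a generic greedy algorithm, not HAM.
import networkx as nx
from networkx.algorithms import community
from sklearn.metrics import normalized_mutual_info_score

# Zachary's karate club: a well-studied social network with a known
# two-faction ground truth stored in each node's "club" attribute.
G = nx.karate_club_graph()
truth = [G.nodes[n]["club"] for n in G]

detected = community.greedy_modularity_communities(G)
labels = [0] * G.number_of_nodes()
for cid, nodes in enumerate(detected):
    for n in nodes:
        labels[n] = cid

print("modularity:", community.modularity(G, detected))
print("NMI vs ground truth:", normalized_mutual_info_score(truth, labels))
```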
Procedia PDF Downloads 228