Search results for: weak convergence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1500
1350 Examining the Relational Approach Elements in City Development Strategy of Qazvin 2031

Authors: Majid Etaati, Hamid Majedi

Abstract:

The relational planning approach proposed by Patsy Healey goes beyond physical proximity and emphasizes social proximity. This approach stresses the importance of nodes and the flows between them. Current plans in European cities have incrementally incorporated this approach, but urban plans in Iran have remained very detailed and rigid. In response to the weak evaluation results of the comprehensive planning approach in Qazvin, the local authorities applied the City Development Strategy (CDS) to cope with new urban challenges. The paper begins with an explanation of relational planning and the guidance that Healey offers urban planners about spatial strategies, and then surveys the relational factors in the CDS of Qazvin. This study analyzes the extent to which the CDS of Qazvin has highlighted nodes, flows, and dynamics. In the end, the study concludes that there is a relational understanding of urban dynamics in the plan, but it is weak.

Keywords: relational, dynamics, city development strategy, urban planning, Qazvin

Procedia PDF Downloads 140
1349 An Exponential Field Path Planning Method for Mobile Robots Integrated with Visual Perception

Authors: Magdy Roman, Mostafa Shoeib, Mostafa Rostom

Abstract:

Global vision, whether provided by overhead fixed cameras, on-board aerial vehicle cameras, or satellite images, can always provide detailed information on the environment around mobile robots. In this paper, an intelligent vision-based method of path planning and obstacle avoidance for mobile robots is presented. The method integrates visual perception with a newly proposed field-based path-planning method to overcome common path-planning problems such as local minima, unreachable destinations, and unnecessarily long paths around obstacles. The method proposes an exponential angle-deviation field around each obstacle that affects the orientation of a nearby robot. As the robot heads toward the goal point, obstacles are classified into right and left groups, and a deviation angle is exponentially added to or subtracted from the orientation of the robot. The exponential field parameters are chosen based on the Lyapunov stability criterion to guarantee the robot's convergence to the destination. The proposed method uses the obstacles' shapes and locations, extracted from the global vision system, through a collision prediction mechanism to decide whether to activate or deactivate an obstacle's field. In addition, a search mechanism is developed for the case in which the robot or the goal point is trapped among obstacles, to find a suitable exit or entrance. The proposed algorithm is validated both in simulation and through experiments. The algorithm shows effectiveness in obstacle avoidance and destination convergence, overcoming common path-planning problems found in classical methods.
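The abstract's core idea, a deviation angle added to or subtracted from the goal heading depending on which side of the robot each obstacle lies, with influence decaying exponentially with distance, can be sketched as follows. The gain `k` and decay rate `lam` are illustrative placeholders, not the Lyapunov-tuned parameters of the paper:

```python
import math

def deviation_angle(robot, goal, obstacles, k=1.0, lam=2.0):
    """Sketch of an exponential angle-deviation field: each obstacle bends
    the robot's heading away from itself, with an influence that decays
    exponentially with distance (k, lam are hypothetical parameters)."""
    heading = math.atan2(goal[1] - robot[1], goal[0] - robot[0])
    correction = 0.0
    for ox, oy in obstacles:
        d = math.hypot(ox - robot[0], oy - robot[1])
        bearing = math.atan2(oy - robot[1], ox - robot[0])
        side = bearing - heading  # obstacle left (>0) or right (<0) of the goal line
        # deviate away from the obstacle; influence decays as exp(-lam * d)
        correction += -math.copysign(k * math.exp(-lam * d), math.sin(side))
    return heading + correction
```

With no obstacles the function reduces to the plain goal heading; an obstacle on the left pushes the heading right and vice versa.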

Keywords: path planning, collision avoidance, convergence, computer vision, mobile robots

Procedia PDF Downloads 194
1348 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

The color histogram is considered the oldest method used by CBIR systems for indexing images. Global histograms, however, do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods, such as the CCV (Color Coherence Vector), are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks, or sub-regions, and then computing the histogram of each block. The dissimilarity between two images consequently reduces to computing the distances between the N local histograms of both images, resulting in N*N values; generally, the lowest value is taken into account to rank images, meaning that the lowest value designates which sub-region is used to index the images of the queried collection. In this paper, we examine the local histogram indexing method in order to compare its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value, among the N*N values, to trust when comparing images; in other words, on which sub-region among the N*N sub-regions the image index should be based. Based on the results achieved here, it seems that relying on local histograms, which imposes extra overhead on the system by involving an additional preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram having the lowest distance to the query histograms.
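The indexing scheme described can be sketched as follows. For simplicity the sketch uses non-overlapping blocks and illustrative block/bin counts, whereas the paper uses overlapping sub-regions:

```python
import numpy as np

def local_histograms(image, n_blocks=4, bins=8):
    """Split an image into n_blocks x n_blocks sub-regions and compute a
    normalized per-channel color histogram for each (weak segmentation).
    Block and bin counts are illustrative, not the paper's settings."""
    h, w, _ = image.shape
    hists = []
    for i in range(n_blocks):
        for j in range(n_blocks):
            block = image[i*h//n_blocks:(i+1)*h//n_blocks,
                          j*w//n_blocks:(j+1)*w//n_blocks]
            hist = np.concatenate([
                np.histogram(block[..., c], bins=bins, range=(0, 256))[0]
                for c in range(3)])
            hists.append(hist / hist.sum())  # normalize to sum to 1
    return np.array(hists)

def rank_distance(query, candidate):
    """All N*N Euclidean distances between sub-region histograms; the
    minimum is the value conventionally used to rank images."""
    d = np.linalg.norm(query[:, None, :] - candidate[None, :, :], axis=2)
    return d.min()
```

Ranking a collection then amounts to sorting candidate images by `rank_distance` against the query's histograms; the paper's open question is whether that minimum is the right one of the N*N values to trust.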

Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance

Procedia PDF Downloads 359
1347 Perspectives of Saudi Students on Reasons for Seeking Private Tutors in English

Authors: Ghazi Alotaibi

Abstract:

The current study examined and described the views of secondary school students and their parents on their reasons for seeking private tutors in English. These views were obtained through two group interviews with the students and parents separately. Several causes were brought up during the two interviews. These causes included difficulty of the English language, weak teacher performance, the need to pass exams with high marks, lack of parents’ follow-up of student school performance, social pressure, variability in student comprehension levels at school, weak English foundation in previous school years, repeated student absence from school, large classes, as well as English teachers’ heavy teaching loads. The study started with a description of the EFL educational system in Saudi Arabia and concluded with recommendations for the improvement of the school learning environment.

Keywords: English, learning difficulty, private tutoring, Saudi, teaching practices, learning environment

Procedia PDF Downloads 456
1346 Preschool Teachers' Teaching Performance in Relation to Their Technology and 21st Century Skills

Authors: Vida Dones-Jimenez

Abstract:

The main purpose of this study is to determine preschool teachers' technology and 21st-century skills and their relation to teaching performance. The participants were 94 preschool teachers and 59 school administrators from the CDAPS member schools. The data were collected using the 21st Century Skills instrument developed by ISSA (2009), the Technology Skills of Teachers Survey (2013), and the Teacher Performance Evaluation Criteria and Descriptors (200), which was modified by the current researcher to suit the needs of her study and administered personally by her. The surveys were designed to measure the participants' 21st-century skills, technology skills, and teaching performance. The results of the study indicate that the majority of the preschool teachers are college graduates. Most of them have been in the teaching profession for 0 to 10 years. The results also indicate that the majority of the school administrators are master's degree holders. The preschool teachers are outstanding in their teaching performance as rated by the school administrators. The preschool teachers are skillful in using technology, and they are very skillful in executing 21st-century skills in teaching. It was further determined that there is no significant difference in preschool teachers' 21st-century skills with regard to educational attainment or number of years in teaching, and likewise for their technology skills. Furthermore, the study showed that there is a very weak relationship between the technology and 21st-century skills of preschool teachers; a weak relationship between technology skills and teaching performance and a very weak relationship between 21st-century skills and teaching performance were also established. The study recommends that preschool teachers be encouraged to enroll in master's degree programs. School administrators should support the implementation of newly adopted technologies and support faculty members at various levels of use and experience. It is also recommended that regular reviews of the professional development plan be undertaken to upgrade the 21st-century teaching and learning skills of preschool teachers.
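Verbal strength bands such as "very weak" and "weak" typically correspond to ranges of a correlation coefficient. The sketch below pairs a plain Pearson r computation with illustrative cut-offs; the abstract does not state the study's actual thresholds:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def describe(r):
    """Map |r| to a verbal strength band (illustrative cut-offs only)."""
    a = abs(r)
    if a < 0.2:  return "very weak"
    if a < 0.4:  return "weak"
    if a < 0.6:  return "moderate"
    if a < 0.8:  return "strong"
    return "very strong"
```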

Keywords: preschool teacher, teaching performance, technology, 21st century skills

Procedia PDF Downloads 399
1345 Additional Method for the Purification of Lanthanide-Labeled Peptide Compounds Pre-Purified by Weak Cation Exchange Cartridge

Authors: K. Eryilmaz, G. Mercanoglu

Abstract:

Aim: Purification of the final product, which is the last step in the synthesis of lanthanide-labeled peptide compounds, can be accomplished by different methods. Among these, the two most commonly used are C18 solid-phase extraction (SPE) and elution from a weak cation exchanger cartridge. The SPE C18 solid-phase extraction method yields a final product of high purity, while elution from the weak cation exchanger cartridge is pH-dependent and ineffective in removing colloidal impurities. The aim of this work is to develop an additional purification method for lanthanide-labeled peptide compounds in cases where the desired radionuclidic and radiochemical purity of the final product cannot be achieved because of a pH problem or colloidal impurity. Material and Methods: For colloidal impurity formation, 3 mL of water for injection (WFI) was added to 30 mCi of 177LuCl3 solution and allowed to stand for 1 day. 177Lu-DOTATATE was synthesized using an EZAG ML-EAZY module (10 mCi/mL). After synthesis, the final product was mixed with the colloidal impurity solution (total volume: 13 mL, total activity: 40 mCi). The resulting mixture was trapped in an SPE C18 cartridge. The cartridge was washed with 10 mL of saline to remove impurities to the waste vial. The product trapped in the cartridge was eluted with 2 mL of 50% ethanol and collected in the final product vial by passing through a 0.22 μm filter. The final product was diluted with 10 mL of saline. Radiochemical purity before and after purification was analysed by HPLC (column: ACE C18-100A, 3 µm, 150 x 3.0 mm; mobile phase: water-acetonitrile-trifluoroacetic acid (75:25:1); flow rate: 0.6 mL/min). Results: The UV and radioactivity detector results of the HPLC analysis showed that colloidal impurities were completely removed from the 177Lu-DOTATATE/colloidal impurity mixture by the purification method. Conclusion: The improved purification method can be used as an additional step to remove impurities that may result from lanthanide-peptide syntheses in which the weak cation exchange purification technique is used as the last step. The purification of the final product and GMP compliance (final aseptic filtration and sterile disposable system components) are two major advantages.

Keywords: lanthanide, peptide, labeling, purification, radionuclide, radiopharmaceutical, synthesis

Procedia PDF Downloads 160
1344 Fuzzy Sentiment Analysis of Customer Product Reviews

Authors: Samaneh Nadali, Masrah Azrifah Azmi Murad

Abstract:

As a result of the growth of the web, people are able to express their views and opinions. They can now post reviews of products at merchant sites and express their views on almost anything in internet forums, discussion groups, and blogs. Therefore, the number of product reviews has grown rapidly. The large number of reviews makes it difficult for manufacturers or businesses to automatically classify them into different semantic orientations (positive, negative, and neutral). For sentiment classification, most existing methods utilize a list of opinion words, whereas this paper proposes a fuzzy approach for evaluating sentiments expressed in customer product reviews, predicting the strength levels (e.g., very weak, weak, moderate, strong, and very strong) of customer product reviews from combinations of adjectives, adverbs, and verbs. The proposed fuzzy approach has been tested on eight benchmark datasets and obtained 74% accuracy, which can help organizations gain a clearer understanding of customer behavior in support of the business planning process.
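One way such strength levels can be produced is sketched below, with hypothetical triangular membership functions and a simple average of the adjective/adverb/verb intensities; the paper's actual fuzzy rules are not given in the abstract:

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical fuzzy sets over a combined sentiment score in [0, 1]
LEVELS = {
    "very weak":   (0.00, 0.00, 0.25),
    "weak":        (0.00, 0.25, 0.50),
    "moderate":    (0.25, 0.50, 0.75),
    "strong":      (0.50, 0.75, 1.00),
    "very strong": (0.75, 1.00, 1.00),
}

def strength(adj, adv=0.5, verb=0.5):
    """Combine adjective/adverb/verb intensities (illustrative average)
    and return the strength level with the highest membership."""
    score = (adj + adv + verb) / 3
    return max(LEVELS, key=lambda lv: triangular(score, *LEVELS[lv]))
```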

Keywords: fuzzy logic, customer product review, sentiment analysis

Procedia PDF Downloads 363
1343 Application of Heuristic Integration Ant Colony Optimization in Path Planning

Authors: Zeyu Zhang, Guisheng Yin, Ziying Zhang, Liguo Zhang

Abstract:

This paper studies path planning methods based on ant colony optimization (ACO) and proposes heuristic integration ant colony optimization (HIACO). The paper not only analyzes and optimizes the underlying principle but also simulates and analyzes the parameters related to the application of HIACO in path planning. Compared with the original algorithm, the improved algorithm optimizes the probability formula, the tabu table mechanism, and the updating mechanism, and introduces more reasonable heuristic factors. The optimized HIACO not only draws on the excellent ideas of the original algorithm but also alleviates, to some extent, the problems of premature convergence, convergence to suboptimal solutions, and improper exploration. HIACO can be used to achieve better simulation results and the desired optimization. Combining the probability formula and the update formula, several parameters of HIACO are tested. This paper demonstrates the principle of HIACO and gives the best parameter ranges for path planning.
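The probability formula that HIACO modifies is the standard ACO transition rule, sketched here in its textbook form (names, data layout, and the defaults for alpha and beta are illustrative, not the paper's modified version):

```python
import random

def choose_next(city, unvisited, tau, eta, alpha=1.0, beta=2.0):
    """Textbook ACO transition rule: an ant at `city` picks the next city j
    with probability proportional to tau[city][j]**alpha * eta[city][j]**beta,
    restricted to cities not yet in the tabu list (i.e., still unvisited)."""
    weights = [tau[city][j] ** alpha * eta[city][j] ** beta for j in unvisited]
    total = sum(weights)
    # roulette-wheel selection over the candidate cities
    r, acc = random.random() * total, 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if r <= acc:
            return j
    return unvisited[-1]
```

Here `tau` holds pheromone levels and `eta` the heuristic desirability (typically inverse distance); HIACO's changes to this formula and to the pheromone update are what the paper evaluates.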

Keywords: ant colony optimization, heuristic integration, path planning, probability formula

Procedia PDF Downloads 250
1342 A Novel Guided Search Based Multi-Objective Evolutionary Algorithm

Authors: A. Baviskar, C. Sandeep, K. Shankar

Abstract:

Solving multi-objective optimization problems requires faster convergence and better spread. Though existing evolutionary algorithms (EAs) are able to achieve this, the computational effort can be further reduced by hybridizing them with innovative strategies. This study focuses on converging to the Pareto front faster while adopting the advantages of the Strength Pareto Evolutionary Algorithm-II (SPEA-II) for a better spread. Two different approaches based on optimizing the objective functions independently are implemented. In the first method, the decision variables corresponding to the optima of the individual objective functions are strategically used to guide the search towards the Pareto front. In the second method, boundary points of the Pareto front are calculated and their decision variables are seeded into the initial population. Both methods are applied to different constrained and unconstrained multi-objective test functions. It is observed that the proposed guided-search-based algorithm gives better convergence and diversity than several well-known existing algorithms (such as NSGA-II and SPEA-II) in a considerably smaller number of iterations.
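The seeding idea of the second method can be sketched as follows: the decision vectors that individually optimize each objective approximate boundary points of the Pareto front and are injected into the EA's initial population. The `optimize` routine and all names are placeholders, not the paper's implementation:

```python
import random

def seeded_population(objectives, bounds, pop_size, optimize):
    """Seed the decision vectors of the single-objective optima (approximate
    Pareto-front boundary points) into an otherwise random initial population.
    `optimize` is any single-objective minimizer over `bounds` (assumed given)."""
    seeds = [optimize(f, bounds) for f in objectives]
    rest = [[random.uniform(lo, hi) for lo, hi in bounds]
            for _ in range(pop_size - len(seeds))]
    return seeds + rest
```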

Keywords: boundary points, evolutionary algorithms (EAs), guided search, strength pareto evolutionary algorithm-II (SPEA-II)

Procedia PDF Downloads 277
1341 Variants of Mathematical Induction as Strong Proof Techniques in Theory of Computing

Authors: Ahmed Tarek, Ahmed Alveed

Abstract:

In the theory of computing, there is a wide variety of direct and indirect proof techniques. However, mathematical induction (MI) stands out as one of the most powerful proof techniques for proving hypotheses, theorems, and new results. There are variations of mathematical-induction-based proof techniques, which are broadly classified into three categories: structural induction, weak induction, and strong induction. In this expository paper, several different variants of the mathematical induction technique are explored, and the specific scenarios are discussed in which a particular induction technique is more advantageous than the other induction strategies. The essential differences among the variants of mathematical induction are also explored. The points of separation among mathematical induction, recursion, and logical deduction are precisely analyzed, and the relationship between variations of recurrence relations and mathematical induction is explored. In this context, the application of recurrence relations and mathematical induction is considered together in a single framework for codewords over a given alphabet.
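The distinction between weak and strong induction that the abstract draws can be stated schematically; these are the standard formulations, not quoted from the paper:

```latex
% Weak induction: the inductive step assumes only the immediate predecessor.
P(0)\ \wedge\ \forall k\,\bigl(P(k)\Rightarrow P(k+1)\bigr)\ \Longrightarrow\ \forall n\,P(n)

% Strong induction: the inductive step may assume all smaller cases.
\forall k\,\Bigl(\bigl(\forall m<k\;P(m)\bigr)\Rightarrow P(k)\Bigr)\ \Longrightarrow\ \forall n\,P(n)
```

The two schemas are logically equivalent over the natural numbers, but strong induction is often the more convenient form when a property of n depends on cases smaller than n-1, as with recurrence relations over codewords.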

Keywords: alphabet, codeword, deduction, mathematical, induction, recurrence relation, strong induction, structural induction, weak induction

Procedia PDF Downloads 164
1340 Efficiency of the Strain Based Approach Formulation for Plate Bending Analysis

Authors: Djamal Hamadi, Sifeddine Abderrahmani, Toufik Maalem, Oussama Temami

Abstract:

In recent years, many finite elements have been developed for plate bending analysis. The formulated elements are based on the strain-based approach. This approach leads to the representation of the displacements by higher-order polynomial terms without the need to introduce additional internal and unnecessary degrees of freedom. Good convergence can also be obtained when the results are compared with those obtained from the corresponding displacement-based elements having the same total number of degrees of freedom. Furthermore, the plate bending elements are free from any shear locking, since they converge to the Kirchhoff solution for thin plates, contrary to the corresponding displacement-based elements. In this paper, the efficiency of the strain-based approach compared to the well-known displacement formulation is presented. The results obtained by a newly formulated plate bending element based on the strain approach and Kirchhoff theory are compared with those of some other elements. The good convergence of the newly formulated element is confirmed.

Keywords: displacement fields, finite elements, plate bending, Kirchhoff theory, strain based approach

Procedia PDF Downloads 296
1339 Performance Analysis of Encased Sand Columns in Different Clayey Soils Using 3D Numerical Method

Authors: Enayatallah Najari, Ali Noorzad, Mehdi Siavoshnia

Abstract:

One of the most practical and low-cost options for improving soft clayey soil is the use of stone columns to reduce settlement and increase bearing capacity, an approach that has been applied in various ways across projects with diverse conditions. In the current study, this improvement method is evaluated in four different weak soils with diverse properties, such as specific gravity, permeability coefficient, over-consolidation ratio (OCR), Poisson's ratio, internal friction angle, and bulk modulus, using the ABAQUS 3D finite element software. The effects of increasing and decreasing each of these factors on the settlement and lateral displacement of weak soil beds are analyzed. In the analyzed models, the properties of the sand columns and the geosynthetic cover are assumed to be constant at their optimum values, and only the soft clayey soil parameters are varied. It is also demonstrated that the OCR value can play a determinant role in soil resistance.

Keywords: stone columns, geosynthetic, finite element, 3D analysis, soft soils

Procedia PDF Downloads 361
1338 Effect of Atmospheric Pressure on the Flow at the Outlet of a Propellant Nozzle

Authors: R. Haoui

Abstract:

The purpose of this work is to simulate the flow at the exit of the Vulcan 1 engine of the European launcher Ariane 5. The geometry of the propellant nozzle has already been determined using the method of characteristics. The pressure in the outlet section of the nozzle is less than the atmospheric pressure on the ground, causing the existence of oblique and normal shock waves at the exit. During the ascent of the launcher, the atmospheric pressure decreases and the shock wave disappears. The code allows the capture of the shock wave at the nozzle exit. The numerical technique uses the flux vector splitting method of Van Leer to ensure convergence and avoid calculation instabilities. The Courant-Friedrichs-Lewy (CFL) coefficient and the mesh size are selected to ensure numerical convergence. The system of nonlinear partial differential equations that governs this flow is solved with an explicit unsteady numerical scheme by the finite volume method. The accuracy of the solution depends on the size of the mesh and also on the time step used in the discretized equations. We have chosen in this study the mesh that gives us a stationary solution with good accuracy.
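The Van Leer flux vector splitting mentioned above decomposes the flux into forward- and backward-moving parts so that each can be upwinded stably. For the mass flux of the 1D Euler equations the standard textbook form (not necessarily the paper's exact implementation) reads:

```latex
% Van Leer splitting of the mass flux for |M| \le 1
% (\rho: density, a: speed of sound, M = u/a: Mach number);
% the momentum and energy fluxes split analogously, and
% F = F^{+} + F^{-} recovers the unsplit flux.
f^{\pm}_{\mathrm{mass}} = \pm\,\frac{\rho a}{4}\,(M \pm 1)^{2}, \qquad |M| \le 1,

% In fully supersonic flow the splitting degenerates to pure upwinding:
F^{+} = F,\; F^{-} = 0 \quad (M \ge 1), \qquad
F^{+} = 0,\; F^{-} = F \quad (M \le -1).
```

Because $f^{+}_{\mathrm{mass}} \ge 0$ and $f^{-}_{\mathrm{mass}} \le 0$ for all subsonic Mach numbers and the split fluxes are continuously differentiable through the sonic points, the scheme remains stable where shocks form at the nozzle exit.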

Keywords: finite volume, launchers, nozzles, shock wave

Procedia PDF Downloads 289
1337 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements

Authors: Henok Hailemariam, Frank Wuttke

Abstract:

Collapsible soils are weak soils that appear to be stable in their natural state, normally a dry condition, but deform rapidly under saturation (wetting), thus generating large and unexpected settlements which often have disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require the moisture content as an input parameter, thus providing the opportunity to obtain an accurate estimate of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on the soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large-scale sub-surface soil exploration, spatial sub-surface soil dielectric data over wide areas and great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of the model's predictions is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model predictions and the experimental results is obtained.

Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence

Procedia PDF Downloads 363
1336 Incorporating Polya’s Problem Solving Process: A Polytechnic Mathematics Module Case Study

Authors: Pei Chin Lim

Abstract:

School of Mathematics and Science of Singapore Polytechnic offers a Basic Mathematics module to students who did not pass GCE O-Level Additional Mathematics. These students are weaker in Mathematics. In particular, they struggle with word problems and tend to leave them blank in tests and examinations. In order to improve students’ problem-solving skills, the school redesigned the Basic Mathematics module to incorporate Polya’s problem-solving methodology. During tutorial lessons, students have to work through learning activities designed to raise their metacognitive awareness by following Polya’s problem-solving process. To assess the effectiveness of the redesign, students’ working for a challenging word problem in the mid-semester test were analyzed. Sixty-five percent of students attempted to understand the problem by making sketches. Twenty-eight percent of students went on to devise a plan and implement it. Only five percent of the students still left the question blank. These preliminary results suggest that with regular exposure to an explicit and systematic problem-solving approach, weak students’ problem-solving skills can potentially be improved.

Keywords: mathematics education, metacognition, problem solving, weak students

Procedia PDF Downloads 162
1335 The Impact of Host Country Effects on Transferring HRM Practices from Western Headquarters to Ukrainian Subsidiaries

Authors: Olga Novitskaya

Abstract:

The emerging markets of post-USSR countries have attracted Western multinational companies; however, weak institutions and unstable host country environments have hindered the implementation of successful management practices. The Ukrainian market, in light of recent events, is particularly interesting to study for its compatibility with Western businesses. This paper focuses on factors that can facilitate or inhibit the transfer of human resource management practices from Western headquarters to Ukrainian subsidiaries. To explain the national context’s effects better, a business systems approach has been applied to a qualitative study of 16 wholly owned Western subsidiaries, dissecting the reasons for a weak integration of Western practices in Ukraine. Results show that underdeveloped institutions have forced companies to develop additional practices that compensate for national weaknesses, as well as to adjust to a constantly changing environment. Flexibility and local responsiveness were observed as vital for success in Ukraine.

Keywords: human resource management, Ukraine, business system, multinational companies, HR practices

Procedia PDF Downloads 393
1334 The Application of Creative Economy in National R&D Programs of Health Technology (HT) Area in Korea

Authors: Hong Bum Kim

Abstract:

The health technology (HT) area has high growth potential because of global trends such as ageing and economic development. For its high employment effect and its capability for creating new business, HT is being considered one of the major next-generation growth engines. In particular, convergence technologies that emerge from the fusion of HT with other technological areas are emphasized for new industry creation in Korea as part of the Creative Economy. In this study, the current status of the HT area in Korea is analyzed. The shift in the emphasized technological areas of HT-related national R&D programs is statistically reviewed. The current level of HT-related technologies such as BT, IT, and NT is investigated in this context. The existing research system for HT-convergence technology development, such as the establishment of research centers, is also analyzed. Finally, the proposed research support system, such as the system of legislation for developing the HT area as a main component of the Creative Economy in Korea, is analyzed. This analysis of technology trends and policy will help to draw a new direction for the progression of R&D programs in the HT area. Policy improvements, such as reorganization of the legal system and measures for social agreement on the burden of expenses, can be deduced from these results.

Keywords: HT, creative economy, policy, national R&D programs

Procedia PDF Downloads 387
1333 Smartphone Photography in Urban China

Authors: Wen Zhang

Abstract:

The smartphone plays a significant role in media convergence, and smartphone photography is reconstructing the way we communicate and think. This article aims to explore the smartphone photography practices of urban Chinese smartphone users and the images produced by smartphones from a techno-cultural perspective. The analysis draws on two types of data: semi-structured interviews with 21 participants, and the images created by those participants. The findings are organised in two parts. The first part summarises the current tendencies in capturing, editing, sharing, and archiving digital images via smartphones. The second part shows that food and the selfie/anti-selfie are the preferred subjects of smartphone photographic images from a technical and multi-purpose perspective, and demonstrates that screenshots and image texts are new genres of non-photographic images frequently made with smartphones, which contribute to improving operational efficiency, disseminating information, and sharing knowledge. The analyses illustrate the positive interplay between smartphones and photography enthusiasm and practices, based on the diffusion of innovations theory, and prompt us to rethink the value of photographs and the practice of 'photographic seeing' on the screen itself.

Keywords: digital photography, image-text, media convergence, photographic-seeing, selfie/anti-selfie, smartphone, technological innovation

Procedia PDF Downloads 354
1332 Modelling of Structures by Advanced Finite Elements Based on the Strain Approach

Authors: Sifeddine Abderrahmani, Sonia Bouafia

Abstract:

The finite element method is the most practical tool for the analysis of structures, whatever their geometrical shape and behavior. It is extensively used in many high-tech industries, such as civil or military engineering, for the modeling of bridges, motor bodies, fuselages, and airplane wings. Additionally, experience demonstrates that engineers like modeling their structures using the most basic finite elements. Numerous finite element models may be utilized in numerical analysis depending on the interpolation field that is selected, and it is generally known that convergence to the proper value occurs considerably more quickly with a good displacement pattern than with a poor one, saving computation time. The method for creating finite elements using the strain-based approach (S.B.A.) is presented in this paper. When the results are compared with those provided by equivalent displacement-based elements having the same total number of degrees of freedom, an excellent convergence can be obtained through application and validation tests using recently developed membrane elements, plate bending elements, and flat shell elements. The effectiveness and performance of the strain-based finite elements in modeling structures are proven by the findings for deflections and stresses.

Keywords: finite elements, plate bending, strain approach, displacement formulation, shell element

Procedia PDF Downloads 99
1331 Shaft Friction of Bored Pile Socketed in Weathered Limestone in Qatar

Authors: Thanawat Chuleekiat

Abstract:

Socketing of bored piles in rock is always a matter of debate on construction sites between consultants and contractors. The socketing depth normally depends on the type of rock, the depth at which the rock is available below the pile cap, and the load-carrying capacity of the pile. In this paper, field load test data for drilled shafts socketed in weathered limestone, obtained from conventional static pile load tests and dynamic pile load tests, were reviewed to evaluate the unit shaft friction for bored piles socketed in weathered limestone (weak rock). The borehole drilling data were also reviewed in conjunction with the pile test results. In addition, the back-calculated unit shaft friction was compared against various empirical methods for bored piles socketed in weak rock. The paper concludes with an estimate of the ultimate unit shaft friction from the case study in Qatar for preliminary design.

Keywords: piled foundation, weathered limestone, shaft friction, rock socket, pile load test

Procedia PDF Downloads 180
1330 Photon Blockade in Non-Hermitian Optomechanical Systems with Nonreciprocal Couplings

Authors: J. Y. Sun, H. Z. Shen

Abstract:

We study photon blockade at exceptional points for a non-Hermitian optomechanical system coupled to a driven whispering-gallery-mode microresonator with two nanoparticles, under the weak optomechanical coupling approximation, where exceptional points emerge periodically as the relative angle of the nanoparticles is controlled. We find that conventional photon blockade occurs at exceptional points for the eigenenergy resonance of the single-excitation subspace driven by a laser field, and we discuss its physical origin. Under the weak driving condition, we analyze the influence of the different parameters on conventional photon blockade. We also investigate conventional photon blockade at nonexceptional points, where it exists at two optimal detunings because the eigenstates of the single-excitation subspace split from one (a coalescence) at exceptional points into two at nonexceptional points. Unconventional photon blockade can occur at nonexceptional points, while it does not exist at exceptional points, since the two distinct quantum pathways to the two-photon state are not formed and the destructive quantum interference therefore cannot occur. The realization of photon blockade in our proposal provides a viable and flexible way to prepare single-photon sources in non-Hermitian optomechanical systems.

Keywords: optomechanical systems, photon blockade, non-hermitian, exceptional points

Procedia PDF Downloads 140
1329 Particle Filter State Estimation Algorithm Based on Improved Artificial Bee Colony Algorithm

Authors: Guangyuan Zhao, Nan Huang, Xuesong Han, Xu Huang

Abstract:

To solve the problem of sample impoverishment in the traditional particle filter algorithm and achieve accurate state estimation in nonlinear systems, a particle filter method based on an improved artificial bee colony (ABC) algorithm is proposed. The algorithm simulates the foraging and optimization behavior of bees, moving particles toward the high-likelihood region of the posterior probability to improve the rationality of the particle distribution. An opposition-based learning (OBL) strategy is introduced to optimize the initial population of the artificial bee colony algorithm. A convergence factor is introduced into the neighborhood search strategy to limit the search range and improve convergence speed. Finally, the crossover and mutation operations of the genetic algorithm are introduced into the search mechanism of the onlooker bees, allowing the algorithm to escape local extrema quickly and continue searching for the global extremum, thereby improving its optimization ability. Simulation results show that the improved method increases the estimation accuracy of the particle filter, ensures particle diversity, and improves the rationality of the particle distribution.
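The opposition-based learning initialisation step can be sketched as follows (the population size, bounds, and fitness function are illustrative assumptions, not the paper's settings):

```python
import random

def obl_initialize(n_particles, lb, ub, fitness):
    """Opposition-based learning (OBL) initialisation: generate a random
    population and its opposite (x_opp = lb + ub - x), then keep the
    fitter half of the merged set (lower fitness = better)."""
    pop = [[random.uniform(l, u) for l, u in zip(lb, ub)] for _ in range(n_particles)]
    opp = [[l + u - x for x, l, u in zip(p, lb, ub)] for p in pop]
    merged = sorted(pop + opp, key=fitness)
    return merged[:n_particles]

# Toy fitness: squared distance from the point (1, 1).
best = obl_initialize(10, [-5.0, -5.0], [5.0, 5.0],
                      lambda p: (p[0] - 1.0) ** 2 + (p[1] - 1.0) ** 2)
```

Covering the search space with a candidate and its opposite gives the colony a better-spread starting population than purely random draws.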

Keywords: particle filter, impoverishment, state estimation, artificial bee colony algorithm

Procedia PDF Downloads 151
1328 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and use resources inefficiently. In particular, different approaches may be required for the complex, global engineering problems frequently encountered in real life. The bigger and more complex a problem, the harder it is to solve; such problems are called NP-hard (nondeterministic polynomial time hard) in the literature. The main reasons for recommending metaheuristic algorithms for such problems are their use of simple concepts, simple mathematical equations and structures, and derivative-free mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, since they can be applied to different problems without very specific modifications. Thanks to these features, they can easily be embedded even in many hardware devices, and the approach can also be used in trending application areas such as IoT, big data, and parallel architectures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach. It is based on chaos theory and helps the underlying algorithm improve population diversity and convergence speed. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA identifies four types of chimpanzee groups (attacker, barrier, chaser, and driver) and proposes a suitable mathematical model for them based on the varied intelligence and sexual motivations of chimpanzees. However, the algorithm struggles with convergence rate and with escaping local optimum traps when solving high-dimensional problems.
Although ChOA and some of its variants use strategies to overcome these problems, they have proven insufficient. Therefore, this study describes a newly expanded variant. In the algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism governs the transitions between phases. This flexible structure solves the slow convergence problem of ChOA and improves its accuracy on multidimensional problems, aiming for success on global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy of ChOA and solves its slow convergence problem; 2) it proposes new hybrid movement strategy models for the position updates of search agents; 3) it achieves success in solving global, complex, and constrained problems; 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on 8 benchmark functions as well as 2 classical constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison, since its working model is similar. The obtained results show that the proposed algorithm performs better than or comparably to the compared algorithms.
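The chaotic ingredient can be sketched as follows (the specific map, parameters, and update rule here are illustrative assumptions, not necessarily those used in Ex-ChOA): a logistic map replaces uniform pseudo-random draws in the position update, injecting a deterministic but highly irregular sequence that improves population diversity.

```python
def logistic_chaos(n, x0=0.7, r=4.0):
    """Generate n values of the logistic map x <- r*x*(1-x).
    With r = 4 the map is chaotic on (0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_position_update(pos, leader, coeff=2.0):
    """Illustrative agent update: move toward a leader position with
    chaotic rather than pseudo-random step coefficients."""
    c = logistic_chaos(len(pos))
    return [p + coeff * ci * (l - p) for p, l, ci in zip(pos, leader, c)]

new_pos = chaotic_position_update([0.0, 0.0], [1.0, 1.0])
```

Because the chaotic sequence never settles into a fixed point, it helps the swarm avoid premature convergence on the local optima that plain ChOA struggles with.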

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 77
1327 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. 
Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
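A minimal sketch of the distillation term each client would minimise against the shared teacher (written in pure Python for clarity; the temperature value and example logits are illustrative assumptions):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Hinton-style distillation term: KL(teacher || student) at temperature T,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T * T) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

loss = distillation_loss([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
```

In the federated setting described above, only such distilled signals (soft targets or losses), never raw client data, would cross the network, which is what keeps communication cost low and privacy intact.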

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 75
1326 Kirchoff Type Equation Involving the p-Laplacian on the Sierpinski Gasket Using Nehari Manifold Technique

Authors: Abhilash Sahu, Amit Priyadarshi

Abstract:

In this paper, we discuss the existence of weak solutions of a Kirchhoff-type boundary value problem on the Sierpinski gasket, where S denotes the Sierpinski gasket in R² and S₀ is its intrinsic boundary, M: R → R is a positive function, h: S × R → R is a suitable function forming part of our main equation, and ∆p denotes the p-Laplacian, with p > 1. We first define a weak solution for our problem and then show the existence of at least two solutions under suitable conditions. There is no well-known concept of a generalized derivative of a function on a fractal domain; only recently have differential operators such as the Laplacian and the p-Laplacian been defined on fractal domains. We recall this result first and then address the above problem. In the literature, Laplacian and p-Laplacian equations are studied extensively on regular domains (open connected domains), in contrast to fractal domains. On fractal domains, Laplacian equations have been studied more than p-Laplacian equations, probably because in that case the corresponding function space is reflexive and many minimax theorems that work for regular domains are applicable there, which is not the case for the p-Laplacian. This motivates us to study equations involving the p-Laplacian on the Sierpinski gasket. Problems on fractal domains lead to nonlinear models such as reaction-diffusion equations on fractals, problems in elastic fractal media, and fluid flow through fractal regions. We study the above p-Laplacian equations on the Sierpinski gasket using the fibering map technique on the Nehari manifold. Many authors have studied Laplacian and p-Laplacian equations on regular domains using this Nehari manifold technique. In general, the Euler functional associated with such a problem is Frechet or Gateaux differentiable, so a critical point becomes a solution to the problem.
Also, the function space considered is reflexive, and hence a weakly convergent subsequence can be extracted from any bounded sequence. In our case, however, the Euler functional is not differentiable, and the function space is not known to be reflexive. Overcoming these issues, we are still able to prove the existence of at least two solutions of the given equation.
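For context, in the classical (regular-domain) setting the Nehari manifold associated with an energy functional J on a function space W, and the fibering maps used to analyse it, take the form:

```latex
\mathcal{N} = \left\{ u \in W \setminus \{0\} : \langle J'(u), u \rangle = 0 \right\},
\qquad
\phi_u(t) = J(tu), \quad t > 0,
```

so that tu ∈ 𝒩 exactly when φ′ᵤ(t) = 0, and minimisers of J over suitable subsets of 𝒩 are candidate weak solutions. On the Sierpinski gasket the analogous construction is carried out with the fractal p-energy in place of the usual Dirichlet energy.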

Keywords: Euler functional, p-Laplacian, p-energy, Sierpinski gasket, weak solution

Procedia PDF Downloads 234
1325 Convergence and Stability in Federated Learning with Adaptive Differential Privacy Preservation

Authors: Rizwan Rizwan

Abstract:

This paper provides an overview of Federated Learning (FL) and its application in enhancing data security, privacy, and efficiency. FL utilizes three distinct architectures to ensure that privacy is never compromised: individual edge devices are trained and their models aggregated on a server without sharing raw data. This approach not only yields secure models without data sharing but also offers a highly efficient privacy-preserving solution with improved security and data access. We also discuss various frameworks used in FL and its integration with machine learning, deep learning, and data mining. To address the challenges of multi-party collaborative modeling scenarios, we briefly review an FL scheme that combines an adaptive gradient descent strategy with a differential privacy mechanism. The adaptive learning rate algorithm adjusts the gradient descent process to avoid issues such as model overfitting and fluctuations, thereby enhancing modeling efficiency and performance in multi-party computation scenarios. Additionally, to cater to ultra-large-scale distributed secure computing, the research introduces a differential privacy mechanism that defends against various background-knowledge attacks.
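The per-update differential privacy mechanism typically used in such schemes can be sketched as follows, clipping each gradient and then adding Gaussian noise (the clipping bound and noise multiplier here are illustrative assumptions, not values from the reviewed scheme):

```python
import math
import random

def dp_clip_and_noise(grad, clip_norm=1.0, noise_mult=1.1):
    """Gaussian mechanism for one client update: clip the gradient to an
    L2 norm of clip_norm, then add zero-mean Gaussian noise whose scale
    is tied to the clipping bound (the update's sensitivity)."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / max(norm, 1e-12))
    clipped = [g * scale for g in grad]
    sigma = noise_mult * clip_norm
    return [g + random.gauss(0.0, sigma) for g in clipped]

private_grad = dp_clip_and_noise([3.0, 4.0])
```

Clipping bounds each client's influence on the aggregate, and the noise masks any single record's contribution, which is what defends against the background-knowledge attacks mentioned above.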

Keywords: federated learning, differential privacy, gradient descent strategy, convergence, stability, threats

Procedia PDF Downloads 30
1324 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for detecting facial expressions and emotions by automatically extracting features. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm for face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on extracting a different type of coarse feature with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We further introduce a Dynamic Soft-Margin SoftMax. The conventional SoftMax reaches the gold labels too soon, which leads the model to overfitting, because it cannot produce adequately discriminative feature vectors for some variant class labels. We reduce the risk of overfitting by using a dynamic rather than static input tensor shape in the SoftMax layer and by specifying a desired soft margin, which acts as a controller of how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss aims to compact same-class labels and separate different-class labels in the normalized log domain: we penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors. In other words, we assign more weight to classes that are easily confused with one another (the 'hard labels to learn').
In doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on resolving the weak convergence of the Adam optimizer on non-convex problems. Our optimizer works with an alternative gradient-updating procedure using an exponentially weighted moving average for faster convergence, and exploits a weight decay method to drastically reduce the learning rate near optima and reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013, a 16% improvement over the first rank after 10 years; 90.73% on RAF-DB; and 100% k-fold average accuracy on CK+, providing top performance compared with other networks that require much larger training datasets.
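A minimal sketch of an additive soft-margin softmax loss of the kind described (a static margin is shown for clarity; the paper's dynamic version would vary the margin and tensor shape during training, and the margin value below is an illustrative assumption):

```python
import math

def soft_margin_softmax_loss(logits, target, margin=0.35):
    """Additive-margin softmax cross-entropy: subtract a margin from the
    target-class logit before normalising, forcing the model to keep a
    larger gap between the target class and its competitors."""
    adj = [z - margin if i == target else z for i, z in enumerate(logits)]
    m = max(adj)                               # subtract max for numerical stability
    exps = [math.exp(z - m) for z in adj]
    return -math.log(exps[target] / sum(exps))

loss = soft_margin_softmax_loss([2.0, 1.0, 0.5], target=0)
```

Because the margin penalizes even correctly classified samples whose target logit is not comfortably ahead, the loss keeps producing gradient signal where plain softmax would saturate, which is the overfitting behavior the abstract describes.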

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 74
1323 Stability of a Natural Weak Rock Slope under Rapid Water Drawdowns: Interaction between Guadalfeo Viaduct and Rules Reservoir, Granada, Spain

Authors: Sonia Bautista Carrascosa, Carlos Renedo Sanchez

Abstract:

The effect of a rapid drawdown is a classical scenario to be considered in slope stability under submerged conditions. This situation arises when totally or partially submerged slopes experience a descent of the external water level; it is a typical verification in dam engineering, as reservoir water levels commonly fluctuate noticeably between seasons and for operational reasons. Although the scenario is well known and generally predictable, site conditions can complicate its assessment, and unexpected external factors can reduce the stability of, or even cause the failure of, a slope under rapid drawdown. This paper describes and discusses the interaction between two different infrastructures, a dam and a highway, and the impact of the rapid drawdown of the Rules Dam, in the province of Granada (south of Spain), on the stability of a natural rock slope overlain by the north abutment of a viaduct of the A-44 Highway. In 2011, with both infrastructures already constructed, delivered, and under operation, movements began to be recorded in the approach embankment and north abutment of the Guadalfeo Viaduct, which carries the highway across the tail of the reservoir. The embankment and abutment were founded on a low-angle natural rock slope formed by grey graphitic phyllites, distinctly weathered and intensely fractured, with pre-existing fault and weak planes. After the first filling of the reservoir to a relative level of 243 m, three consecutive drawdowns were recorded in the autumns of 2010, 2011, and 2012, to relative levels of 234 m, 232 m, and 225 m.
To understand the effect of these drawdowns on the strength of the weak rock mass and on its stability, a new geological model was developed after reviewing all available ground investigations, updating the geological mapping of the area, and supplementing it with additional geotechnical and geophysical investigations. Together with this information, rainfall and reservoir level data were reviewed in detail and incorporated into the monitoring interpretation. The analysis of the monitoring data and the new geological and geotechnical interpretation, supported by the limit equilibrium software Slide2, concludes that the movement follows the direction of the schistosity of the phyllitic rock mass, which coincides with the direction of the natural slope, indicating a deep-seated movement of the whole slope towards the reservoir. As part of these conclusions, the solutions considered to reinstate the highway infrastructure to the required factor of safety (FoS) are described, and the geomechanical characterization of these weak rocks is discussed, together with the influence of water level variations, not only on the water pressure regime but also on geotechnical behavior through the modification of strength and deformability parameters.

Keywords: monitoring, rock slope stability, water drawdown, weak rock

Procedia PDF Downloads 160
1322 The Association of Southeast Asian Nations (ASEAN) and the Dynamics of Resistance to Sovereignty Violation: The Case of East Timor (1975-1999)

Authors: Laura Southgate

Abstract:

The Association of Southeast Asian Nations (ASEAN), as well as much of the scholarship on the organisation, celebrates its ability to uphold the principle of regional autonomy, understood as the norm of non-intervention by external powers in regional affairs. Yet, in practice, this norm has been repeatedly violated. This dichotomy between rhetoric and practice suggests an interesting avenue for further study. The East Timor crisis (1975-1999) has been selected as a case study to test the dynamics of ASEAN state resistance to sovereignty violation in two distinct timeframes: Indonesia's initial invasion of the territory in 1975, and the ensuing humanitarian crisis in 1999, which resulted in a UN-mandated, Australian-led peacekeeping intervention force. These time periods demonstrate variation in the dependent variable, and it is necessary to observe covariation in order to derive observations in support of a causal theory. To establish covariation, the independent variable is a continuous variable characterised by variation in convergence of interest; a change in this variable should change the value of the dependent variable, thus establishing causal direction. This paper investigates the history of ASEAN's relationship to the norm of non-intervention. It offers an alternative understanding of ASEAN's history, written in terms of the relationship between a key ASEAN state, which I call a 'vanguard state', and selected external powers. The paper considers when ASEAN resistance to sovereignty violation has succeeded and when it has failed, and contends that variation in the outcomes of vanguard state resistance to sovereignty violation is best explained by the level of interest convergence between the ASEAN vanguard state and designated external actors.
Evidence will be provided to support the hypothesis that in 1999, ASEAN’s failure to resist violations to the sovereignty of Indonesia was a consequence of low interest convergence between Indonesia and the external powers. Conversely, in 1975, ASEAN’s ability to resist violations to the sovereignty of Indonesia was a consequence of high interest convergence between Indonesia and the external powers. As the vanguard state, Indonesia was able to apply pressure on the ASEAN states and obtain unanimous support for Indonesia’s East Timor policy in 1975 and 1999. However, the key factor explaining the variance in outcomes in both time periods resides in the critical role played by external actors. This view represents a serious challenge to much of the existing scholarship that emphasises ASEAN’s ability to defend regional autonomy. As these cases attempt to show, ASEAN autonomy is much more contingent than portrayed in the existing literature.

Keywords: ASEAN, East Timor, intervention, sovereignty

Procedia PDF Downloads 358
1321 Algorithms for Computing of Optimization Problems with a Common Minimum-Norm Fixed Point with Applications

Authors: Apirak Sombat, Teerapol Saleewong, Poom Kumam, Parin Chaipunya, Wiyada Kumam, Anantachai Padcharoen, Yeol Je Cho, Thana Sutthibutpong

Abstract:

This research studies a two-step iteration process defined over a finite family of σ-asymptotically quasi-nonexpansive nonself-mappings. Strong convergence is guaranteed in the framework of Banach spaces with additional structural properties, including strict and uniform convexity, reflexivity, and smoothness assumptions. Following a projection technique similar to that used for nonself-mappings in Hilbert spaces, we use the generalized projection to construct a point within the corresponding domain. Moreover, we introduce the duality mapping and its inverse to overcome the unavailability of the duality representation exploited by Hilbert space theorists. We then apply our results for σ-asymptotically quasi-nonexpansive nonself-mappings to solve for the ideal efficiency of vector optimization problems composed of finitely many objective functions, and show that the obtained solution is the closest to the origin. An illustrative numerical example is given to support our results.
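A representative two-step scheme of this general type (shown here only for orientation; the paper's exact iteration may differ) uses the normalized duality mapping J and the generalized projection Π_C onto the domain C:

```latex
y_n = J^{-1}\!\big(\alpha_n J x_n + (1-\alpha_n)\, J T_n x_n\big), \qquad
x_{n+1} = \Pi_C\, J^{-1}\!\big(\beta_n J x_n + (1-\beta_n)\, J T_n y_n\big),
```

where {αₙ}, {βₙ} ⊂ (0, 1) are control sequences and Tₙ cycles through the finite family of mappings. In Hilbert spaces J is the identity and Π_C reduces to the metric projection, which is why the duality machinery is needed only in the Banach-space setting.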

Keywords: asymptotically quasi-nonexpansive nonself-mapping, strong convergence, fixed point, uniformly convex and uniformly smooth Banach space

Procedia PDF Downloads 260