Search results for: random generation
4707 An Investigation of System and Operating Parameters on the Performance of Parabolic Trough Solar Collector for Power Generation
Authors: Umesh Kumar Sinha, Y. K. Nayak, N. Kumar, Swapnil Saurav, Monika Kashyap
Abstract:
The authors investigate the effect of system and operating parameters on the performance of a high-temperature solar concentrator for power generation. The effects were investigated using developed mathematical expressions for collector efficiency, heat removal factor, fluid outlet temperature, and power, and the results were simulated using a C++ program. The simulated results were plotted to examine the effect of the thermal and radiative loss parameters on the collector efficiency, heat removal factor, fluid outlet temperature, and rise of temperature, as well as the effect of mass flow rate on the fluid outlet temperature. In connection with power generation, plots were drawn for the effect of (TM–TAMB) on the variation of concentration efficiency, of concentrator irradiance on PM/PMN, and of evaporation temperature on the thermal-to-electric conversion efficiency and overall efficiency of the solar power plant.
Keywords: parabolic trough solar collector, radiative and thermal loss parameters, collector efficiency, heat removal factor, fluid outlet and inlet temperatures, rise of temperature, mass flow rate, conversion efficiency, concentrator irradiance
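A back-of-the-envelope illustration of the standard heat-removal-factor and collector-efficiency relations the abstract refers to (Duffie–Beckman form) is sketched below; every parameter value is an illustrative assumption, not a value simulated in the paper, and the paper's own C++ implementation is not reproduced here.

```python
# Standard concentrating-collector relations: heat removal factor F_R, useful gain Q_u,
# efficiency and fluid outlet temperature. All numbers are illustrative assumptions.
import math

m_dot, cp = 0.6, 2500.0               # fluid mass flow rate (kg/s), specific heat (J/kg.K)
A_r, U_L, F_prime = 2.5, 8.0, 0.95    # receiver area (m^2), loss coefficient (W/m^2.K), efficiency factor
C, G_b, eta_opt = 25.0, 800.0, 0.75   # concentration ratio, beam irradiance (W/m^2), optical efficiency
T_in, T_amb = 420.0, 300.0            # fluid inlet and ambient temperatures (K)

F_R = (m_dot * cp / (A_r * U_L)) * (1 - math.exp(-A_r * U_L * F_prime / (m_dot * cp)))
A_a = C * A_r                                                   # aperture area
Q_u = F_R * A_a * (eta_opt * G_b - U_L * (T_in - T_amb) / C)    # useful heat gain, W
eta = Q_u / (A_a * G_b)                                         # collector efficiency
T_out = T_in + Q_u / (m_dot * cp)                               # fluid outlet temperature

print(f"F_R = {F_R:.3f}, efficiency = {eta:.3f}, outlet temperature = {T_out:.1f} K")
```

Raising the thermal loss parameter U_L or the temperature difference (T_in - T_amb) in this expression lowers the efficiency, which is the qualitative trend the plots in the paper investigate.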
Procedia PDF Downloads 322
4706 Impact of the Energy Transition on Security of Supply - A Case Study of Vietnam Power System in 2030
Authors: Phuong Nguyen, Trung Tran
Abstract:
Along with the ongoing global energy transition, Vietnam has indicated a strong commitment at recent COP events to a zero-carbon emission target. However, it is a real challenge for the nation to replace fossil-fired power plants with a significant amount of renewable energy sources (RES) while maintaining security of supply. The unpredictability and variability of RES would cause technical issues for supply-demand balancing, network congestion, and system balancing, among others. It is crucial to take these into account while planning the future grid infrastructure. This study will address both generation and transmission adequacy and present a comprehensive analysis of the impact of the ongoing energy transition on the development of the Vietnam power system in 2030. This will provide insight for creating a secure, stable, and affordable pathway for the country in the upcoming years.
Keywords: generation adequacy, transmission adequacy, security of supply, energy transition
Procedia PDF Downloads 86
4705 Measurement of Coal Fineness, Air Fuel Ratio, and Fuel Weight Distribution in a Vertical Spindle Mill’s Pulverized Fuel Pipes at Classifier Vane 40%
Authors: Jayasiler Kunasagaram
Abstract:
In power generation, coal fineness is crucial to maintain flame stability, ensure combustion efficiency, and lower emissions to the environment. In order for the pulverized coal to react effectively in the boiler furnace, at least 70% of the coal particles need to be finer than 74 μm. This paper presents the experimental results of coal fineness, air fuel ratio, and fuel weight distribution in pulverized fuel pipes at classifier vane 40%. The aim of this experiment is to extract the pulverized coal isokinetically and investigate the data accordingly. Dirty air velocity, coal sample extraction, and coal sieving experiments were performed to measure coal fineness. The experimental results show that the required coal fineness can be achieved at 40% classifier vane, although it does not surpass the desired value by a great margin.
Keywords: coal power, emissions, isokinetic sampling, power generation
Procedia PDF Downloads 609
4704 Stock Prediction and Portfolio Optimization Thesis
Authors: Deniz Peksen
Abstract:
This thesis aims to predict the trend movement of stock closing prices and to maximize a portfolio by utilizing the predictions. In this context, the study aims to define a stock portfolio strategy from models created using Logistic Regression, Gradient Boosting, and Random Forest. Recently, predicting the trend of stock prices has gained a significant role in making buy and sell decisions and generating returns with investment strategies formed by machine-learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: it is a classification problem focusing on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing. Training data are between 2002-06-18 and 2016-12-30, validation data are between 2017-01-02 and 2019-12-31, and testing data are between 2020-01-02 and 2022-03-17. We determine the Hold Stock Portfolio, the Best Stock Portfolio, and the USD-TRY exchange rate as benchmarks that we should outperform, and we compare the return of our machine-learning-based portfolio on the test data with the returns of these benchmarks. We assess model performance with the help of the ROC-AUC score and lift charts, and we use Logistic Regression, Gradient Boosting, and Random Forest with a grid search approach to fine-tune hyper-parameters. As a result of the empirical study, the existence of uptrends and downtrends in five stocks could not be predicted by the models. When these predictions were used to define buy and sell decisions for a model-based portfolio, the portfolio failed on the test dataset: the model-based buy and sell decisions generated a stock portfolio strategy whose returns cannot outperform non-model portfolio strategies. We found that any effort to predict a trend formulated on the stock price is a challenge, and our results agree with the Random Walk Theory, which claims that stock prices and price changes are unpredictable. Our model iterations failed on the test dataset; although we built several good models on the validation dataset, they failed on the test dataset. We implemented Random Forest, Gradient Boosting, and Logistic Regression and discovered that the complex models did not provide any advantage or additional performance over Logistic Regression; more complexity did not lead to better performance, so using a complex model is not an answer to the stock prediction problem. Our approach was to predict the trend instead of the price, which converted the problem into classification. However, this labeling approach does not solve the stock prediction problem, nor does it refute the accuracy of the Random Walk Theory for stock prices.
Keywords: stock prediction, portfolio optimization, data science, machine learning
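A minimal sketch of the grid-search model-selection workflow described above; the feature names, data file, and construction of the 20-day trend label are illustrative assumptions, not the thesis's actual dataset or features.

```python
# Grid-search comparison of Logistic Regression, Gradient Boosting and Random Forest
# on a 20-trading-day trend label. File name and feature columns are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score

df = pd.read_csv("stock_prices.csv", parse_dates=["date"]).sort_values("date")
df["target"] = (df["close"].shift(-20) > df["close"]).astype(int)   # uptrend in next 20 days
features = ["return_1d", "return_5d", "rsi_14", "volume_change"]    # hypothetical features
df = df.iloc[:-20]                                                  # last 20 rows have no label

train = df[(df["date"] >= "2002-06-18") & (df["date"] <= "2016-12-30")]
valid = df[(df["date"] >= "2017-01-02") & (df["date"] <= "2019-12-31")]
test = df[(df["date"] >= "2020-01-02") & (df["date"] <= "2022-03-17")]

models = {
    "logistic_regression": (LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1, 10]}),
    "gradient_boosting": (GradientBoostingClassifier(), {"n_estimators": [100, 300], "max_depth": [2, 3]}),
    "random_forest": (RandomForestClassifier(), {"n_estimators": [200, 500], "max_depth": [3, 5, None]}),
}
for name, (estimator, grid) in models.items():
    search = GridSearchCV(estimator, grid, scoring="roc_auc", cv=3)
    search.fit(train[features], train["target"])
    valid_auc = roc_auc_score(valid["target"], search.predict_proba(valid[features])[:, 1])
    print(f"{name}: best params {search.best_params_}, validation ROC-AUC {valid_auc:.3f}")
```

The held-out test window would only be touched once, after model selection on the validation window, mirroring the train/validation/test split described in the abstract.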
Procedia PDF Downloads 80
4703 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic
Authors: Diogen Babuc
Abstract:
The motivation for this work is the question of how many devote themselves to discovering something in the world of science, where much is discerned and revealed but, at the same time, much remains unknown. Methods: The insightful elements of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming a specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between the values a and b = a+3. The obtained value is stored in a variable so that it remains constant during the run of the algorithm. In relation to the given key, the string is divided into several groups of substrings, each of length k characters. The next step involves encoding each substring from the list of existing substrings. Encoding is performed on the basis of the Caesar algorithm, i.e., shifting by k characters; however, k is incremented by 1 when moving to the next substring in the list, and when the value of k becomes greater than b+1, it returns to its initial value. The algorithm is executed, following the same procedure, until the last substring in the list is traversed. Results: Using this polyalphabetic method, ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The x character is not used when the number of characters in a substring is incompatible with the expected length. The algorithm is simple to implement, but it is questionable whether it works better than the other methods from the point of view of execution time and storage space.
Keywords: ciphering, deciphering, authentic, algorithm, polyalphabetic cipher, random key, methods comparison
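An illustrative sketch of the ciphering and deciphering procedure as described above; the lowercase-Latin alphabet handling and the treatment of non-letter characters are assumptions, not the author's exact rules.

```python
# Sketch of the described procedure: a random key k in [a, a+3], substrings of length k,
# Caesar shifts that grow by 1 per substring and reset once they exceed b+1.
import random
import string

ALPHABET = string.ascii_lowercase

def caesar_shift(text, shift):
    # Shift each lowercase Latin letter cyclically; leave other characters unchanged.
    return "".join(
        ALPHABET[(ALPHABET.index(ch) + shift) % len(ALPHABET)] if ch in ALPHABET else ch
        for ch in text
    )

def latin_djokovic_encipher(plaintext, a):
    b = a + 3
    k = random.randint(a, b)          # key chosen at random between a and b = a + 3
    groups = [plaintext[i:i + k] for i in range(0, len(plaintext), k)]
    shift, ciphertext = k, []
    for group in groups:
        ciphertext.append(caesar_shift(group, shift))
        shift += 1                    # increment the shift for the next substring
        if shift > b + 1:             # once the shift exceeds b + 1, return to the initial value
            shift = k
    return k, "".join(ciphertext)

def latin_djokovic_decipher(ciphertext, k, a):
    b = a + 3
    groups = [ciphertext[i:i + k] for i in range(0, len(ciphertext), k)]
    shift, plaintext = k, []
    for group in groups:
        plaintext.append(caesar_shift(group, -shift))
        shift += 1
        if shift > b + 1:
            shift = k
    return "".join(plaintext)

key, secret = latin_djokovic_encipher("attackatdawn", a=3)
print(secret, latin_djokovic_decipher(secret, key, a=3))
```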
Procedia PDF Downloads 103
4702 Design of a pHEMT Buffer Amplifier in mm-Wave Band around 60 GHz
Authors: Maryam Abata, Moulhime El Bekkali, Said Mazer, Catherine Algani, Mahmoud Mehdi
Abstract:
One major problem of most electronic systems operating in the millimeter-wave band is the generation of a signal with high purity and a stable carrier frequency. This problem is overcome by combining a signal from a low-frequency local oscillator (LO) with several stages of frequency multipliers. The use of these frequency multipliers to create millimeter-wave signals is an attractive alternative to direct signal generation. However, the problem of isolating the local oscillator from the other stages is always present, which leads to various mechanisms that can disturb the oscillator performance; a buffer amplifier is therefore often included at the oscillator output. In this paper, we present the study and design of a buffer amplifier in the mm-wave band using a 0.15 μm pHEMT from the UMS foundry. This amplifier will be used as part of a frequency quadrupler at 60 GHz.
Keywords: mm-wave band, local oscillator, frequency quadrupler, buffer amplifier
Procedia PDF Downloads 545
4701 Performance Evaluation of Extruded-Type Heat Sinks Used in an Inverter for Solar Power Generation
Authors: Jung Hyun Kim, Gyo Woo Lee
Abstract:
In this study, the heat release performances of three extruded-type heat sinks that can be used in an inverter for solar power generation were evaluated. The numbers of fins in the heat sinks (namely E-38, E-47, and E-76) were 38, 47, and 76, respectively, and their heat transfer areas were 1.8, 1.9, and 2.8 m². The heat release performances of the E-38, E-47, and E-76 heat sinks were measured as 79.6, 81.6, and 83.2%, respectively. The results show that the larger the heat transfer area, the higher the heat release rate. On the other hand, in this experiment, variations of the mass flow rates caused by the different cross-sectional areas of the three heat sinks may not be the major parameter of the heat release. Despite the 47.4% larger heat transfer area of the E-76 heat sink compared with the E-47, its heat release rate was higher by only 2.0%; this suggests that its heat transfer area needs to be optimized.
Keywords: solar inverter, heat sink, forced convection, heat transfer, performance evaluation
Procedia PDF Downloads 467
4700 Rethinking Classical Concerts in the Digital Era: Transforming Sound, Experience, and Engagement for the New Generation
Authors: Orit Wolf
Abstract:
Classical music confronts a crucial challenge: updating cherished concert traditions for the digital age. This paper is a journey and a quest to make classical concerts resonate with a new generation. It is not just about asking questions; it is about exploring the future of classical concerts and their potential to captivate and connect with today's audience in an era defined by change. The younger generation, known for their love of diversity, interactive experiences, and multi-sensory immersion, cannot be overlooked. This paper explores innovative strategies that forge deep connections with audiences whose relationship with classical music differs from that of the past. The urgency of this challenge drives the transformation of classical concerts. Examining classical concerts is necessary to understand how they can harmonize with contemporary sensibilities, and new dimensions in audiovisual experiences that enchant the emerging generation are sought. Classical music must embrace the technological era while staying open to fusion and cross-cultural collaboration possibilities. The role of technology and Artificial Intelligence (AI) in reshaping classical concerts is under research. The fusion of classical music with digital experiences and dynamic interdisciplinary collaborations breathes new life into the concert experience; it aligns classical music with the expectations of modern audiences, making it more relevant and engaging. Exploration extends to the structure of classical concerts: conventions are challenged, and ways to make classical concerts more accessible and captivating are sought. Inspired by innovative artistic collaborations, musical genres and styles are redefined, transforming the relationship between performers and the audience. This paper, therefore, aims to be a catalyst for dialogue and a beacon of innovation, presenting a set of critical inquiries integral to reshaping classical concerts for the digital age. As the world embraces digital transformation, classical music seeks resonance with contemporary audiences, redefining the concert experience while remaining true to its roots and embracing the revolutions of the digital age.
Keywords: new concert formats, reception of classical music, interdisciplinary concerts, innovation in the new musical era, mash-up, cross culture, innovative concerts, engaging musical performances
Procedia PDF Downloads 64
4699 DeepOmics: Deep Learning for Understanding Genome Functioning and the Underlying Genetic Causes of Disease
Authors: Vishnu Pratap Singh Kirar, Madhuri Saxena
Abstract:
Advancement in sequence data generation technologies is churning out voluminous omics data and posing a massive challenge to annotating biological functional features. With so much data available, the use of machine learning methods and tools to make novel inferences has become obvious. Machine learning methods have been successfully applied to many disciplines, including computational biology and bioinformatics, and researchers in computational biology are interested in developing novel machine learning frameworks to classify the huge amounts of biological data. In this proposal, we plan to employ novel machine learning approaches to aid the understanding of how apparently innocuous mutations (in intergenic DNA and at synonymous sites) cause diseases. We are also interested in discovering novel functional sites in the genome and mutations in them that can affect a phenotype of interest.
Keywords: genome wide association studies (GWAS), next generation sequencing (NGS), deep learning, omics
Procedia PDF Downloads 97
4698 Parameter Estimation for Contact Tracing in Graph-Based Models
Authors: Augustine Okolie, Johannes Müller, Mirjam Kretzschmar
Abstract:
We adopt a maximum-likelihood framework to estimate parameters of a stochastic susceptible-infected-recovered (SIR) model with contact tracing on a rooted random tree. Given the number of detectees per index case, our estimator allows us to determine the degree distribution of the random tree as well as the tracing probability. Since we do not discover all infectees via contact tracing, this estimation is non-trivial. To keep things simple and stable, we develop an approximation suited for realistic situations (the contact tracing probability is small, or the probability of detecting index cases is small). In this approximation, the only epidemiological parameter entering the estimator is the basic reproduction number R0. The estimator is tested in a simulation study and applied to COVID-19 contact tracing data from India. The simulation study underlines the efficiency of the method. For the empirical COVID-19 data, we are able to compare different degree distributions and perform a sensitivity analysis. We find that a power-law and a negative binomial degree distribution in particular fit the data well and that the tracing probability is rather large. The sensitivity analysis shows no strong dependency on the reproduction number.
Keywords: stochastic SIR model on graph, contact tracing, branching process, parameter inference
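A toy maximum-likelihood sketch in the spirit of the abstract, under the strong simplifying assumption that each index case's contact degree is observed and the detectee count is binomial in it; the paper's actual estimator works on a rooted random tree with unobserved degrees and uses R0, which this toy does not reproduce.

```python
# Toy ML estimate of the tracing probability from detectee counts, assuming (unlike the
# paper) that the degree of each index case is known. Degrees are drawn from a negative
# binomial distribution, one of the candidate distributions mentioned in the abstract.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom, nbinom

rng = np.random.default_rng(0)
true_tracing_prob = 0.6
degrees = nbinom.rvs(5, 0.5, size=2000, random_state=rng)          # contacts per index case
detectees = binom.rvs(degrees, true_tracing_prob, random_state=rng)  # traced contacts

def neg_log_likelihood(p):
    return -np.sum(binom.logpmf(detectees, degrees, p))

result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6), method="bounded")
print("estimated tracing probability:", round(result.x, 3))
```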
Procedia PDF Downloads 78
4697 The Effect of Artificial Intelligence on Food and Beverages
Authors: Remon Karam Zakry Kelada
Abstract:
This survey research aims to examine the standard of service quality of food and beverage service staff in the hotel business by studying the service standard of three sample hotels: Siam Kempinski Hotel Bangkok, Four Seasons Hotel Chiang Mai, and Banyan Tree Phuket. In order to find the international service standard of food and beverage service, triangulated research, i.e., quantitative, qualitative, and survey methods, was employed. In this research, questionnaires and in-depth interviews were used to obtain information on the sequences and methods of service. There were three parts of modified questionnaires to measure service quality and guest satisfaction, covering service facilities, attentiveness, responsibility, reliability, and circumspection. This study used simple random sampling to derive subjects, and the return rate of the questionnaires was 70%, or 280. Data were analyzed with SPSS to find the arithmetic mean, SD, and percentage, and were compared using t-tests and one-way ANOVA. The results revealed that the service quality of the three hotels was at the international level, which can create high satisfaction among international customers. Recommendations from the research were to maintain the good service quality and to improve some dimensions of service quality, such as reliability. Training in service standards, product knowledge, and new technology should be provided for employees. Furthermore, in order to develop the service quality of the industry, training collaboration between hotel companies and academic institutions in food and beverage service should be considered.
Keywords: food and beverage staff, food poisoning, food production, hygiene knowledge, BPA, health, regulations, toxicity, service standard, food and beverage department, sequence of service, service method
Procedia PDF Downloads 34
4696 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in subsequent risk assessment of different types of structures. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates potential benefits of employing other machine learning techniques in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The main reason for choosing this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, as well as usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and Random Forest in particular outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
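A hedged sketch of a random-forest ground-motion model of the kind described above; the synthetic records and the feature set (magnitude, hypocentral distance, Vs30) are placeholders, and the event- and site-level random-effects treatment is not reproduced here.

```python
# Random forest regressing ln(PGA) on magnitude, hypocentral distance and a site proxy.
# The synthetic data only mimic the magnitude/distance ranges quoted in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 4528
magnitude = rng.uniform(3.0, 5.8, n)
distance = rng.uniform(4.0, 500.0, n)      # hypocentral distance, km
vs30 = rng.uniform(200.0, 900.0, n)        # site-condition proxy, m/s
# Synthetic ln(PGA) with magnitude scaling, geometric attenuation and a site term plus noise.
ln_pga = 1.2 * magnitude - 1.6 * np.log(distance) - 0.4 * np.log(vs30 / 760.0) + rng.normal(0, 0.5, n)

X = np.column_stack([magnitude, distance, vs30])
X_train, X_test, y_train, y_test = train_test_split(X, ln_pga, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=500, min_samples_leaf=5, random_state=0)
model.fit(X_train, y_train)
print("test R^2:", round(r2_score(y_test, model.predict(X_test)), 3))
print("feature importances (M, R, Vs30):", model.feature_importances_)
```

No functional form is imposed; the magnitude scaling and distance dependency emerge from the trees, which is the behaviour the abstract highlights.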
Procedia PDF Downloads 122
4695 Phenomena-Based Approach for Automated Generation of Process Options and Process Models
Authors: Parminder Kaur Heer, Alexei Lapkin
Abstract:
Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may be able to attain higher efficiency. However, very few PI options are generally considered. This is because processes are typically analysed at a unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all the intensification options can be described by their enhancement. The objective of the current work is thus the generation of numerous process alternatives based on phenomena and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. For example, separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or can enhance the effectiveness of the process, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the combinations that are meaningless. For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e., it might perform reaction or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function, and combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, which are formed by a list of phenomena for each function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to get the process model. The most promising process options are then chosen subject to a performance criterion, for example purity of product, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
Keywords: phenomena, process intensification, process models, process options
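A toy illustration of the enumerate-and-screen step described above; the phenomena list and the single feasibility rule (phase change requires energy transfer) are placeholder assumptions, not the authors' full knowledge base.

```python
# Enumerate phenomena combinations, screen out infeasible ones, and encode the survivors
# as binary vectors of the kind handed to the model generation algorithm.
from itertools import combinations

phenomena = ["mixing", "reaction", "energy_transfer", "phase_change_VL", "phase_contact_LL"]

def is_feasible(combo):
    # Example screening rule from the text: phase-change phenomena need energy transfer present.
    if "phase_change_VL" in combo and "energy_transfer" not in combo:
        return False
    return True

options = []
for size in range(1, len(phenomena) + 1):
    for combo in combinations(phenomena, size):
        if is_feasible(combo):
            options.append(combo)

# Binary (1, 0) encoding of each surviving combination.
binary_options = [[1 if p in combo else 0 for p in phenomena] for combo in options]
print(len(options), "feasible phenomena combinations")
print(binary_options[:3])
```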
Procedia PDF Downloads 232
4694 Optimized Techniques for Reducing the Reactive Power Generation in Offshore Wind Farms in India
Authors: Pardhasaradhi Gudla, Imanual A.
Abstract:
Electrical power generated offshore needs to be transmitted to the onshore grid using subsea cables. Long subsea cables produce reactive power, which should be compensated in order to limit transmission losses, to optimize the transmission capacity, and to keep the grid voltage within safe operational limits. The installation cost of a wind farm includes the structure design cost and the electrical system cost. India has targeted achieving 175 GW of renewable energy capacity by 2022, including offshore wind power generation. Because sea depths are greater in India, the installation cost will be higher than in European countries, where offshore wind energy is already being generated successfully, so innovations are required to reduce offshore wind power project costs. This paper presents optimized techniques to reduce the installation cost of an offshore wind farm with respect to the electrical transmission system. It provides techniques for increasing the current-carrying capacity of the subsea cable by decreasing its reactive power generation (capacitance effect). Many methods for reactive power compensation in wind power plants are already in use, and the main reason compensation is needed is the capacitance effect of the subsea cable. If we diminish the capacitance of the cable, then the requirement for reactive power compensation will be reduced or optimized by avoiding the intermediate substation at the midpoint of the transmission network.
Keywords: offshore wind power, optimized techniques, power system, subsea cable
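The charging (reactive) power of a cable is commonly estimated as Q = 2πf·C·U²·L, which shows directly why reducing the effective cable capacitance reduces the compensation requirement; the cable parameters below are illustrative assumptions, not data from the paper.

```python
# Back-of-the-envelope estimate of subsea cable charging power, Q = 2*pi*f*C*U^2*L.
# All parameter values are illustrative assumptions.
import math

f = 50.0             # system frequency, Hz
U = 220e3            # line-to-line export voltage, V
C_per_km = 0.18e-6   # per-phase capacitance, F/km (typical order for XLPE export cables)
length_km = 80.0     # subsea route length, km

Q_var = 2 * math.pi * f * C_per_km * length_km * U**2
print(f"Charging reactive power: {Q_var / 1e6:.0f} MVAr")
# Halving the effective capacitance halves the charging power and frees up
# current-carrying capacity of the cable for active power.
print(f"With capacitance reduced by half: {Q_var / 2e6:.0f} MVAr")
```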
Procedia PDF Downloads 193
4693 Sensitivity Analysis for 14 Bus Systems in a Distribution Network with Distributed Generators
Authors: Lakshya Bhat, Anubhav Shrivastava, Shiva Rudraswamy
Abstract:
There has been formidable interest in the area of Distributed Generation in recent times. A wide number of loads are addressed by Distributed Generators, which also have better efficiency. The major disadvantage of Distributed Generation, voltage control, is highlighted in this paper. The paper addresses voltage control at buses in the IEEE 14-bus system by regulating reactive power. An analysis is carried out by selecting the most optimum location for placing the Distributed Generators through load flow analysis and observing where the voltage profile rises. MATLAB programming is used for simulation of the voltage profile in the respective buses after introduction of the DGs. A tolerance limit of +/-5% of the base value has to be maintained. To maintain the tolerance limit, three methods are used, and a sensitivity analysis of the three voltage control methods is carried out to determine the priority among them.
Keywords: distributed generators, distributed system, reactive power, voltage control, sensitivity analysis
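A minimal load-flow sketch of the DG-placement idea, assuming the open-source pandapower package and its bundled IEEE 14-bus case rather than the MATLAB code used in the paper; the DG size and candidate bus are placeholders.

```python
# Compare bus voltage profiles before and after adding a DG injection to the IEEE 14-bus case.
# pandapower is an assumption of this sketch; the paper's own simulations were in MATLAB.
import pandapower as pp
import pandapower.networks as pn

net = pn.case14()                      # bundled IEEE 14-bus test case
pp.runpp(net)                          # base-case load flow
base_voltages = net.res_bus.vm_pu.copy()

candidate_bus = 9                      # placeholder location for the distributed generator
pp.create_sgen(net, bus=candidate_bus, p_mw=20.0, q_mvar=10.0, name="DG")
pp.runpp(net)

for bus in net.bus.index:
    print(f"bus {bus}: {base_voltages[bus]:.4f} -> {net.res_bus.vm_pu[bus]:.4f} p.u.")
# Buses drifting outside the +/-5% band would be the targets of the reactive-power controls.
```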
Procedia PDF Downloads 703
4692 Application of Multilayer Perceptron and Markov Chain Analysis Based Hybrid-Approach for Predicting and Monitoring the Pattern of LULC Using Random Forest Classification in Jhelum District, Punjab, Pakistan
Authors: Basit Aftab, Zhichao Wang, Feng Zhongke
Abstract:
Land Use and Land Cover Change (LULCC) is a critical environmental issue that has significant effects on biodiversity, ecosystem services, and climate change. This study examines the spatiotemporal dynamics of land use and land cover (LULC) across a three-decade period (1992–2022) in the district. The goal is to support sustainable land management and urban planning by utilizing a combination of remote sensing, GIS data, and observations from Landsat satellites 5 and 8 to provide precise predictions of the trajectory of urban sprawl. In order to forecast LULCC patterns, this study suggests a hybrid strategy that combines the Random Forest method with Multilayer Perceptron (MLP) and Markov Chain analysis; to predict the dynamics of LULC change for the year 2035, a hybrid technique based on Multilayer Perceptron and Markov Chain Model Analysis (MLP-MCA) was employed. The area of developed land has increased significantly, while the amounts of bare land, vegetation, and forest cover have all decreased, because the principal land types have changed due to population growth and economic expansion. The study also found that, between 1998 and 2023, the built-up area increased by 468 km² as natural land cover was replaced. It is estimated that urbanization will cover 25.04% of the study area by 2035. The performance of the model was confirmed with an overall accuracy of 90% and a kappa coefficient of around 0.89. It is important to use advanced predictive models to guide sustainable urban development strategies, and the study provides valuable insights for policymakers, land managers, and researchers to support sustainable land use planning, conservation efforts, and climate change mitigation strategies.
Keywords: land use land cover, Markov chain model, multi-layer perceptron, random forest, sustainable land, remote sensing
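A sketch of the Markov-chain step of the MLP-MCA hybrid: a transition probability matrix is estimated from two classified maps and used to project class shares forward. The class labels and synthetic rasters are placeholders, and the MLP and Random Forest components of the hybrid are not reproduced.

```python
# Estimate a LULC transition matrix from two classified rasters and project class shares forward.
import numpy as np

classes = ["built_up", "vegetation", "forest", "bare_land", "water"]
rng = np.random.default_rng(1)
lulc_start = rng.integers(0, len(classes), size=(200, 200))   # stand-ins for classified maps
lulc_end = rng.integers(0, len(classes), size=(200, 200))

# Cross-tabulate pixel-wise transitions and normalise rows into probabilities.
counts = np.zeros((len(classes), len(classes)))
for i, j in zip(lulc_start.ravel(), lulc_end.ravel()):
    counts[i, j] += 1
transition_matrix = counts / counts.sum(axis=1, keepdims=True)

# Project the most recent class shares one transition step ahead (the study's target year is 2035).
shares_now = np.bincount(lulc_end.ravel(), minlength=len(classes)) / lulc_end.size
shares_future = shares_now @ transition_matrix
for name, now, future in zip(classes, shares_now, shares_future):
    print(f"{name}: {now:.3f} -> {future:.3f}")
```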
Procedia PDF Downloads 34
4691 An Intergenerational Study of Iranian Migrant Families in Australia: Exploring Language, Identity, and Acculturation
Authors: Alireza Fard Kashani
Abstract:
This study reports on the experiences and attitudes of six Iranian migrant families, from the two groups of asylum seekers and skilled workers, with regard to their language, identity, and acculturation in Australia. The participants included first-generation parents and 1.5-generation adolescents who had lived in Australia for a minimum of three years. For this investigation, Mendoza’s (1984, 2016) acculturation model, as well as poststructuralist views of identity, were employed. The semi-structured interview results highlight that Iranian parents and adolescents face low degrees of intergenerational conflict in most domains of their acculturation. However, the structural and legal patterns in Australia have caused some internal conflicts for the parents, especially fathers (e.g., over their power status within the family or their children’s freedom). Furthermore, while most participants reported ‘cultural eclecticism’ as their preferred acculturation orientation, female participants seemed to be more eclectic than their male counterparts, who showed an inclination towards keeping more aspects of their home culture. This finding highlights a meaningful effort on the part of husbands to reconsider the traditional male-dominated customs they had in Iran in order for their married lives to continue well in Australia. As for identity, not only the parents but also the adolescents proudly identified themselves as Persians. In addition, with respect to linguistic behaviour, almost all adolescents showed enthusiasm to retain the Persian language at home in order to maintain contact with their relatives and friends in Iran and to enjoy the many other benefits the language may offer them in the future.
Keywords: acculturation, asylum seekers, identity, intergenerational conflicts, language, skilled workers, 1.5 generation
Procedia PDF Downloads 239
4690 Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems
Authors: Nyeng P. Gyang
Abstract:
Even though past, current, and future trends suggest that multicore and cloud computing systems are increasingly prevalent, this class of parallel systems is nonetheless underutilized in general and barely used for research on employing parallel Delaunay triangulation for parallel surface modeling and generation in particular. The performances of actual (physical) and virtual (cloud) multicore systems at executing various algorithms, which implement various parallelization strategies of the incremental insertion technique of the Delaunay triangulation algorithm, were evaluated. T-tests were run on the collected data in order to determine whether differences in various performance metrics (including execution time, speedup, and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine at executing the same programs for the various parallelization strategies. The results, which furnish the scalability behaviors of the various parallelization strategies, also show that some of the differences between the performances of these systems, during different runs of the algorithms, were statistically significant. A few pseudo-superlinear speedup results, computed from the raw data collected, are not true superlinear speedup values; these pseudo-superlinear values, which arise from one way of computing speedups, disappear and give way to the asymmetric speedups that actually occur in the experiments performed.
Keywords: cloud computing systems, multicore systems, parallel Delaunay triangulation, parallel surface modeling and generation
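A short sketch of how the speedup, efficiency, and t-test comparison described above can be computed; the timing numbers and core count are made-up placeholders, not measurements from the study.

```python
# Speedup = T_serial / T_parallel, efficiency = speedup / cores, plus a two-sample
# Welch t-test between the actual and virtual machines. Timings are placeholders.
import numpy as np
from scipy import stats

cores = 8
serial_time = 78.0
# Hypothetical repeated run times (seconds) of the same parallel triangulation program.
actual_machine = np.array([12.1, 11.8, 12.4, 12.0, 11.9])
virtual_machine = np.array([23.9, 24.5, 23.6, 24.8, 24.1])

for name, times in [("actual", actual_machine), ("virtual", virtual_machine)]:
    speedup = serial_time / times.mean()
    efficiency = speedup / cores
    print(f"{name}: speedup {speedup:.2f}, efficiency {efficiency:.2f}")

t_stat, p_value = stats.ttest_ind(actual_machine, virtual_machine, equal_var=False)
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

Computing speedup against a serial time measured on a different (faster) machine is one way pseudo-superlinear values can appear, which is the pitfall the abstract points out.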
Procedia PDF Downloads 206
4689 Environmental and Space Travel
Authors: Alimohammad
Abstract:
Man's entry into space is one of the most important results of developments and advances made in information technology. But this human step, like many of our other actions, is not free of danger, as space pollution has today become a major problem for the global community. Paying attention to the issue of preserving the space environment is in the interest of all governments and mankind, and ignoring it can increase the possibility of conflict between countries. What many space powers still do not pay attention to is that the freedom to explore and exploit space should be limited by banning pollution of the space environment. Therefore, freedom and prohibition are complementary and should not be considered conflicting concepts. The legal system created by the current space treaties for the effective preservation of the space environment has failed. Customary international law also lacks effective provisions and sufficient guarantees of enforcement to prevent damage to the environment. Considering the responsibility of each generation for the healthy transfer of the environment to the next generation, and considering the concept of sustainable development, the space environment must also be passed on to future generations in a healthy and undamaged manner.
Keywords: law, space, environment, responsibility
Procedia PDF Downloads 85
4688 The Greek Diaspora in Australia: Identity and Transnational Identity
Authors: Panayiota Romios
Abstract:
As the use of 'diaspora' has proliferated in the last decade, its meaning has been stretched in various directions. Current diaspora frames of identity representation do not adequately capture the complexities of the everyday lived experiences of transnational individuals and groups. This paper presents the findings of a qualitative research project conducted in Melbourne, Australia, with second-generation Greek Australians. It analyses the forms of intercultural identities of second-generation Greek Australians returning to Australia post-2008, after living in Greece for an extended period of time. The discussion highlights key characteristics in relation to diaspora-homeland ties, seeking to denaturalise the commonplace assumptions and imaginations about the cultures and identities of Greek Australian diaspora communities and to probe the relevance of identity markers such as country of origin, nationality, ethnicity, ethnic origin, language, and mother tongue. The definition of diaspora experienced in this transnational lexicon is, interestingly, quite distinct from original articulations and also from that of others returning 'home'.
Keywords: diaspora, identity, migration, displacement
Procedia PDF Downloads 361
4687 Study on Energy Performance Comparison of Information Centric Network Based on Difference of Network Architecture
Authors: Takumi Shindo, Koji Okamura
Abstract:
The first generation of the wide area network was the circuit-centric network, in which how the optimal circuit could be established was the most important issue for obtaining the best performance. This architecture succeeded for line-based telephone systems. The second generation was the host-centric network, and the Internet, based on this architecture, has succeeded worldwide and become a new social infrastructure. The emerging architecture bases the network on the information itself rather than on its location; this future network is called the information-centric network (ICN). ICN has been researched by many projects, and different architectures for its implementation have been proposed. The goal of this study is to compare the performances of those ICN architectures. In this paper, the authors propose a general ICN model which can represent two typical ICN architectures and compare their communication performances using request routing. Finally, simulation results are shown. We also assume that this network architecture should be adapted to energy on-demand routing.
Keywords: ICN, information centric network, CCN, energy
Procedia PDF Downloads 337
4686 LACGC: Business Sustainability Research Model for Generations Consumption, Creation, and Implementation of Knowledge: Academic and Non-Academic
Authors: Satpreet Singh
Abstract:
This paper introduces the new LACGC model for sustaining academic and non-academic businesses into future educational and organizational generations. The consumption of knowledge and the creation of new knowledge are a strength and focal interest of all academic and non-academic organizations. Implementing newly created knowledge sustains a business into the next generation, with growth and without detriment. Existing models like the scholar-practitioner model and organizational knowledge creation models focus specifically on either the academic or the non-academic side, not both. The LACGC model can be used for both academic and non-academic purposes at the domestic or international level. Researchers and scholars play a substantial role in finding literature and practice gaps in academic and non-academic disciplines. The LACGC model places no restriction on the number of recurrences, because the consumption, creation, and implementation of new ideas, disciplines, systems, and knowledge is a never-ending process and must continue from one generation to the next.
Keywords: academics, consumption, creation, generations, non-academics, research, sustainability
Procedia PDF Downloads 197
4685 Modeling of Strong Motion Generation Areas of the 2011 Tohoku, Japan Earthquake Using Modified Semi-Empirical Technique Incorporating Frequency Dependent Radiation Pattern Model
Authors: Sandeep, A. Joshi, Kamal, Piu Dhibar, Parveen Kumar
Abstract:
In the present work, strong ground motion has been simulated using a modified semi-empirical technique (MSET) with a frequency-dependent radiation pattern model. Joshi et al. (2014) modified the semi-empirical technique to incorporate the modeling of strong motion generation areas (SMGAs), and a frequency-dependent radiation pattern model is applied to simulate high-frequency ground motion more precisely. Identified SMGAs (Kurahashi and Irikura, 2012) of the 2011 Tohoku earthquake (Mw 9.0) were modeled using this modified technique. Records were simulated for both frequency-dependent and constant radiation pattern functions, and the simulated records for both cases were compared with observed records in terms of peak ground acceleration and pseudo-acceleration response spectra at different stations. Comparison of simulated and observed records in terms of root mean square error suggests that the method is capable of simulating records that match over a wide frequency range for this earthquake and bear a realistic appearance in terms of shape and strong motion parameters. The results confirm the efficacy and suitability of the rupture model defined by five SMGAs for the developed modified technique.
Keywords: strong ground motion, semi-empirical, strong motion generation area, frequency dependent radiation pattern, 2011 Tohoku Earthquake
Procedia PDF Downloads 537
4684 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2
Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk
Abstract:
Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots for three sites across Germany. These sites represent a North-South gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites, and the performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² = 0.5), predictions of biodiversity indices were poor (r² = 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and of ensuring spatial independence to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.
Keywords: ecosystem services, grassland management, machine learning, remote sensing
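A minimal sketch of a feed-forward DNN regressing biomass on Sentinel-2 surface reflectance; the synthetic plot data, the 10-band input, and the layer sizes are assumptions rather than the study's configuration.

```python
# Feed-forward network predicting biomass from surface-reflectance bands.
# The 150 synthetic plots only mimic the study's sample size; values are made up.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)
n_plots, n_bands = 150, 10
reflectance = rng.uniform(0.0, 0.5, size=(n_plots, n_bands))   # stand-in Sentinel-2 bands
biomass = 800 * reflectance[:, 7] - 300 * reflectance[:, 3] + rng.normal(0, 30, n_plots)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_bands,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(reflectance, biomass, epochs=200, batch_size=16, validation_split=0.2, verbose=0)
print("in-sample MSE:", model.evaluate(reflectance, biomass, verbose=0))
```

Feeding the raw reflectance bands directly, as the abstract notes, lets the network learn its own feature combinations instead of relying on hand-crafted spectral indices.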
Procedia PDF Downloads 218
4683 Electrolysis Ship for Green Hydrogen Production and Possible Applications
Authors: Julian David Hunt, Andreas Nascimento
Abstract:
Green hydrogen is the most environmentally friendly, renewable way to produce hydrogen. However, an important challenge in making hydrogen a competitive energy carrier is a constant supply of renewable energy, such as solar, wind, and hydropower. Given that the electricity generation potential of these sources varies seasonally and interannually, this paper proposes installing an electrolysis hydrogen production plant on a ship and moving the ship to locations where electricity is cheap or where the seasonal potential for renewable generation is high. An example application of the electrolysis ship is to produce green hydrogen with hydropower from the North region of Brazil and then sail to the Northeast region of Brazil and generate hydrogen using excess electricity from offshore wind power. The electrolysis ship concept is interesting because it has the flexibility to produce green hydrogen using the cheapest renewable electricity available in the market.
Keywords: green hydrogen, electrolysis ship, renewable energies, seasonal variations
Procedia PDF Downloads 162
4682 Sensitivity Analysis for 14 Bus Systems in a Distribution Network with Distributed Generators
Authors: Lakshya Bhat, Anubhav Shrivastava, Shivarudraswamy
Abstract:
There has been formidable interest in the area of Distributed Generation in recent times. A wide number of loads are addressed by Distributed Generators, which also have better efficiency. The major disadvantage of Distributed Generation, voltage control, is highlighted in this paper. The paper addresses voltage control at buses in the IEEE 14-bus system by regulating reactive power. An analysis is carried out by selecting the most optimum location for placing the Distributed Generators through load flow analysis and observing where the voltage profile rises. MATLAB programming is used for simulation of the voltage profile in the respective buses after introduction of the DGs. A tolerance limit of +/-5% of the base value has to be maintained. To maintain the tolerance limit, three methods are used, and a sensitivity analysis of the three voltage control methods is carried out to determine the priority among them.
Keywords: distributed generators, distributed system, reactive power, voltage control, sensitivity analysis
Procedia PDF Downloads 587
4681 Unconventional Explorers: Gen Z Travelers Redefining the Travel Experience
Authors: M. Panidou, F. Kilipiris, E. Christou, K. Alexandris
Abstract:
This study investigates the travel preferences of Generation Z (born between 1996 and 2012), focusing on their inclination towards unique and unconventional travel experiences, their prioritization of authentic cultural immersion and local experiences over traditional tourist attractions, and the value they place on flexibility and spontaneity in travel plans. By examining these aspects, the research aims to provide insights into the preferences and behaviors of Generation Z travelers, contributing to a better understanding of their travel choices and informing the tourism industry in catering to their needs and desires. Secondary data were gathered from academic literature and industry reports to offer a thorough study of the topic. A quantitative method was used, and primary data were collected through an online questionnaire. The study's sample was one hundred Greek people between the ages of eighteen and twenty-seven, and SPSS software was used to assist in the analysis of the data. The findings show that Gen Z is attracted to unusual and distinctive travel experiences, prioritizes genuine cultural immersion over typical tourist attractions, and highly values flexibility in travel decision-making. This research contributes to a deeper understanding of how Gen Z travelers are reshaping the travel industry. Travel companies, marketers, and destination management organizations will find the findings useful in adjusting their products to suit this influential demographic's changing demands and preferences. Considering the limitations of the sample size, future studies could expand the sample to include individuals from different cultural backgrounds for a more comprehensive understanding.
Keywords: cultural immersion, flexibility, generation Z, travel preferences, unique experiences
Procedia PDF Downloads 60
4680 FoodXervices Inc.: Corporate Responsibility and Business as Usual
Authors: Allan Chia, Gabriel Gervais
Abstract:
The case study of FoodXervices Inc shows how businesses need to reinvent and transform themselves in order to adapt and thrive; it also features how an SME can devote resources to CSR causes. The company, Ng Chye Mong, was set up in 1937 and went through ups and downs, encountering several failures and successes. In the 1970s, the management of the company was entrusted to the next generation, who continued to manage and expand the business. In early 2003, the business encountered several challenges. A pair of siblings from the next generation of the Ng family joined the business full-time, and together they set out to transform the company into FoodXervices Inc. In 2012, they started a charity, Food Bank Singapore Pte Ltd. The authors conducted case study research involving a series of in-depth interviews with the business owner and staff. This case study is an example of how to run a business and coordinate a charity concurrently while mobilising the same resources. The uniqueness of this case lies in the operational synergy of the business and the charity in promoting corporate responsibility causes and initiatives in Singapore.
Keywords: family-owned business, charity, corporate social responsibility, branding
Procedia PDF Downloads 439
4679 Recyclable Household Solid Waste Generation and Collection in Beijing, China
Authors: Tingting Liu, Yufeng Wu, Xi Tian, Yu Gong, Tieyong Zuo
Abstract:
The household solid waste generated by households in Beijing is increasing quickly due to rapid population growth and lifestyle changes. However, there are no rigorous data on the generation and collection of recyclable household solid waste, and the Beijing city government needs this information to make appropriate policies and plans for waste management. To address this need, we undertook the first comprehensive study of recyclable household solid waste for Beijing. We carried out a survey of 500 families across sixteen districts in Beijing. We also analyzed the quantities, spatial distribution, and categories of collected waste handled by curbside recyclers and permanent recycling centers for 340 of the 9,797 city-defined residential areas of Beijing. From our results, we estimate that the total quantity of recyclable household solid waste generated by Beijing households in 2013 was 1.8 million tonnes, of which 71.6% was collected. The main categories were waste paper (24.4%), waste glass bottles (23.7%), and waste furniture (14.3%). The recycling rate varied among the different kinds of municipal solid waste. Based on our study, we also estimate there were 22.8 thousand curbside recyclers and 5.7 thousand permanent recycling centers in Beijing. The problems of the household solid waste collection system were inadequacies of authorized collection centers, skewed ratios of curbside recyclers to authorized permanent recycling centers, weak recycling awareness among residents, and a lack of recycling resource statistics and an appraisal system. Based on these problems, we put forward suggestions to improve household solid waste management.
Keywords: municipal waste, recyclable waste, waste categories, waste collection
Procedia PDF Downloads 296
4678 Predicting the Diagnosis of Alzheimer’s Disease: Development and Validation of Machine Learning Models
Authors: Jay L. Fu
Abstract:
Patients with Alzheimer's disease progressively lose their memory and thinking skills and, eventually, the ability to carry out simple daily tasks. The disease is irreversible, but early detection and treatment can slow its progression. In this research, publicly available MRI data and demographic data from 373 MRI imaging sessions were utilized to build models to predict dementia. Various machine learning models, including logistic regression, k-nearest neighbors, support vector machine, random forest, and neural network, were developed. Data were divided into training and testing sets, where the training sets were used to build the predictive models and the testing sets were used to assess the accuracy of prediction. Key risk factors were identified, and the various models were compared to arrive at the best prediction model. Among these models, the random forest model appeared to be the best, with an accuracy of 90.34%. MMSE, nWBV, and gender were the three most important contributing factors to the detection of Alzheimer's. Across all the models used, the percentage of testing inputs for which at least 4 of the 5 models shared the same diagnosis was 90.42%. These machine learning models allow early detection of Alzheimer's with good accuracy, which ultimately leads to early treatment of these patients.
Keywords: Alzheimer's disease, clinical diagnosis, magnetic resonance imaging, machine learning prediction
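A sketch of the five-model comparison and the 4-of-5 agreement measure described above; the synthetic features (MMSE, nWBV, gender) only mimic the variables named in the abstract and carry no clinical meaning.

```python
# Train the five model families and compute the share of test cases where at least
# four of the five models give the same diagnosis. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n = 373
mmse = rng.uniform(15, 30, n)
nwbv = rng.uniform(0.65, 0.85, n)
gender = rng.integers(0, 2, n)
X = np.column_stack([mmse, nwbv, gender])
y = ((mmse < 26) & (nwbv < 0.75)).astype(int)        # synthetic dementia label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(random_state=0),
    "neural_net": MLPClassifier(max_iter=2000, random_state=0),
}
predictions = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    predictions[name] = model.predict(X_test)
    print(name, "accuracy:", round(accuracy_score(y_test, predictions[name]), 3))

# Share of test cases where at least 4 of the 5 models agree on the diagnosis.
votes = np.column_stack(list(predictions.values()))
ones = votes.sum(axis=1)
agreement = np.mean(np.maximum(ones, 5 - ones) >= 4)
print("share of cases with >= 4/5 model agreement:", round(agreement, 4))
```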
Procedia PDF Downloads 143